WorldWideScience

Sample records for video oculography system

  1. A low-cost video-oculography system for vestibular function testing.

    Science.gov (United States)

    Jihwan Park; Youngsun Kong; Yunyoung Nam

    2017-07-01

    In order to keep the visual scene in focus during head movements, the vestibulo-ocular reflex causes the eyes to move in the direction opposite to the head movement. Disorders of the vestibular system degrade vision, causing abnormal nystagmus and dizziness. To diagnose abnormal nystagmus, various approaches have been reported, including rotating chair tests and videonystagmography. However, these tests are unsuitable for home use due to their high costs. Thus, a low-cost video-oculography system is needed to obtain clinical features at home. In this paper, we present a low-cost video-oculography system using an infrared camera and a Raspberry Pi board for tracking the pupils and evaluating the vestibular system. Horizontal eye movement is derived from video data obtained with an infrared camera and infrared light-emitting diodes, and the velocity of head rotation is obtained from a gyroscope sensor. Each pupil was extracted using a morphology operation and a contour detection method. Rotatory chair tests were conducted with our developed device. To evaluate our system, gain, asymmetry, and phase were measured and compared with the System 2000. The average IQR errors of gain, phase, and asymmetry were 0.81, 2.74, and 17.35, respectively. We showed that our system is able to measure these clinical features.
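
    The pupil-extraction step named above (a morphology operation followed by contour detection) can be illustrated with a short OpenCV sketch. This is not the authors' implementation; the threshold value and kernel size are assumed for illustration, and a real system would also need glint rejection and calibration.

    ```python
    import cv2
    import numpy as np

    def pupil_center(gray_frame, threshold=40, kernel_size=5):
        """Estimate the pupil center in an IR eye image (illustrative only)."""
        # Under IR illumination the pupil is the darkest region:
        # binarize, then clean the mask with a morphological opening.
        _, mask = cv2.threshold(gray_frame, threshold, 255, cv2.THRESH_BINARY_INV)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

        # Contour detection: keep the largest blob and take its centroid.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])
    ```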

  2. Simultaneous recordings of ocular microtremor and microsaccades with a piezoelectric sensor and a video-oculography system

    Directory of Open Access Journals (Sweden)

    Michael B. McCamy

    2013-02-01

    Our eyes are in continuous motion. Even when we attempt to fix our gaze, we produce so-called “fixational eye movements”, which include microsaccades, drift, and ocular microtremor (OMT). Microsaccades, the largest and fastest type of fixational eye movement, shift the retinal image from several dozen to several hundred photoreceptors and have equivalent physical characteristics to saccades, only on a smaller scale (Martinez-Conde, Otero-Millan & Macknik, 2013). OMT occurs simultaneously with drift and is the smallest of the fixational eye movements (∼1 photoreceptor width, >0.5 arcmin), with dominant frequencies ranging from 70 Hz to 103 Hz (Martinez-Conde, Macknik & Hubel, 2004). Due to OMT’s small amplitude and high frequency, the most accurate and stringent way to record it is the piezoelectric transduction method. Thus, OMT studies are far rarer than those focusing on microsaccades or drift. Here we conducted simultaneous recordings of OMT and microsaccades with a piezoelectric device and a commercial infrared video tracking system. We set out to determine whether OMT could help to restore perceptually faded targets during attempted fixation, and we also wondered whether the piezoelectric sensor could affect the characteristics of microsaccades. Our results showed that microsaccades, but not OMT, counteracted perceptual fading. We moreover found that the piezoelectric sensor affected microsaccades in a complex way, and that the oculomotor system adjusted to the stress brought on by the sensor by adjusting the magnitudes of microsaccades.
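
    For readers who want to experiment with video-based microsaccade detection of the kind used alongside the piezoelectric OMT recordings, the sketch below applies a generic median-based velocity-threshold detector. The multiplier and minimum-duration values are assumptions, and this is not necessarily the detection algorithm used in the study.

    ```python
    import numpy as np

    def detect_microsaccades(x, y, fs, lam=6.0, min_dur_ms=6.0):
        """Velocity-threshold microsaccade detector (illustrative sketch).

        x, y : gaze position traces in degrees, sampled at fs Hz.
        lam  : multiplier for the median-based velocity threshold (assumed value).
        Returns a list of (start_index, end_index) candidate events.
        """
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        vx = np.gradient(x) * fs          # horizontal velocity, deg/s
        vy = np.gradient(y) * fs          # vertical velocity, deg/s

        # Robust, median-based velocity thresholds for each axis.
        sx = np.sqrt(max(np.median(vx**2) - np.median(vx)**2, 1e-12))
        sy = np.sqrt(max(np.median(vy**2) - np.median(vy)**2, 1e-12))
        above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0

        # Group consecutive supra-threshold samples into candidate events.
        events, start = [], None
        min_samples = int(min_dur_ms * fs / 1000.0)
        for i, flag in enumerate(above):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                if i - start >= min_samples:
                    events.append((start, i - 1))
                start = None
        if start is not None and len(above) - start >= min_samples:
            events.append((start, len(above) - 1))
        return events
    ```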

  3. Evaluation of Quantitative Head Impulse Testing Using Search Coils Versus Video-oculography in Older Individuals

    Science.gov (United States)

    Agrawal, Yuri; Schubert, Michael C.; Migliaccio, Americo A.; Zee, David S.; Schneider, Erich; Lehnen, Nadine; Carey, John P.

    2015-01-01

    Objective To evaluate the validity of 2D video-oculography (VOG) compared with scleral search coils for horizontal AVOR gain estimation in older individuals. Study Design Cross-sectional validation study. Setting Tertiary care academic medical center. Patients Six individuals age 70 and older. Interventions Simultaneous eye movement recording with scleral search coil (over right eye) and EyeSeeCam VOG camera (over left eye) during horizontal head impulses. Main Outcome Measures Best estimate search coil and VOG horizontal AVOR gain, presence of compensatory saccades using both eye movement recording techniques. Results We observed a significant correlation between search coil and VOG best estimate horizontal AVOR gain (r = 0.86, p = 0.0002). We evaluated individual head impulses and found that the shapes of the head movement and eye movement traces from the coil and VOG systems were similar. Specific features of eye movements seen in older individuals, including overt and covert corrective saccades and anticompensatory eye movements, were captured by both the search coil and VOG systems. Conclusion These data suggest that VOG is a reasonable proxy for search coil eye movement recording in older subjects to estimate VOR gain and the approximate timing of corrective eye movements. VOG offers advantages over the conventional search coil method; it is portable and easy to use, allowing for quantitative VOR estimation in diverse settings such as a routine office-based practice, at the bedside, and potentially in larger scale population analyses. PMID:24080977
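
    As a rough illustration of the gain comparison described above, the following sketch derives a horizontal AVOR gain for one head impulse as the sign-inverted slope of eye velocity regressed on head velocity, then correlates per-subject gains from the two recording systems. It is a simplified stand-in, not the study's "best estimate" gain computation.

    ```python
    import numpy as np

    def avor_gain(head_vel, eye_vel):
        """Estimate horizontal AVOR gain for one head impulse (illustrative sketch).

        head_vel, eye_vel : angular velocity traces (deg/s) over the impulse window,
        with compensatory eye velocity opposite in sign to head velocity.
        """
        head_vel = np.asarray(head_vel, dtype=float)
        eye_vel = np.asarray(eye_vel, dtype=float)
        slope = np.polyfit(head_vel, eye_vel, 1)[0]
        return -slope

    def system_agreement(gains_coil, gains_vog):
        """Pearson correlation between per-subject gains from the two systems."""
        return np.corrcoef(gains_coil, gains_vog)[0, 1]
    ```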

  4. Fast vergence eye movements are disrupted in Parkinson's disease: A video-oculography study.

    Science.gov (United States)

    Hanuška, Jaromír; Bonnet, Cecilia; Rusz, Jan; Sieger, Tomáš; Jech, Robert; Rivaud-Péchoux, Sophie; Vidailhet, Marie; Gaymard, Bertrand; Růžička, Evžen

    2015-07-01

    Blurred near vision is a common non-motor symptom in patients with Parkinson's disease (PD); however, detailed characterization of vergence eye movements (VEM) is lacking. Convergence and divergence were examined in 18 patients with PD and 18 control subjects using infrared video-oculography. The VEM metrics analyzed included latency, velocity, and accuracy in the vertical and horizontal planes. The latency of convergence and divergence was significantly increased in PD subjects. Additionally, divergence was slow and hypometric, while other convergence metrics were similar to controls. We provide evidence in favor of disrupted VEM in PD. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. High-Speed Video-Oculography for Measuring Three-Dimensional Rotation Vectors of Eye Movements in Mice.

    Science.gov (United States)

    Imai, Takao; Takimoto, Yasumitsu; Takeda, Noriaki; Uno, Atsuhiko; Inohara, Hidenori; Shimada, Shoichi

    2016-01-01

    The mouse is the most commonly used animal model in biomedical research because of recent advances in molecular genetic techniques. Studies related to eye movement in mice are common in fields such as ophthalmology relating to vision, neuro-otology relating to the vestibulo-ocular reflex (VOR), neurology relating to the cerebellum's role in movement, and psychology relating to attention. Recording eye movements in mice, however, is technically difficult. We developed a new algorithm for analyzing the three-dimensional (3D) rotation vector of eye movement in mice using high-speed video-oculography (VOG). The algorithm made it possible to analyze the gain and phase of the VOR using the eye's angular velocity around the axis of eye rotation. When mice were rotated at 0.5 Hz and 2.5 Hz around the earth's vertical axis with their heads in a 30° nose-down position, the vertical components of their left eye movements were in phase with the horizontal components. The VOR gain was 0.42 at 0.5 Hz and 0.74 at 2.5 Hz, and the phase lead of the eye movement against the turntable was 16.1° at 0.5 Hz and 4.88° at 2.5 Hz. To the best of our knowledge, this is the first report of this algorithm being used to calculate a 3D rotation vector of eye movement in mice using high-speed VOG. We developed a technique for analyzing the 3D rotation vector of eye movements in mice with a high-speed infrared CCD camera. We concluded that the technique is suitable for analyzing eye movements in mice. We also include with this article C++ source code that calculates the 3D rotation vectors of the eye position from the two-dimensional coordinates of the pupil and the iris freckle in the image.
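
    The core quantity in the abstract above is the 3D rotation vector of eye position. A minimal sketch of that conversion is shown below, assuming a Fick rotation sequence (horizontal, then vertical, then torsional); the authors' published C++ code works from pupil and iris-freckle image coordinates and may use a different convention.

    ```python
    import numpy as np

    def rotation_vector_from_fick(horizontal_deg, vertical_deg, torsion_deg):
        """Convert Fick angles of eye position into a 3D rotation vector (sketch).

        The rotation vector is r = tan(theta/2) * n, where theta and n are the angle
        and axis of the single rotation taking the reference position to the current
        eye position. The Fick sequence used here is an assumption.
        """
        h, v, t = np.radians([horizontal_deg, vertical_deg, torsion_deg])

        def rot_z(a):  # horizontal (yaw)
            return np.array([[np.cos(a), -np.sin(a), 0],
                             [np.sin(a),  np.cos(a), 0],
                             [0, 0, 1]])

        def rot_y(a):  # vertical (pitch)
            return np.array([[np.cos(a), 0, np.sin(a)],
                             [0, 1, 0],
                             [-np.sin(a), 0, np.cos(a)]])

        def rot_x(a):  # torsion (roll)
            return np.array([[1, 0, 0],
                             [0, np.cos(a), -np.sin(a)],
                             [0, np.sin(a),  np.cos(a)]])

        R = rot_z(h) @ rot_y(v) @ rot_x(t)

        # Angle/axis from the rotation matrix, then scale by tan(theta/2).
        theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
        if np.isclose(theta, 0.0):
            return np.zeros(3)
        axis = np.array([R[2, 1] - R[1, 2],
                         R[0, 2] - R[2, 0],
                         R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
        return np.tan(theta / 2.0) * axis
    ```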

  6. Video Editing System

    Science.gov (United States)

    Schlecht, Leslie E.; Kutler, Paul (Technical Monitor)

    1998-01-01

    This is a proposal for a general-use system, based on the SGI IRIS workstation platform, for recording computer animation to videotape. In addition, this system would provide features for simple editing and enhancement. Described here are a list of requirements for the system and a proposed configuration including the SGI VideoLab Integrator, the VideoMedia VLAN animation controller, and the Pioneer rewritable laserdisc recorder.

  7. Robotic video photogrammetry system

    Science.gov (United States)

    Gustafson, Peter C.

    1997-07-01

    For many years, photogrammetry has been in use at TRW. During that time, needs have arisen for highly repetitive measurements. In an effort to satisfy these needs in a timely manner, a specialized Robotic Video Photogrammetry System (RVPS) was developed by TRW in conjunction with outside vendors. The primary application for the RVPS has strict accuracy requirements that demand significantly more images than the previously used film-based system. The time involved in taking these images was prohibitive but by automating the data acquisition process, video techniques became a practical alternative to the more traditional film-based approach. In fact, by applying video techniques, measurement productivity was enhanced significantly. Analysis involved was also brought 'on-board' to the RVPS, allowing shop floor acquisition and delivery of results. The RVPS has also been applied in other tasks and was found to make a critical improvement in productivity, allowing many more tests to be run in a shorter time cycle. This paper will discuss the creation of the system and TRW's experiences with the RVPS. Highlighted will be the lessons learned during these efforts and significant attributes of the process not common to the standard application of photogrammetry for industrial measurement. As productivity and ease of use continue to drive the application of photogrammetry in today's manufacturing climate, TRW expects several systems, with technological improvements applied, to be in use in the near future.

  8. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  9. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the inability to exploit video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects. The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  10. Video systems for alarm assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  11. Photosensor Oculography: Survey and Parametric Analysis of Designs using Model-Based Simulation

    OpenAIRE

    Rigas, Ioannis; Raffle, Hayes; Komogortsev, Oleg V.

    2017-01-01

    This paper presents a renewed overview of photosensor oculography (PSOG), an eye-tracking technique based on the principle of using simple photosensors to measure the amount of reflected (usually infrared) light when the eye rotates. Photosensor oculography can provide measurements with high precision, low latency and reduced power consumption, and thus it appears as an attractive option for performing eye-tracking in the emerging head-mounted interaction devices, e.g. augmented and virtual r...

  12. Intelligent network video understanding modern video surveillance systems

    CERN Document Server

    Nilsson, Fredrik

    2008-01-01

    Offering ready access to the security industry's cutting-edge digital future, Intelligent Network Video provides the first complete reference for all those involved with developing, implementing, and maintaining the latest surveillance systems. Pioneering expert Fredrik Nilsson explains how IP-based video surveillance systems provide better image quality, and a more scalable and flexible system at lower cost. A complete and practical reference for all those in the field, this volume: describes all components relevant to modern IP video surveillance systems; provides in-depth information about ima

  13. Smart sensing surveillance video system

    Science.gov (United States)

    Hsu, Charles; Szu, Harold

    2016-05-01

    An intelligent video surveillance system is able to detect and identify abnormal and alarming situations by analyzing object movement. The Smart Sensing Surveillance Video (S3V) System is proposed to minimize video processing and transmission, thus allowing a fixed number of cameras to be connected to the system and making it suitable for remote battlefield, tactical, and civilian applications, including border surveillance, special force operations, airfield protection, and perimeter and building protection. The S3V System would be more effective if equipped with visual understanding capabilities to detect, analyze, and recognize objects, track motions, and predict intentions. In addition, alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. The S3V System capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. It would be directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.

  14. Maximizing Resource Utilization in Video Streaming Systems

    Science.gov (United States)

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…

  15. Intelligent Model for Video Surveillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

    A video surveillance system senses and tracks threatening events in a real-time environment. It guards against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance has become key to addressing problems in public security. Such systems are mostly deployed on IP-based networks, so all of the security threats that exist for IP-based applications may also threaten the video surveillance application, potentially leading to cybercrime, illegal video access, mishandling of videos, and so on. Hence, in this paper an intelligent model is proposed to secure the video surveillance system, ensuring safety and providing secured access to video.

  16. Video surveillance system based on MPEG-4

    Science.gov (United States)

    Ge, Jing; Zhang, Guoping; Yang, Zongkai

    2008-03-01

    Multimedia technology and network protocols are the basic technologies of a video surveillance system. A networked remote video surveillance system based on the MPEG-4 video coding standard is designed and implemented in this paper. The advantages of MPEG-4 in the surveillance field are analyzed in detail, and the real-time transport protocol and real-time transport control protocol (RTP/RTCP) are chosen as the network transmission protocols. The whole system includes a video coding control module, a playback module, a network transmission module, and a network receiver module. Schemes for the management, control, and storage of video data are discussed. DirectShow technology is used to play back video data. The transmission scheme for digital video over the network, i.e., RTP packaging of the MPEG-4 video stream, is discussed, as are the receiver-side handling of video data and the buffering mechanism. Most of the functions are achieved in software, except for the video coding control module, which is implemented in hardware. The experimental results show that the system provides good video quality and real-time performance, and it can be applied in a wide range of fields.
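
    Since the abstract above centers on RTP packaging of the MPEG-4 video stream, the sketch below shows how an encoded frame could be split into packets with plain RFC 3550 RTP headers. It is a simplified illustration only: the MPEG-4 video payload format (RFC 3016) details are omitted, and the dynamic payload type 96 and MTU value are assumptions.

    ```python
    import struct

    def rtp_packets(frame_data, seq_start, timestamp, ssrc, payload_type=96, mtu=1400):
        """Split one encoded video frame into simplified RTP packets (sketch)."""
        packets = []
        chunks = [frame_data[i:i + mtu] for i in range(0, len(frame_data), mtu)]
        for i, chunk in enumerate(chunks):
            version, padding, extension, csrc_count = 2, 0, 0, 0
            marker = 1 if i == len(chunks) - 1 else 0   # mark the frame's last packet
            byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
            byte1 = (marker << 7) | payload_type
            header = struct.pack("!BBHII",
                                 byte0, byte1,
                                 (seq_start + i) & 0xFFFF,   # sequence number
                                 timestamp & 0xFFFFFFFF,     # same timestamp per frame
                                 ssrc)
            packets.append(header + chunk)
        return packets
    ```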

  17. Intelligent video surveillance systems and technology

    CERN Document Server

    Ma, Yunqian

    2009-01-01

    From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis and systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated

  18. 78 FR 11988 - Open Video Systems

    Science.gov (United States)

    2013-02-21

    ... From the Federal Register Online via the Government Publishing Office FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 76 Open Video Systems AGENCY: Federal Communications Commission. ACTION: Final rule... Open Video Systems. DATES: The amendments to 47 CFR 76.1505(d) and 76.1506(d), (l)(3), and (m)(2...

  19. Video Game Training and the Reward System

    Directory of Open Access Journals (Sweden)

    Robert C. Lorenz

    2015-02-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual towards playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the ventral striatum in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  20. Video game training and the reward system.

    Science.gov (United States)

    Lorenz, Robert C; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  1. A system for endobronchial video analysis

    Science.gov (United States)

    Byrnes, Patrick D.; Higgins, William E.

    2017-03-01

    Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient's airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could potentially better equip physicians to detect early airway-wall cancer or improve asthma treatments, such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model comprised of a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airway-tree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval is facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case study examples.
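
    The first processing stage described above (parsing the video stream and selecting salient representative key frames) can be approximated with a simple histogram-change rule, sketched below. The bin count and change threshold are assumptions, and the actual system's shot parsing and motion analysis are considerably more involved.

    ```python
    import cv2
    import numpy as np

    def select_key_frames(video_path, change_threshold=0.35):
        """Pick representative key frames from an endoscopic video (illustrative sketch).

        A frame is kept whenever its normalized gray-level histogram differs enough
        from the last kept frame's histogram. The threshold value is an assumption.
        """
        cap = cv2.VideoCapture(video_path)
        key_frames, last_hist = [], None
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
            hist = cv2.normalize(hist, None).flatten()
            if last_hist is None or np.abs(hist - last_hist).sum() > change_threshold:
                key_frames.append(index)
                last_hist = hist
            index += 1
        cap.release()
        return key_frames
    ```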

  2. Network video transmission system based on SOPC

    Science.gov (United States)

    Zhang, Zhengbing; Deng, Huiping; Xia, Zhenhua

    2008-03-01

    Video systems have been widely used in many fields such as conferences, public security, military affairs, and medical treatment. With the rapid development of FPGAs, SOPC has received great attention in the area of image and video processing in recent years. A network video transmission system based on SOPC is proposed in this paper for video acquisition, video encoding, and network transmission. The hardware platform used to design the system is Altera's DE2 SOPC board, which includes an EP2C35F672C6 FPGA chip, an Ethernet controller, and a video I/O interface. An IP core, the Nios II embedded processor, is used as the CPU of the system. In addition, a hardware module for format conversion of video data and another module realizing Motion-JPEG have been designed in Verilog HDL. These two modules are attached to the Nios II processor as peripherals through the Avalon bus. Simulation results show that these two modules work as expected. uClinux, including the TCP/IP protocol stack and the Ethernet controller driver, is chosen as the embedded operating system, and an application program scheme is proposed.

  3. System of video observation for electron beam welding process

    Science.gov (United States)

    Laptenok, V. D.; Seregin, Y. N.; Bocharov, A. N.; Murygin, A. V.; Tynchenko, V. S.

    2016-04-01

    Equipment for a video observation system for the electron beam welding process was developed. The construction of the video observation system reduces negative effects on the video camera during electron beam welding and yields high-quality images of the process.

  4. Video signals integrator (VSI) system architecture

    Science.gov (United States)

    Kasprowicz, Grzegorz; Pastuszak, Grzegorz; Poźniak, Krzysztof; Trochimiuk, Maciej; Abramowski, Andrzej; Gaska, Michal; Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Pawel; Jewartowski, Blazej; Frasunek, Przemysław; Nalbach-Moszynska, Małgorzata; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata

    2016-09-01

    The purpose of the project is the development of a platform that integrates video signals from many sources. The signals can come from existing analogue CCTV surveillance installations, recent internet-protocol (IP) cameras, or single cameras of any type. The system will consist of portable devices that provide conversion, encoding, transmission, and archiving. The sharing subsystem will use a distributed file system and a user console that provides simultaneous access to any of the video streams in real time. The system is fully modular, so it can be extended on both the hardware and the software side. Because standard modular technology is used, partial modernization is also possible over a long service life.

  5. A Neuromorphic System for Video Object Recognition

    Directory of Open Access Journals (Sweden)

    Deepak Khosla

    2014-11-01

    Automated video object recognition is a topic of emerging importance in both defense and civilian applications. This work describes an accurate and low-power neuromorphic architecture and system for real-time automated video object recognition. Our system, Neuromorphic Visual Understanding of Scenes (NEOVUS), is inspired by recent findings in computational neuroscience on feed-forward object detection and classification pipelines for processing and extracting relevant information from visual data. The NEOVUS architecture is inspired by the ventral (what) and dorsal (where) streams of the mammalian visual pathway and combines retinal processing, form-based and motion-based object detection, and convolutional neural network based object classification. Our system was evaluated by the Defense Advanced Research Projects Agency (DARPA) under the NEOVISION2 program on a variety of urban area video datasets collected from both stationary and moving platforms. The datasets are challenging as they include a large number of targets in cluttered scenes with varying illumination and occlusion conditions. The NEOVUS system was also mapped to commercially available off-the-shelf hardware. The dynamic power requirement for the system, which includes a 5.6 Mpixel retinal camera processed by object detection and classification algorithms at 30 frames per second, was measured at 21.7 Watts (W), for an effective energy consumption of 5.4 nanoJoules (nJ) per bit of incoming video. In a systematic evaluation of five different teams by DARPA on three aerial datasets, the NEOVUS demonstrated the best performance, with the highest recognition accuracy and at least three orders of magnitude lower energy consumption than two independent state-of-the-art computer vision systems. These unprecedented results show that the NEOVUS has the potential to revolutionize automated video object recognition towards enabling practical low-power and mobile video processing applications.

  6. Internet video-on-demand e-commerce system based on multilevel video metadata

    Science.gov (United States)

    Xia, Jiali; Jin, Jesse S.

    2000-10-01

    Video-On-Demand is a new development on the Internet. In order to manage the rich multimedia information and the large number of users, we present an Internet Video-On-Demand system with some E-Commerce flavors. This paper presents the system architecture and the technologies required in the implementation. It provides interactive Video-On-Demand services in which the user has complete control over the session presentation. It allows the user to select and receive specific video information by querying the database. To improve the performance of video information retrieval and management, the video information is represented by hierarchical video metadata in XML format. The video metadatabase stores the video information in this hierarchical structure and allows the user to search the video shots at different semantic levels in the database. To browse the retrieved video, the user not only has the full VCR capabilities of traditional Video-On-Demand, but can also browse the video hierarchically to view different shots. To manage the large number of users over the Internet, a membership database is designed and managed in an E-Commerce environment, which allows the user to access the video database at different access levels.

  7. Video integrated measurement system. [Diagnostic display devices]

    Energy Technology Data Exchange (ETDEWEB)

    Spector, B.; Eilbert, L.; Finando, S.; Fukuda, F.

    1982-06-01

    A Video Integrated Measurement (VIM) System is described which incorporates the use of various noninvasive diagnostic procedures (moire contourography, electromyography, posturometry, infrared thermography, etc.), used individually or in combination, for the evaluation of neuromusculoskeletal and other disorders and their management with biofeedback and other therapeutic procedures. The system provides for measuring individual diagnostic and therapeutic modes, or multiple modes by split screen superimposition, of real time (actual) images of the patient and idealized (ideal-normal) models on a video monitor, along with analog and digital data, graphics, color, and other transduced symbolic information. It is concluded that this system provides an innovative and efficient method by which the therapist and patient can interact in biofeedback training/learning processes and holds considerable promise for more effective measurement and treatment of a wide variety of physical and behavioral disorders.

  8. Video distribution system cost model

    Science.gov (United States)

    Gershkoff, I.; Haspert, J. K.; Morgenstern, B.

    1980-01-01

    A cost model that can be used to systematically identify the costs of procuring and operating satellite-linked communications systems is described. The user defines a network configuration by specifying the location of each participating site, the interconnection requirements, and the transmission paths available for the uplink (studio to satellite), downlink (satellite to audience), and voice talkback (between audience and studio) segments of the network. The model uses this information to calculate the least expensive signal distribution path for each participating site. Cost estimates are broken down by capital, installation, lease, and operations and maintenance. The design of the model permits flexibility in specifying network and cost structure.
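
    A minimal sketch of the model's core step, choosing the least expensive signal distribution path for each site and breaking the total down by cost category, is given below. The category names and data layout are illustrative assumptions, not the model's actual structure.

    ```python
    # Cost categories taken from the abstract above; the dict layout is assumed.
    CATEGORIES = ("capital", "installation", "lease", "operations_maintenance")

    def cheapest_path(candidate_paths):
        """candidate_paths: list of dicts mapping each cost category to a dollar amount."""
        return min(candidate_paths, key=lambda p: sum(p[c] for c in CATEGORIES))

    def network_cost(sites):
        """sites: dict of site name -> list of candidate path cost dicts."""
        total = {c: 0.0 for c in CATEGORIES}
        per_site = {}
        for name, paths in sites.items():
            best = cheapest_path(paths)
            per_site[name] = best
            for c in CATEGORIES:
                total[c] += best[c]
        return per_site, total
    ```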

  9. Cobra: A Content-Based Video Retrieval System

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, Willem

    An increasing number of large publicly available video libraries results in a demand for techniques that can manipulate the video data based on content. In this paper, we present a content-based video retrieval system called Cobra. The system supports automatic extraction and retrieval of high-level

  10. Interactive Videos Enhance Learning about Socio-Ecological Systems

    Science.gov (United States)

    Smithwick, Erica; Baxter, Emily; Kim, Kyung; Edel-Malizia, Stephanie; Rocco, Stevie; Blackstock, Dean

    2018-01-01

    Two forms of interactive video were assessed in an online course focused on conservation. The hypothesis was that interactive video enhances student perceptions about learning and improves mental models of social-ecological systems. Results showed that students reported greater learning and attitudes toward the subject following interactive video.…

  11. Virtual Video Prototyping for Healthcare Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Bossen, Claus; Lykke-Olesen, Andreas

    2002-01-01

    Virtual studio technology enables the mixing of physical and digital 3D objects and thus expands the way of representing design ideas in terms of virtual video prototypes, which offers new possibilities for designers by combining elements of prototypes, mock-ups, scenarios, and conventional video. In this article we report our initial experience in the domain of pervasive healthcare with producing virtual video prototypes and using them in a design workshop. Our experience has been predominantly favourable. The production of a virtual video prototype forces the designers to decide very concrete design

  12. Virtual video prototyping of pervasive healthcare systems

    DEFF Research Database (Denmark)

    Bardram, Jakob; Bossen, Claus; Lykke-Olesen, Andreas

    2002-01-01

    Virtual studio technology enables the mixing of physical and digital 3D objects and thus expands the way of representing design ideas in terms of virtual video prototypes, which offers new possibilities for designers by combining elements of prototypes, mock-ups, scenarios, and conventional video. In this article we report our initial experience in the domain of pervasive healthcare with producing virtual video prototypes and using them in a design workshop. Our experience has been predominantly favourable. The production of a virtual video prototype forces the designers to decide very concrete design

  13. Improving Peer-to-Peer Video Systems

    NARCIS (Netherlands)

    Petrocco, R.P.

    2016-01-01

    Video Streaming is nowadays the Internet's biggest source of consumer traffic. Traditional content providers rely on a centralised client-server model for distributing their video streaming content. The current generation is moving from being passive viewers, or content consumers, to active content

  14. Obtaining video descriptors for a content-based video information system

    Science.gov (United States)

    Bescos, Jesus; Martinez, Jose M.; Cabrera, Julian M.; Cisneros, Guillermo

    1998-09-01

    This paper describes the first stages of a research project that is currently being developed in the Image Processing Group of the UPM. The aim of this effort is to add video capabilities to the Storage and Retrieval Information System already working at our premises. Here we will focus on the early design steps of a Video Information System. For this purpose, we present a review of most of the reported techniques for video temporal segmentation and semantic segmentation, which are preliminary steps toward the content extraction task, and we discuss them to select the most suitable ones. We then outline a block design of a temporal segmentation module and present guidelines for the design of the semantic segmentation one. All these operations tend to facilitate automation in the extraction of the low-level and semantic features that will finally form part of the video descriptors.

  15. Design of video interface conversion system based on FPGA

    Science.gov (United States)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables the inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is then collected by the video processing unit over the Camera Link interface. The processed video signals are then fed to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with a minimal board size.

  16. Simultaneous recordings of human microsaccades and drifts with a contemporary video eye tracker and the search coil technique.

    Directory of Open Access Journals (Sweden)

    Michael B McCamy

    Human eyes move continuously, even during visual fixation. These "fixational eye movements" (FEMs) include microsaccades, intersaccadic drift and oculomotor tremor. Research in human FEMs has grown considerably in the last decade, facilitated by the manufacture of noninvasive, high-resolution/speed video-oculography eye trackers. Due to the small magnitude of FEMs, obtaining reliable data can be challenging, however, and depends critically on the sensitivity and precision of the eye tracking system. Yet, no study has conducted an in-depth comparison of human FEM recordings obtained with the search coil (considered the gold standard for measuring microsaccades and drift) and with contemporary, state-of-the-art video trackers. Here we measured human microsaccades and drift simultaneously with the search coil and a popular state-of-the-art video tracker. We found that 95% of microsaccades detected with the search coil were also detected with the video tracker, and 95% of microsaccades detected with video tracking were also detected with the search coil, indicating substantial agreement between the two systems. Peak/mean velocities and main sequence slopes of microsaccades detected with video tracking were significantly higher than those of the same microsaccades detected with the search coil, however. Ocular drift was significantly correlated between the two systems, but drift speeds were higher with video tracking than with the search coil. Overall, our combined results suggest that contemporary video tracking now approaches the search coil for measuring FEMs.
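
    One simple way to compare microsaccade velocities across the two systems, in the spirit of the main-sequence analysis mentioned above, is to fit the log-log relation between peak velocity and magnitude for each system and compare slopes. The sketch below shows such a fit; it is not necessarily the authors' exact analysis.

    ```python
    import numpy as np

    def main_sequence_slope(magnitudes_deg, peak_velocities_dps):
        """Fit the microsaccadic main sequence (illustrative sketch).

        Fits log10(peak velocity) against log10(magnitude) and returns
        (slope, intercept). Fitting the coil and video detections of the same
        events separately gives two slopes that can be compared directly.
        """
        x = np.log10(np.asarray(magnitudes_deg, dtype=float))
        y = np.log10(np.asarray(peak_velocities_dps, dtype=float))
        slope, intercept = np.polyfit(x, y, 1)
        return slope, intercept
    ```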

  17. FPGA Implementation of Video Transmission System Based on LTE

    Directory of Open Access Journals (Sweden)

    Lu Yan

    2015-01-01

    In order to support high-definition video transmission, an implementation of a video transmission system based on Long Term Evolution is designed. The system is developed on a Xilinx Virtex-6 FPGA ML605 Evaluation Board. The paper elaborates the features of the baseband link designed in Xilinx ISE and the protocol stack designed in Xilinx SDK, and introduces the process of setting up the hardware and software platform in Xilinx XPS. According to tests, this system consumes few hardware resources and is able to transmit bidirectional video clearly and stably.

  18. Hardware implementation of machine vision systems: image and video processing

    Science.gov (United States)

    Botella, Guillermo; García, Carlos; Meyer-Bäse, Uwe

    2013-12-01

    This contribution focuses on different topics covered by the special issue titled 'Hardware Implementation of Machine Vision Systems', including FPGAs, GPUs, embedded systems, and multicore implementations for image analysis such as edge detection, segmentation, pattern recognition and object recognition/interpretation, image enhancement/restoration, image/video compression, image similarity and retrieval, satellite image processing, medical image processing, motion estimation, neuromorphic and bioinspired vision systems, video processing, image formation and physics-based vision, 3D processing/coding, scene understanding, and multimedia.

  19. Compression of mixed video and graphics images for TV systems

    Science.gov (United States)

    van der Schaar-Mitrea, Mihaela; de With, Peter H. N.

    1998-01-01

    The diversity of TV images has increased with the growing use of computer graphics. In this paper we study a coding system that supports both the lossless coding of such graphics data and regular lossy video compression. The lossless coding techniques are based on run-length and arithmetic coding. For video compression, we introduce a simple block predictive coding technique featuring individual pixel access, so that it enables a gradual shift from lossless coding of graphics to the lossy coding of video. An overall bit rate control completes the system. Computer simulations show a very high quality with a compression factor between 2 and 3.
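
    The lossless graphics layer described above relies on run-length and arithmetic coding. The sketch below shows only a bare run-length encoder/decoder to make the idea concrete; the paper's codec additionally uses arithmetic coding and block-predictive coding, which are not reproduced here.

    ```python
    def rle_encode(pixels):
        """Encode a 1-D sequence of pixel values as (value, run_length) pairs."""
        runs = []
        for value in pixels:
            if runs and runs[-1][0] == value:
                runs[-1][1] += 1
            else:
                runs.append([value, 1])
        return [(v, n) for v, n in runs]

    def rle_decode(runs):
        """Invert rle_encode."""
        out = []
        for value, count in runs:
            out.extend([value] * count)
        return out

    # Round-trip check on a short graphics-like run of pixels.
    assert rle_decode(rle_encode([7, 7, 7, 0, 0, 255])) == [7, 7, 7, 0, 0, 255]
    ```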

  20. Effect Through Broadcasting System Access Point For Video Transmission

    Directory of Open Access Journals (Sweden)

    Leni Marlina

    2015-08-01

    Most universities already operate wired and wireless networks that are used to access integrated information systems and the Internet. It is therefore relevant to study how broadcasting instructional video through an access point performs in a university setting. Wired university networks require cables to connect computers and carry data between them, whereas wireless networks connect computers through radio waves. This research assesses how a network using a WLAN access point performs when broadcasting instructional video from a server to clients. The study aims to show how to build a wireless network using an access point, how to set up a computer server with supporting software to act as the instructional video server, and how to establish a system for transmitting video from the server to the client via the access point.

  1. A Modified Multiview Video Streaming System Using 3-Tier Architecture

    Directory of Open Access Journals (Sweden)

    Mohamed M. Fouad

    2016-01-01

    In this paper, we present a modified inter-view prediction Multiview Video Coding (MVC) scheme from the perspective of the viewer's interactivity. When a viewer requests some view(s), our scheme leads to a lower transmission bit-rate. We develop an interactive multiview video streaming system exploiting that modified MVC scheme. Conventional interactive multiview video systems require high bandwidth due to redundant data being transferred. With real data test sequences, clear improvements are shown using the proposed interactive multiview video system compared to competing ones in terms of the average transmission bit-rate and storage size of the decoded (i.e., transferred) data, with comparable rate-distortion.

  2. 47 CFR 76.1503 - Carriage of video programming providers on open video systems.

    Science.gov (United States)

    2010-10-01

    ... shall be “Open Video System Notice of Intent” and “Attention: Media Bureau.” This wording shall be... Notice of Intent with the Office of the Secretary and the Bureau Chief, Media Bureau. The Notice of... capacity through a fair, open and non-discriminatory process; the process must be insulated from any bias...

  3. Video Conference System that Keeps Mutual Eye Contact Among Participants

    Directory of Open Access Journals (Sweden)

    Masahiko Yahagi

    2011-10-01

    A novel video conference system is developed. Suppose that three people, A, B, and C, attend the video conference; the proposed system enables eye contact within every pair. Furthermore, when B and C chat, A feels as if B and C were facing each other (eye contact appears to be maintained between B and C). In the case of a triangular video conference, each participant's video system is composed of a half mirror, two video cameras, and two monitors. Each participant watches the other participants' images reflected by the half mirror, and the cameras are set behind the half mirror. Since each participant's image (face) and the camera position are adjusted to lie in the same direction, eye contact is maintained and conversation becomes very natural compared with conventional video conference systems, in which participants' eyes do not point toward the other participant. When three participants sit at the vertices of an equilateral triangle, eye contact is maintained even in the situation mentioned above (eye contact between B and C as seen by A). Eye contact can be maintained not only for two or three participants but for any number of participants, as long as they sit at the vertices of a regular polygon.

  4. Customizing Multiprocessor Implementation of an Automated Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Morteza Biglari-Abhari

    2006-09-01

    This paper reports on the development of an automated embedded video surveillance system using two customized embedded RISC processors. The application is partitioned into object tracking and video stream encoding subsystems. The real-time object tracker is able to detect and track moving objects in video images of scenes taken by stationary cameras. It is based on the block-matching algorithm. The video stream encoding involves the optimization of an International Telecommunication Union (ITU-T) H.263 baseline video encoder for quarter common intermediate format (QCIF) and common intermediate format (CIF) resolution images. The two subsystems running on two processor cores were integrated and a simple protocol was added to realize the automated video surveillance system. The experimental results show that the system is capable of detecting, tracking, and encoding QCIF and CIF resolution images with object movements in them in real time. With its low cycle count, low transistor count, and low power consumption requirements, the system is ideal for deployment in remote locations.
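
    The tracker above is based on the block-matching algorithm. The sketch below shows an exhaustive sum-of-absolute-differences (SAD) block match for a single block; the block size and search range are assumed values, and an embedded implementation would typically use a faster search strategy.

    ```python
    import numpy as np

    def block_match(prev_frame, curr_frame, block_xy, block_size=16, search_range=8):
        """Find the motion vector of one block by exhaustive SAD search (sketch).

        prev_frame, curr_frame : 2-D grayscale frames (uint8 arrays).
        block_xy : (row, col) of the block's top-left corner in curr_frame.
        Returns the (drow, dcol) displacement minimizing the SAD within the window.
        """
        r0, c0 = block_xy
        block = curr_frame[r0:r0 + block_size, c0:c0 + block_size].astype(np.int32)
        h, w = prev_frame.shape
        best_sad, best_mv = None, (0, 0)
        for dr in range(-search_range, search_range + 1):
            for dc in range(-search_range, search_range + 1):
                r, c = r0 + dr, c0 + dc
                if r < 0 or c < 0 or r + block_size > h or c + block_size > w:
                    continue  # candidate block would fall outside the frame
                candidate = prev_frame[r:r + block_size, c:c + block_size].astype(np.int32)
                sad = int(np.abs(candidate - block).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (dr, dc)
        return best_mv, best_sad
    ```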

  5. Study of Repair Protocols for Live Video Streaming Distributed Systems

    OpenAIRE

    Giroire, Frédéric; Huin, Nicolas

    2015-01-01

    We study distributed systems for live video streaming. These systems can be of two types: structured and unstructured. In an unstructured system, the diffusion is done opportunistically. The advantage is that it handles churn, that is, the arrival and departure of users, which is very high in live streaming systems, in a smooth way. In contrast, in a structured system, the diffusion of the video is done using explicit diffusion trees. The advantage is that the dif...

  6. Video-based Mobile Mapping System Using Smartphones

    Science.gov (United States)

    Al-Hamad, A.; Moussa, A.; El-Sheimy, N.

    2014-11-01

    The last two decades have witnessed a huge growth in the demand for geo-spatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources for geo-spatial data. Mobile Mapping Systems (MMS) are one of the main sources for mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from a moving platform (e.g., cars, airplanes, etc.). Although MMS can provide accurate mapping solutions for different GIS applications, the cost of these systems is not affordable for many users, and only large-scale companies and institutions can benefit from them. The main objective of this paper is to propose a new low-cost MMS with reasonable accuracy using the sensors available in smartphones, including the video camera. Using the smartphone video camera, instead of capturing individual images, makes the system easier for non-professional users to use, since the system automatically extracts the highly overlapping frames from the video without user intervention. Results of the proposed system are presented that demonstrate the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared to the results obtained from using separately captured images instead of video.

  7. Non Audio-Video gesture recognition system

    DEFF Research Database (Denmark)

    Craciunescu, Razvan; Mihovska, Albena Dimitrova; Kyriazakos, Sofoklis

    2016-01-01

    Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current research focuses on emotion recognition from the face and on hand gesture recognition. Gesture recognition enables humans to communicate with a machine and interact naturally without any mechanical devices. This paper investigates the possibility of using non-audio/video sensors in order to design a low-cost gesture recognition device...

  8. Real-time video exploitation system for small UAVs

    Science.gov (United States)

    Su, Ang; Zhang, Yueqiang; Dong, Jing; Xu, Yuhua; Zhu, Xianwei; Zhang, Xiaohu

    2013-08-01

    The high portability of small Unmanned Aircraft Vehicles (UAVs) gives them an important role in surveillance and reconnaissance tasks, so military and civilian demand for UAVs is constantly growing. Recently, we have developed a real-time video exploitation system for our small UAV, which is mainly used in forest patrol tasks. Our system consists of six key modules: image contrast enhancement, video stabilization, mosaicing, salient target indication, moving target indication, and display of the footprint and flight path on a map. Extensive testing of the system has been carried out, and the results show that it performs well.

  9. Spatial-spectral analysis of a laser scanning video system

    Science.gov (United States)

    Kapustin, A. A.; Razumovskii, V. N.; Iatsevich, G. B.

    1980-06-01

    A spatial-spectral analysis method is considered for a laser scanning video system with phase processing of the received signal at the modulation frequency. Distortions caused by the system are analyzed, and the general problem is reduced to the case of a cylindrical surface. The suggested approach can also be used for scanning microwave systems.

  10. Evaluation of Binocular Vision Therapy Efficacy by 3D Video-Oculography Measurement of Binocular Alignment and Motility.

    Science.gov (United States)

    Laria, Carlos; Pinero, David P

    2013-01-01

    To evaluate, in two cases of intermittent exotropia treated by vision therapy, the efficacy of the treatment by complementing the clinical examination with 3D video-oculography (VOG) registration, and to demonstrate the potential applicability of this technology for such a purpose. We report the binocular alignment changes occurring after vision therapy in a woman of 36 years with an intermittent exotropia of 25 prism diopters (PD) at far and 18 PD at near, and in a child of 10 years with 8 PD of intermittent exotropia in primary position associated with 6 PD of left eye hypotropia. Both patients presented good visual acuity with correction in both eyes. Instability of the ocular deviation was evident on VOG analysis, which also revealed the presence of vertical and torsional components. Binocular vision therapy was prescribed and performed, including different types of vergence, accommodation, and consciousness-of-diplopia training. After therapy, excellent ranges of fusional vergence and a to-the-nose near point of convergence were obtained. The 3D VOG examination confirmed the compensation of the deviation with a high level of stability of binocular alignment. Significant improvement was observed after therapy in the vertical and torsional components, which became more stable. Patients were very satisfied with the outcome obtained by vision therapy. 3D VOG is a useful technique for providing an objective record of the compensation of the ocular deviation and the stability of the binocular alignment achieved after vision therapy in cases of intermittent exotropia, providing a detailed analysis of vertical and torsional improvements.

  11. 47 CFR 76.1504 - Rates, terms and conditions for carriage on open video systems.

    Science.gov (United States)

    2010-10-01

    ... system operator may charge different rates to different classes of video programming providers, provided... open video systems. 76.1504 Section 76.1504 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76...

  12. Practical system for generating digital mixed reality video holograms.

    Science.gov (United States)

    Song, Joongseok; Kim, Changseob; Park, Hanhoon; Park, Jong-Il

    2016-07-10

    We propose a practical system that can effectively mix the depth data of real and virtual objects by using a Z buffer and can quickly generate digital mixed reality video holograms by using multiple graphic processing units (GPUs). In an experiment, we verify that real objects and virtual objects can be merged naturally in free viewing angles, and the occlusion problem is well handled. Furthermore, we demonstrate that the proposed system can generate mixed reality video holograms at 7.6 frames per second. Finally, the system performance is objectively verified by users' subjective evaluations.
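
    The depth-mixing step described above can be illustrated with a per-pixel Z-buffer merge, sketched below: at each pixel the surface with the smaller depth wins, which resolves occlusion between real and virtual objects. The hologram-synthesis and multi-GPU aspects of the system are not shown.

    ```python
    import numpy as np

    def z_buffer_merge(real_rgb, real_depth, virt_rgb, virt_depth):
        """Merge real and virtual scenes per pixel using a Z buffer (illustrative sketch).

        real_rgb, virt_rgb     : HxWx3 color images.
        real_depth, virt_depth : HxW depth maps (smaller value = closer to the camera).
        """
        # Broadcast the per-pixel "real is closer" mask across the color channels.
        closer_real = (real_depth <= virt_depth)[..., np.newaxis]
        merged_rgb = np.where(closer_real, real_rgb, virt_rgb)
        merged_depth = np.minimum(real_depth, virt_depth)
        return merged_rgb, merged_depth
    ```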

  13. The development of augmented video system on postcards

    Science.gov (United States)

    Chen, Chien-Hsu; Chou, Yin-Ju

    2013-03-01

    This study focuses on the development of an augmented video system for traditional picture postcards. The system lets users print an augmented reality marker on a sticker, stick it on the picture postcard, and record their own real-time image and video to be augmented onto that marker. Through these dynamic images, users can share travel moods, greetings, and travel experiences with their friends. Without changing the traditional picture postcard, we develop an augmented video system on top of it using augmented reality (AR) technology. It not only keeps the functions of the traditional picture postcard, but also enhances the user's experience, preserving the user's memories and emotional expression through the digital media information augmented on it.

  14. Layered Video Transmission on Adaptive OFDM Wireless Systems

    Directory of Open Access Journals (Sweden)

    D. Dardari

    2004-09-01

    Full Text Available Future wireless video transmission systems will consider orthogonal frequency division multiplexing (OFDM as the basic modulation technique due to its robustness and low complexity implementation in the presence of frequency-selective channels. Recently, adaptive bit loading techniques have been applied to OFDM showing good performance gains in cable transmission systems. In this paper a multilayer bit loading technique, based on the so called “ordered subcarrier selection algorithm,” is proposed and applied to a Hiperlan2-like wireless system at 5 GHz for efficient layered multimedia transmission. Different schemes realizing unequal error protection both at coding and modulation levels are compared. The strong impact of this technique in terms of video quality is evaluated for MPEG-4 video transmission.
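    The paper's exact algorithm is not reproduced here; the toy NumPy sketch below only illustrates the general idea behind ordered subcarrier selection for layered transmission: rank subcarriers by channel quality and give the base layer the best ones with the densest modulation. The layer shares and bit assignments are made-up parameters.

```python
import numpy as np

def ordered_subcarrier_allocation(snr_db, layer_shares=(0.5, 0.3, 0.2),
                                  bits_per_layer=(6, 4, 2)):
    """Toy multilayer bit loading: rank subcarriers by SNR and assign the
    base layer (layer 0) to the best ones, enhancement layers to the rest.

    snr_db: per-subcarrier SNR estimates (1-D array).
    layer_shares: fraction of subcarriers given to each layer (base first).
    bits_per_layer: bits/symbol used for each layer.
    Returns per-subcarrier layer index and bit load.
    """
    n = len(snr_db)
    order = np.argsort(snr_db)[::-1]          # best subcarriers first
    layer_of = np.empty(n, dtype=int)
    bits_of = np.empty(n, dtype=int)
    start = 0
    for layer, share in enumerate(layer_shares):
        last = layer == len(layer_shares) - 1
        count = n - start if last else int(round(share * n))
        idx = order[start:start + count]
        layer_of[idx] = layer
        bits_of[idx] = bits_per_layer[layer]
        start += count
    return layer_of, bits_of

# Example: 64 subcarriers with random SNRs
layer_of, bits_of = ordered_subcarrier_allocation(np.random.uniform(0, 30, 64))
```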

  15. Risk analysis of a video-surveillance system

    NARCIS (Netherlands)

    Rothkrantz, L.; Lefter, I.

    2011-01-01

    The paper describes a surveillance system of cameras installed at lamppost of a military area. The surveillance system has been designed to detect unwanted visitors or suspicious behaviors. The area is composed of streets, building blocks and surrounded by gates and water. The video recordings are

  16. Onboard Systems Record Unique Videos of Space Missions

    Science.gov (United States)

    2010-01-01

    Ecliptic Enterprises Corporation, headquartered in Pasadena, California, provided onboard video systems for rocket and space shuttle launches before it was tasked by Ames Research Center to craft the Data Handling Unit that would control sensor instruments onboard the Lunar Crater Observation and Sensing Satellite (LCROSS) spacecraft. The technological capabilities the company acquired on this project, as well as those gained developing a high-speed video system for monitoring the parachute deployments for the Orion Pad Abort Test Program at Dryden Flight Research Center, have enabled the company to offer high-speed and high-definition video for geosynchronous satellites and commercial space missions, providing remarkable footage that both informs engineers and inspires the imagination of the general public.

  17. Design of wireless video transmission system based on STM32 microcontroller

    Science.gov (United States)

    Yang, Fan; Ma, Chunting; Li, Haoyi

    2017-03-01

    This paper presents the design of a wireless video transmission system based on STM32. The system uses the STM32F103VET6 microprocessor as its core: video data are collected through a video acquisition module, sent to the receiver through a wireless transmitting module, and displayed on an LCD screen at the receiving end. The software design of the receiver and the transmitter is described. Experiments show that the system achieves the intended wireless video transmission function.

  18. The accident site portable integrated video system

    Science.gov (United States)

    Jones, D. P.; Shirey, D. L.; Amai, W. A.

    1995-01-01

    This paper presents a high-bandwidth fiber-optic communication system intended for post-accident recovery of weapons. The system provides bi-directional, multichannel, multimedia communications. Two smaller systems that were developed as direct spin-offs of the larger system are also briefly discussed.

  19. The accident site portable integrated video system

    Energy Technology Data Exchange (ETDEWEB)

    Jones, D.P.; Shirey, D.L.; Amai, W.A.

    1995-12-31

    This paper presents a high-bandwidth fiber-optic communication system intended for post-accident recovery of weapons. The system provides bi-directional, multichannel, multimedia communications. Two smaller systems that were developed as direct spin-offs of the larger system are also briefly discussed.

  20. 76 FR 45859 - In the Matter of Certain Video Analytics Software, Systems, Components Thereof, and Products...

    Science.gov (United States)

    2011-08-01

    ... COMMISSION In the Matter of Certain Video Analytics Software, Systems, Components Thereof, and Products... analytics software, systems, components thereof, and products containing same by reason of infringement of... after importation of certain video analytics software, systems, components thereof, and products...

  1. 77 FR 45376 - Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same...

    Science.gov (United States)

    2012-07-31

    ... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... analytics software, systems, components thereof, and products containing same by reason of infringement of... after importation of certain video analytics software, systems, components thereof, and products...

  2. 77 FR 39509 - Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same...

    Science.gov (United States)

    2012-07-03

    ... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... Trade Commission has received a complaint entitled Certain Video Analytics Software, Systems, Components... analytics software, systems, components thereof, and products containing same. The complaint names as...

  3. Instructional Videos for Supporting Older Adults Who Use Interactive Systems

    Science.gov (United States)

    Gramss, Denise; Struve, Doreen

    2009-01-01

    The study reported in this paper investigated the usefulness of different instructions for guiding inexperienced older adults through interactive systems. It was designed to compare different media in relation to their social as well as their motivational impact on the elderly during the learning process. Precisely, the video was compared with…

  4. The measuring video section of the Fragment multispectral scanning system

    Science.gov (United States)

    Glazkov, V. D.; Goretov, Iu. M.; Rozhavskii, E. I.; Shcherbakov, V. V.

    The self-correcting video section of the satellite-borne Fragment multispectral scanning system is described. The design of this section enables sufficiently efficient equalization of the transformation coefficients of all the measuring sections, given a reference-radiation source and a single reference time interval common to all the sections.

  5. A Client-Server System for Ubiquitous Video Service

    Directory of Open Access Journals (Sweden)

    Ronit Nossenson

    2012-12-01

    Full Text Available In this work we introduce a simple client-server system architecture and algorithms for ubiquitous live video and VOD service support. The main features of the system are: efficient usage of network resources, emphasis on user personalization, and ease of implementation. The system supports many continuous service requirements such as QoS provision, user mobility between networks and between different communication devices, and simultaneous usage of a device by a number of users.

  6. A Secure and Robust Object-Based Video Authentication System

    Directory of Open Access Journals (Sweden)

    Dajun He

    2004-10-01

    Full Text Available An object-based video authentication system, which combines watermarking, error correction coding (ECC, and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling while securely preventing malicious object modifications. The proposed solution can be further incorporated into public key infrastructure (PKI.
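    As a rough illustration of the energy-relationship embedding mentioned above (not the paper's full ART/ECC/signature pipeline), the sketch below hides one bit in an 8x8 image block by enforcing an ordering between the energies of two pseudo-randomly chosen groups of mid-band DFT coefficients. The coefficient choice, margin and strength are assumptions.

```python
import numpy as np

def _groups(n_seed=7):
    # Pseudo-random but reproducible split of mid-band coefficients into A and B.
    rng = np.random.default_rng(n_seed)
    coords = [(u, v) for u in range(1, 4) for v in range(1, 4)]
    rng.shuffle(coords)
    return coords[:4], coords[4:8]

def embed_bit(block, bit, strength=1.25):
    """Embed one bit in an 8x8 block: energy(A) > energy(B) encodes 1.
    Assumes the block has nonzero mid-band energy."""
    F = np.fft.fft2(block.astype(float))
    n = block.shape[0]
    A, B = _groups()

    def scale(group, s):
        for (u, v) in group:
            F[u, v] *= s
            F[(-u) % n, (-v) % n] *= s      # keep conjugate symmetry -> real image

    def energy(group):
        return sum(abs(F[u, v]) for (u, v) in group)

    hi, lo = (A, B) if bit else (B, A)
    while energy(hi) <= 1.05 * energy(lo):   # enforce the relation with a margin
        scale(hi, strength)
        scale(lo, 1.0 / strength)
    return np.real(np.fft.ifft2(F))

def extract_bit(block):
    F = np.fft.fft2(block.astype(float))
    A, B = _groups()
    return int(sum(abs(F[c]) for c in A) >= sum(abs(F[c]) for c in B))
```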

  7. Operation quality assessment model for video conference system

    Science.gov (United States)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

    Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers fast convergence and high prediction accuracy in contrast with a regularized BP neural network alone, and that its generalization ability is superior to LM-BP and Bayesian BP neural networks.

  8. Indirect ophthalmoscopic stereo video system using three-dimensional LCD

    Science.gov (United States)

    Kong, Hyoun-Joong; Seo, Jong Mo; Hwang, Jeong Min; Kim, Hee Chan

    2009-02-01

    The binocular indirect ophthalmoscope (BIO) provides a wider view of the fundus with stereopsis, in contrast to the direct ophthalmoscope. The proposed system is composed of a portable BIO and a 3D viewing unit. The illumination unit of the BIO uses a high-flux LED as the light source, an LED condensing lens cap for beam focusing, color filters, and a small lithium-ion battery. In the optics unit of the BIO, a beam splitter distributes the examinee's fundus image both to the examiner's eye and to a CMOS camera module attached to the device. Captured retinal video streams from the stereo camera modules are sent to a PC through USB 2.0 connectivity. For 3D viewing, the two video streams, which have parallax between them, are aligned vertically and horizontally and composed into a side-by-side video stream for cross-eyed stereoscopy. The data are then converted into an autostereoscopic video stream by vertical interlacing for a stereoscopic LCD with a glass 3D filter attached to its front side. Our newly devised system presented a real-time 3D view of the fundus to assistants with less dizziness than cross-eyed stereoscopy, and the BIO showed good performance compared with a conventional portable BIO (Spectra Plus, Keeler Limited, Windsor, UK).
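    The two frame-formatting steps mentioned in the abstract, side-by-side packing for cross-eyed viewing and column interlacing for the autostereoscopic LCD, reduce to simple array operations. A minimal NumPy/OpenCV sketch follows; camera capture and display are assumed to happen elsewhere, and the function names are illustrative.

```python
import cv2
import numpy as np

def side_by_side(left, right):
    """Horizontally concatenate the two camera views (the side-by-side
    stream used for cross-eyed stereoscopy in the abstract)."""
    h = min(left.shape[0], right.shape[0])
    return np.hstack([left[:h], right[:h]])

def column_interlace(left, right):
    """Column-interlace the two views for a column-interleaved
    autostereoscopic LCD: even pixel columns from the left view,
    odd columns from the right view."""
    right = cv2.resize(right, (left.shape[1], left.shape[0]))
    out = left.copy()
    out[:, 1::2] = right[:, 1::2]
    return out
```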

  9. A Web-Based Video Digitizing System for the Study of Projectile Motion.

    Science.gov (United States)

    Chow, John W.; Carlton, Les G.; Ekkekakis, Panteleimon; Hay, James G.

    2000-01-01

    Discusses advantages of a video-based, digitized image system for the study and analysis of projectile motion in the physics laboratory. Describes the implementation of a web-based digitized video system. (WRM)

  10. 77 FR 75659 - Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same...

    Science.gov (United States)

    2012-12-21

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... the United States after importation of certain video analytics software systems, components thereof...

  11. 77 FR 1726 - Investigations: Terminations, Modifications and Rulings: Certain Video Game Systems and Controllers

    Science.gov (United States)

    2012-01-11

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Investigations: Terminations, Modifications and Rulings: Certain Video Game Systems and... United States after importation of certain video game systems and controllers by reason of infringement...

  12. Interactive video audio system: communication server for INDECT portal

    Science.gov (United States)

    Mikulec, Martin; Voznak, Miroslav; Safarik, Jakub; Partila, Pavol; Rozhon, Jan; Mehic, Miralem

    2014-05-01

    The paper presents the IVAS system developed within the FP7 EU INDECT project. The INDECT project aims at developing tools for enhancing the security of citizens and protecting the confidentiality of recorded and stored information; it is part of the Seventh Framework Programme of the European Union. We participate in the INDECT portal and the Interactive Video Audio System (IVAS). The IVAS system provides a communication gateway between police officers working in a dispatching centre and police officers in the field. The officers in the dispatching centre can obtain information about all online police officers in the field, command them via text messages, voice or video calls, and manage multimedia files from CCTV cameras or other sources that may be of interest to officers in the field. The police officers in the field are equipped with smartphones or tablets. Besides common communication, they can view pictures or videos sent by the commander in the office and respond to commands via text or multimedia messages taken with their devices. Our IVAS system is unique because we are developing it according to specific requirements from the Police of the Czech Republic. The IVAS communication system is designed to use modern Voice over Internet Protocol (VoIP) services. The whole solution is based on open-source software, including the Linux and Android operating systems. The technical details of our solution are presented in the paper.

  13. Web-based remote video monitoring system implemented using Java technology

    Science.gov (United States)

    Li, Xiaoming

    2012-04-01

    An HTTP-based video transmission system has been built upon a p2p (peer-to-peer) network structure using Java technologies. This makes video monitoring available to any host connected to the World Wide Web by any means, including hosts behind firewalls or in isolated sub-networks. To achieve this, a video source peer has been developed, together with a client video playback peer. The video source peer responds to video stream requests over the HTTP protocol. An HTTP-based pipe communication model was developed to speed up the transmission of the video stream data, which are encoded into fragments using the JPEG codec. To make the system able to convey video streams between arbitrary peers on the web, an HTTP-based relay peer was implemented as well. This video monitoring system has been applied in a tele-robotic system as visual feedback to the operator.
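    The record describes a Java implementation; purely as a sketch of the idea of a video source peer answering HTTP requests with JPEG-encoded fragments, a minimal Python equivalent might look like the following. The /stream path, port and camera index are assumptions, not details from the paper.

```python
import cv2
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

BOUNDARY = b"--frame"

class VideoSourcePeer(BaseHTTPRequestHandler):
    """Minimal HTTP video source: answers GET /stream with a sequence of
    JPEG-encoded frames (multipart/x-mixed-replace), in the spirit of the
    JPEG-fragment pipe described in the abstract."""

    def do_GET(self):
        if self.path != "/stream":
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        cap = cv2.VideoCapture(0)            # local camera as the video source
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                ok, jpeg = cv2.imencode(".jpg", frame)
                if not ok:
                    continue
                data = jpeg.tobytes()
                self.wfile.write(BOUNDARY + b"\r\n")
                self.wfile.write(b"Content-Type: image/jpeg\r\n")
                self.wfile.write(b"Content-Length: %d\r\n\r\n" % len(data))
                self.wfile.write(data + b"\r\n")
        except (BrokenPipeError, ConnectionResetError):
            pass                              # playback peer disconnected
        finally:
            cap.release()

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8080), VideoSourcePeer).serve_forever()
```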

  14. A portable wireless power transmission system for video capsule endoscopes.

    Science.gov (United States)

    Shi, Yu; Yan, Guozheng; Zhu, Bingquan; Liu, Gang

    2015-01-01

    Wireless power transmission (WPT) technology can solve the energy shortage problem of video capsule endoscopes (VCEs) powered by button batteries, but fixed platforms have limited its clinical application. This paper presents a portable WPT system for VCEs. Besides portability, power transfer efficiency and stability are considered the main design criteria for the system, which consists of the transmitting coil structure, a portable control box, the operating frequency, and the magnetic core and winding of the receiving coil. Based on these principles, the relevant parameters are measured, compared and chosen. Finally, the methods are tested and evaluated through experiments on the platform. In the gastrointestinal tract of a small pig, the VCE is supplied with sufficient energy by the WPT system, and the energy conversion efficiency is 2.8%. The video obtained is clear, with a resolution of 320×240 and a frame rate of 30 frames per second. The experiments verify the feasibility of the design scheme, and directions for further improvement are discussed.

  15. 75 FR 68379 - In the Matter of: Certain Video Game Systems and Controllers; Notice of Investigation

    Science.gov (United States)

    2010-11-05

    ... COMMISSION In the Matter of: Certain Video Game Systems and Controllers; Notice of Investigation AGENCY: U.S... importation, and the sale within the United States after importation of certain video game systems and... after importation of certain video game systems and controllers that infringe one or more of claims 16...

  16. Measurement and protocol for evaluating video and still stabilization systems

    Science.gov (United States)

    Cormier, Etienne; Cao, Frédéric; Guichard, Frédéric; Viard, Clément

    2013-01-01

    This article presents a system and a protocol to characterize image stabilization systems both for still images and videos. It uses a six axes platform, three being used for camera rotation and three for camera positioning. The platform is programmable and can reproduce complex motions that have been typically recorded by a gyroscope mounted on different types of cameras in different use cases. The measurement uses a single chart for still image and videos, the texture dead leaves chart. Although the proposed implementation of the protocol uses a motion platform, the measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured in different directions and is weighted by a contrast sensitivity function (simulating the human visual system accuracy) to obtain an acutance. The sharpness improvement due to the image stabilization system is a good measurement of performance as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel accuracy to determine a homographic deformation between the current frame and a reference position. This model describes well the apparent global motion as translations, but also rotations along the optical axis and distortion due to the electronic rolling shutter equipping most CMOS sensors. The protocol is applied to all types of cameras such as DSC, DSLR and smartphones.
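    The homography step described for video evaluation can be sketched directly with OpenCV, assuming the four chart markers have already been located with sub-pixel accuracy in both the reference and the current frame. The function names here are illustrative, not the authors' code.

```python
import cv2
import numpy as np

def estimate_homography(ref_pts, cur_pts):
    """Homography mapping the current frame back onto the reference position,
    estimated from >= 4 chart markers detected with sub-pixel accuracy.
    ref_pts, cur_pts: Nx2 arrays of corresponding marker centres (pixels)."""
    H, _ = cv2.findHomography(np.float32(cur_pts), np.float32(ref_pts))
    return H

def register_frame(frame, H):
    """Warp the current frame into the reference geometry; what remains after
    warping is the residual (uncorrected) motion of the stabilization system."""
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```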

  17. The Design of Embedded Video Supervision System of Vegetable Shed

    Science.gov (United States)

    Sun, Jun; Liang, Mingxing; Chen, Weijun; Zhang, Bin

    To reinforce the safety of vegetable sheds, the S3C44B0X is used as the main processor chip. The embedded hardware platform is built with a small number of peripheral chips, a network server is set up in an embedded Linux environment, and MPEG-4 compression and real-time transmission are carried out. Experiments indicate that the video monitoring system performs well and can be applied to the safety of vegetable sheds.

  18. A Multi-view Approach for Detecting Non-Cooperative Users in Online Video Sharing Systems

    OpenAIRE

    Langbehn, Hendrickson Reiter; Ricci, Saulo M. R.; Gonçalves, Marcos A.; Almeida, Jussara Marques; Pappa, Gisele Lobo; Benevenuto, Fabrício

    2010-01-01

    Most online video sharing systems (OVSSs), such as YouTube and Yahoo! Video, have several mechanisms for supporting interactions among users. One such mechanism is the video response feature in YouTube, which allows a user to post a video in response to another video. While increasingly popular, the video response feature opens the opportunity for non-cooperative users to introduce "content pollution" into the system, thus causing loss of service effectiveness and credibility as w...

  19. Automation of systems enabling search on stored video data

    Science.gov (United States)

    Hanjalic, Alan; Ceccarelli, Marco; Lagendijk, Reginald L.; Biemond, Jan

    1997-01-01

    In the European project SMASH, mass-market storage systems for domestic use are under study. Besides the storage technology developed in this project, the related objective of user-friendly browsing/query of video data is studied as well. Key issues in developing a user-friendly system are (1) minimizing user intervention in preparatory steps (extraction and storage of representative information needed for browsing/query), (2) providing an acceptable representation of the stored video content in view of a higher automation level, (3) the possibility of performing these steps directly on the incoming stream at storage time, and (4) parameter robustness of the algorithms used for these steps. This paper proposes and validates novel approaches for automating the aforementioned preparatory phases. A detection method for abrupt shot changes is proposed, using a locally computed threshold based on a statistical model for frame-to-frame differences. For the extraction of representative frames (key frames), an approach is presented that distributes a given number of key frames over the sequence depending on content changes in a temporal segment of the sequence. A multimedia database is introduced that can automatically store all bibliographic information about a recorded video, as well as a visual representation of the content, without any manual intervention from the user.
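    A minimal sketch of the adaptive-threshold cut detection idea is shown below; the paper's statistical model may differ, and the window size and mean-plus-k-sigma rule are assumptions.

```python
import cv2
import numpy as np

def detect_cuts(video_path, window=25, k=3.0):
    """Abrupt shot-change detection sketch: a frame-to-frame difference is
    flagged as a cut when it exceeds a locally computed threshold,
    here mean + k * std of the differences in a sliding window."""
    cap = cv2.VideoCapture(video_path)
    diffs, cuts, prev = [], [], None
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, (160, 120)), cv2.COLOR_BGR2GRAY)
        if prev is not None:
            d = float(np.mean(cv2.absdiff(gray, prev)))
            recent = diffs[-window:]
            if len(recent) >= 5:
                thr = np.mean(recent) + k * np.std(recent)
                if d > thr:
                    cuts.append(idx)          # candidate shot boundary / key frame
            diffs.append(d)
        prev = gray
        idx += 1
    cap.release()
    return cuts
```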

  20. Design and implementation of a wireless video surveillance system based on ARM

    Science.gov (United States)

    Li, Yucheng; Han, Dantao; Yan, Juanli

    2011-06-01

    A wireless video surveillance system based on ARM was designed and implemented in this article. The ARM11 S3C6410 is used as the main monitoring terminal chip, running an embedded Linux operating system. The video input is obtained from an analog CCD and converted from analog to digital by the TVP5150 video chip. The video is packed with RTP and transmitted by the TL-WN322G+ wireless USB adapter after being compressed by the H.264 encoder in the S3C6410. Furthermore, the video images are preprocessed; the system can detect abnormalities in the monitored scene and raise alarms. The video transmission definition is standard definition 480p, and the video stream can be monitored in real time. The system has been used for real-time intelligent video surveillance of specified scenes.

  1. Networked telepresence system using web browsers and omni-directional video streams

    Science.gov (United States)

    Ishikawa, Tomoya; Yamazawa, Kazumasa; Sato, Tomokazu; Ikeda, Sei; Nakamura, Yutaka; Fujikawa, Kazutoshi; Sunahara, Hideki; Yokoya, Naokazu

    2005-03-01

    In this paper, we describe a new telepresence system which enables a user to look around a virtualized real world easily in network environments. The proposed system includes omni-directional video viewers on web browsers and allows the user to look around the omni-directional video contents on the web browsers. The omni-directional video viewer is implemented as an ActiveX program so that the user can install the viewer automatically simply by opening the web site which contains the omni-directional video contents. The system allows many users at different sites to look around the scene, much like interactive TV, using a multicast protocol without increasing the network traffic. This paper describes the implemented system and experiments using live and stored video streams. In the experiment with stored video streams, the system uses an omni-directional multi-camera system for video capturing, and we can look around high-resolution, high-quality video contents. In the experiment with live video streams, a car-mounted omni-directional camera acquires omni-directional video streams surrounding the car while running in an outdoor environment. The acquired video streams are transferred to the remote site through the wireless and wired network using the multicast protocol, and we can view the live video contents freely in an arbitrary direction. In both experiments, we have implemented a view-dependent presentation with a head-mounted display (HMD) and a gyro sensor to provide a richer sense of presence.

  2. Developing Communication Competence Using an Online Video Reflection System: Pre-Service Teachers' Experiences

    Science.gov (United States)

    Bower, Matt; Cavanagh, Michael; Moloney, Robyn; Dao, MingMing

    2011-01-01

    This paper reports on how the cognitive, behavioural and affective communication competencies of undergraduate students were developed using an online Video Reflection system. Pre-service teachers were provided with communication scenarios and asked to record short videos of one another making presentations. Students then uploaded their videos to…

  3. Video Automatic Target Tracking System (VATTS) Operating Procedure,

    Science.gov (United States)

    1980-08-15

    BDM Corp., McLean, VA; C. Stamm and J. P. Forrester, August 1980. Glossary of abbreviations from the operating procedure: Tape Transport Number Two; TKI, Tektronix I/O terminal; DS1, removable disk storage unit; DS0, fixed disk storage unit; CRT, cathode ray tube; AZEL, quick look at trial information program; DUPTAPE, allows for duplication of magnetic tapes; CA, cancel (terminates program).

  4. Coastal monitoring through video systems: best practices and architectural design of a new video monitoring station in Jesolo (Veneto, Italy)

    Science.gov (United States)

    Archetti, Renata; Vacchi, Matteo; Carniel, Sandro; Benetazzo, Alvise

    2013-04-01

    Measuring the location of the shoreline and monitoring foreshore changes through time represent a fundamental task for correct coastal management at many sites around the world. Several authors have demonstrated that video systems are an essential tool for increasing the amount of data available for coastline management. These systems typically sample at least once per hour and can provide long-term datasets showing variations over days, events, months, seasons and years. In the past few years, owing to the wide availability of video cameras at relatively low prices, the use of video cameras and video image analysis for environmental monitoring has increased significantly. Although video monitoring systems have often been used in research, they are most often applied for practical purposes including: i) identification and quantification of shoreline erosion, ii) assessment of coastal protection structure and/or beach nourishment performance, iii) basic input to engineering design in the coastal zone, and iv) support for integrated numerical model validation. Here we present the guidelines for the creation of a new video monitoring network in the proximity of Jesolo beach (NW Adriatic Sea, Italy). Within this 10 km-long tourist district several engineering structures have been built in recent years with the aim of solving urgent local erosion problems; as a result, almost all types of protection structures are present at this site: groynes and detached breakwaters. The area investigated experienced severe coastal erosion problems in the past decades, including a major event in November 2012. The activity is planned within the framework of the RITMARE project, which also includes other monitoring and scientific activities (bathymetry surveys, wave and current measurements, hydrodynamic and morphodynamic modelling). This contribution focuses on best practices to be adopted in the creation of the video monitoring system, and briefly describes the

  5. An Evolutionary Video Assignment Optimization Technique for VOD System in Heterogeneous Environment

    Directory of Open Access Journals (Sweden)

    King-Man Ho

    2010-01-01

    Full Text Available We investigate the video assignment problem of a hierarchical Video-on-Demand (VOD system in heterogeneous environments where different quality levels of videos can be encoded using either replication or layering. In such systems, videos are delivered to clients either through a proxy server or video broadcast/unicast channels. The objective of our work is to determine the appropriate coding strategy as well as the suitable delivery mechanism for a specific quality level of a video such that the overall system blocking probability is minimized. In order to find a near-optimal solution for such a complex video assignment problem, an evolutionary approach based on genetic algorithm (GA is proposed. From the results, it is shown that the system performance can be significantly enhanced by efficiently coupling the various techniques.
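    The paper's chromosome encoding and blocking-probability model are not given in the abstract; the skeleton below only illustrates how a GA could search over per-quality-level (coding, delivery) assignments, with a placeholder objective standing in for the real blocking-probability evaluation. All constants and names are assumptions.

```python
import random

CODINGS = ("replication", "layering")
DELIVERY = ("proxy", "broadcast", "unicast")
N_LEVELS = 3                      # quality levels per video (assumption)

def blocking_probability(assignment):
    """Placeholder objective: a real implementation would evaluate the VOD
    system model from the paper. Here we just prefer 'cheap' combinations."""
    cost = {("replication", "unicast"): 0.30, ("replication", "proxy"): 0.10,
            ("replication", "broadcast"): 0.15, ("layering", "unicast"): 0.25,
            ("layering", "proxy"): 0.12, ("layering", "broadcast"): 0.08}
    return sum(cost[g] for g in assignment) / len(assignment)

def random_individual():
    return [(random.choice(CODINGS), random.choice(DELIVERY))
            for _ in range(N_LEVELS)]

def crossover(a, b):
    cut = random.randrange(1, N_LEVELS)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    return [(random.choice(CODINGS), random.choice(DELIVERY))
            if random.random() < rate else gene for gene in ind]

def genetic_search(pop_size=30, generations=100):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=blocking_probability)            # lower is better
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=blocking_probability)

best_assignment = genetic_search()
```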

  6. Transmission of multiplexed video signals in multimode optical fiber systems

    Science.gov (United States)

    White, Preston, III

    1988-01-01

    Kennedy Space Center has the need for economical transmission of two multiplexed video signals along multimode fiber-optic systems. These systems must span unusual distances and must meet RS-250B short-haul standards after reception. Bandwidth is a major constraint, and studies of the installed fibers, available LEDs and PINFETs led to the choice of 100 MHz as the upper limit for the system bandwidth. Optical multiplexing and digital transmission were deemed inappropriate. Three electrical multiplexing schemes were chosen for further study. Each of the multiplexing schemes includes an FM stage to help meet the stringent S/N specification. Both FM and AM frequency-division multiplexing methods were investigated theoretically, and these results were validated with laboratory tests. The novel application of quadrature amplitude multiplexing was also considered. Frequency-division multiplexing of two wideband FM video signals appears the most promising scheme, although this application requires high-power, highly linear LED transmitters. Further studies are necessary to determine whether LEDs of appropriate quality exist and to better quantify the performance of QAM in this application.

  7. View Synthesis for Advanced 3D Video Systems

    Directory of Open Access Journals (Sweden)

    Müller Karsten

    2008-01-01

    Full Text Available Interest in 3D video applications and systems is growing rapidly and the technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but this requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions, which are treated separately and fused to the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was generically implemented, that is, it does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view autostereoscopic display as well as other stereo- and multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.

  8. HDR video synthesis for vision systems in dynamic scenes

    Science.gov (United States)

    Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried

    2016-09-01

    High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose a HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
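    Once frames have been aligned and brightness-adapted as described, the merging step amounts to weighted averaging of radiance estimates. A hedged NumPy sketch with a simple hat weighting follows; the paper's exact weights, alignment and temporal filtering are not reproduced.

```python
import numpy as np

def hdr_merge(frames, exposure_times):
    """Merge aligned, differently exposed LDR frames into an HDR radiance map
    by weighted averaging. Weights follow a simple 'hat' function that favours
    well-exposed pixels.

    frames: list of HxWx3 uint8 images taken with alternating exposures.
    exposure_times: matching list of exposure times in seconds.
    """
    acc = np.zeros(frames[0].shape, np.float64)
    wsum = np.zeros(frames[0].shape, np.float64)
    for img, t in zip(frames, exposure_times):
        x = img.astype(np.float64) / 255.0
        w = 1.0 - np.abs(2.0 * x - 1.0)          # hat weight, 0 at the extremes
        acc += w * (x / t)                        # per-pixel radiance estimate
        wsum += w
    return acc / np.maximum(wsum, 1e-6)           # HDR radiance map
```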

  9. Height Measuring System On Video Using Otsu Method

    Science.gov (United States)

    Sandy, C. L. M.; Meiyanti, R.

    2017-01-01

    Height measurement compares the magnitude of an object against a standard measuring tool. A problem with existing measurements is that they still rely on simple apparatus, one example being a tape measure, and this method requires a relatively long time. To overcome these problems, this research aims to create image processing software for height measurement. The image is then analysed so that the object captured by the video camera can be identified and its height measured using the Otsu method. The system was built using Delphi 7 with the VisionLab VCL 4.5 component. To increase the quality of the system in future research, the developed system can be combined with other methods.
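    A minimal OpenCV sketch of such an Otsu-based height measurement is shown below, assuming the object is darker than the background and that a pixel-to-centimetre calibration factor has been obtained from a reference object; it is an illustration of the technique, not the Delphi implementation described in the record.

```python
import cv2

def object_height_cm(frame, cm_per_pixel):
    """Measure the height of the dominant object in a frame: Otsu threshold,
    largest contour, bounding-box height converted with a known calibration
    factor (cm_per_pixel must come from a reference object)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    # THRESH_BINARY_INV assumes a dark object on a light background
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # OpenCV 4.x signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return h * cm_per_pixel
```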

  10. Magnetic anchoring guidance system in video-assisted thoracic surgery.

    Science.gov (United States)

    Giaccone, Agnese; Solli, Piergiorgio; Bertolaccini, Luca

    2017-01-01

    The magnetic anchoring guidance system (MAGS) is one of the most promising technological innovations in minimally invasive surgery and consists of two magnetic elements coupled through the abdominal or thoracic wall. The internal magnet can be inserted into the abdominal or chest cavity through a small single incision and then moved into position by manipulating the external component. In addition to a video camera system, the inner magnetic platform can house remotely controlled surgical tools, thus reducing instrument fencing, a serious inconvenience of the uniportal access. The latest prototypes are equipped with self-light-emitting diode (LED) illumination and a wireless antenna for signal transmission and device control, which allows bypassing the obstacle of wires crossing the field of view (FOV). Despite being originally designed for laparoscopic surgery, the MAGS seems to suit the characteristics of the chest wall optimally and might meet the specific demands of video-assisted thoracic surgery (VATS) in terms of ergonomics, visualization and surgical performance; moreover, it involves fewer risks for the patient and an improved aesthetic outcome.

  11. An Automatic Multimedia Content Summarization System for Video Recommendation

    Science.gov (United States)

    Yang, Jie Chi; Huang, Yi Ting; Tsai, Chi Cheng; Chung, Ching I.; Wu, Yu Chieh

    2009-01-01

    In recent years, using video as a learning resource has received a lot of attention and has been successfully applied to many learning activities. In comparison with text-based learning, video learning integrates more multimedia resources, which usually motivate learners more than texts. However, one of the major limitations of video learning is…

  12. Three-dimensional video presentation of microsurgery by the cross-eyed viewing method using a high-definition video system.

    Science.gov (United States)

    Terakawa, Yuzo; Ishibashi, Kenichi; Goto, Takeo; Ohata, Kenji

    2011-01-01

    Three-dimensional (3-D) video recording of microsurgery is a more promising tool for presentation and education of microsurgery than conventional two-dimensional video systems, but has not been widely adopted partly because 3-D image processing of previous 3-D video systems is complicated and observers without optical devices cannot visualize the 3-D image. A new technical development for 3-D video presentation of microsurgery is described. Microsurgery is recorded with a microscope equipped with a single high-definition (HD) video camera. This 3-D video system records the right- and left-eye views of the microscope simultaneously as single HD data with the use of a 3-D camera adapter: the right- and left-eye views of the microscope are displayed separately on the right and left sides, respectively. The operation video is then edited with video editing software so that the right-eye view is displayed on the left side and left-eye view is displayed on the right side. Consequently, a 3-D video of microsurgery can be created by viewing the edited video by the cross-eyed stereogram viewing method without optical devices. The 3-D microsurgical video provides a more accurate view, especially with regard to depth, and a better understanding of microsurgical anatomy. Although several issues are yet to be addressed, this 3-D video system is a useful method of recording and presenting microsurgery for 3-D viewing with currently available equipment, without optical devices.

  13. A Miniaturized Video System for Monitoring Drosophila Behavior

    Science.gov (United States)

    Bhattacharya, Sharmila; Inan, Omer; Kovacs, Gregory; Etemadi, Mozziyar; Sanchez, Max; Marcu, Oana

    2011-01-01

    populations in terrestrial experiments, and could be especially useful in field experiments in remote locations. Two practical limitations of the system should be noted: first, only walking flies can be observed - not flying - and second, although it enables population studies, tracking individual flies within the population is not currently possible. The system used video recording and an analog circuit to extract the average light changes as a function of time. Flies were held in a 5-cm diameter Petri dish and illuminated from below by a uniform light source. A miniature, monochrome CMOS (complementary metal-oxide semiconductor) video camera imaged the flies. This camera had automatic gain control, and this did not affect system performance. The camera was positioned 5-7 cm above the Petri dish such that the imaging area was 2.25 sq cm. With this basic setup, still images and continuous video of 15 flies at one time were obtained. To reduce the required data bandwidth by several orders of magnitude, a band-pass filter (0.3-10 Hz) circuit compressed the video signal and extracted changes in image luminance over time. The raw activity signal output of this circuit was recorded on a computer and digitally processed to extract the fly movement "events" from the waveform. These events corresponded to flies entering and leaving the image and were used for extracting activity parameters such as inter-event duration. The efficacy of the system in quantifying locomotor activity was evaluated by varying environmental temperature, then measuring the activity level of the flies.
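    The analog band-pass stage described above has a straightforward software analogue: filter the per-frame mean luminance to 0.3-10 Hz and count threshold crossings as activity events. A hedged SciPy sketch follows; the filter order and the threshold rule are assumptions, not the circuit's specification.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def activity_events(mean_luminance, fs, low=0.3, high=10.0, k=3.0):
    """Software analogue of the analog band-pass circuit: filter the average
    frame luminance to 0.3-10 Hz and flag 'events' (flies entering/leaving the
    field of view) where the filtered signal exceeds a simple threshold.

    mean_luminance: 1-D array, one sample per video frame.
    fs: sampling rate in Hz (the video frame rate; must exceed 2*high).
    """
    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
    activity = filtfilt(b, a, mean_luminance)
    thr = k * np.std(activity)
    above = np.abs(activity) > thr
    # event onsets = rising edges of the thresholded signal
    onsets = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return activity, onsets
```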

  14. Research of real-time video processing system based on 6678 multi-core DSP

    Science.gov (United States)

    Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang

    2017-10-01

    In the information age, video processing is developing rapidly in the direction of intelligence, and complex algorithms place heavy demands on processor performance. In this article, an FPGA + TMS320C6678 architecture integrates image defogging, image stabilization and image enhancement into one system with good real-time behaviour and high performance. It overcomes the defects of traditional video processing systems, namely simple functionality and single-purpose products, and addresses video applications such as security monitoring. It can give full play to the effectiveness of video surveillance and improve enterprise economic benefits.

  15. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Directory of Open Access Journals (Sweden)

    Chang Yao-Jen

    2002-01-01

    Full Text Available This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between H.323 LAN (local-area network) and H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
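    The chroma-key step mentioned in the abstract can be illustrated with a simple HSV mask; the bounds below assume a green backdrop and are not the authors' values, and the mosaic-based tracking itself is not reproduced.

```python
import cv2
import numpy as np

def chroma_key_object(frame, lower_hsv=(35, 80, 80), upper_hsv=(85, 255, 255)):
    """Extract a foreground video object shot in front of a green screen.
    Returns the object and a binary alpha mask for compositing into the
    virtual 3D meeting environment."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    background = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    alpha = cv2.bitwise_not(background)                      # 255 = object
    alpha = cv2.morphologyEx(alpha, cv2.MORPH_OPEN,
                             np.ones((5, 5), np.uint8))      # clean small noise
    obj = cv2.bitwise_and(frame, frame, mask=alpha)
    return obj, alpha

def composite(obj, alpha, background):
    """Place the extracted object over a virtual background of the same size."""
    a = (alpha.astype(np.float32) / 255.0)[..., None]
    return (a * obj + (1.0 - a) * background).astype(np.uint8)
```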

  16. ON USING VIDEO CONFERENCE SYSTEMS IN HIGHER SCHOOL FOR DISTANCE LEARNING

    Directory of Open Access Journals (Sweden)

    Д В Сенашенко

    2015-12-01

    Full Text Available The article describes distance learning systems used in world practice. The author gives a classification of video communication systems. Aspects of using Skype software in the Russian Federation are discussed. In conclusion, the author provides a review of modern production video conference systems used as tools for distance learning.

  17. 77 FR 75188 - Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same...

    Science.gov (United States)

    2012-12-19

    ... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... certain video analytics software, systems, components thereof, and products containing same by reason of..., Inc. The remaining respondents are Bosch Security Systems, Inc.; Robert Bosch GmbH; Bosch...

  18. 77 FR 808 - Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same...

    Science.gov (United States)

    2012-01-06

    ... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... States after importation of certain video analytics software, systems, components thereof, and products...; Bosch Security Systems, Inc. of Fairpoint, New York; Samsung Techwin Co., Ltd. of Seoul, Korea; Samsung...

  19. An Indoor Video Surveillance System with Intelligent Fall Detection Capability

    Directory of Open Access Journals (Sweden)

    Ming-Chih Chen

    2013-01-01

    Full Text Available This work presents a novel indoor video surveillance system capable of detecting human falls and of detecting and evaluating human posture. To evaluate human movements, a background model is built using the codebook method, and the possible positions of moving objects are extracted using background and shadow elimination. Because foreground extraction introduces noise and damage into the image, morphological and size filters are applied to eliminate the noise and repair the damage. Once the human object is extracted, changes in posture are evaluated from the aspect ratio and height of the body. When a posture change is detected, the system extracts the histogram of the object projection to represent its appearance; this histogram becomes the input vector of a K-Nearest Neighbor (K-NN) classifier used to evaluate the posture of the object. By accurately distinguishing different postures of a human, the proposed system increases fall detection accuracy. Importantly, the method detects posture using the frame aspect ratio and the displacement of height in the image. Experimental results demonstrate that the proposed system improves overall performance and fall identification accuracy.
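    A toy version of the aspect-ratio and height-drop posture test described above is sketched below; the thresholds are illustrative, and the codebook background model and the K-NN classifier on projection histograms are not reproduced.

```python
def is_fall(prev_box, cur_box, ratio_thresh=1.2, drop_thresh=0.4):
    """Flag a fall when the tracked person's bounding box becomes wider than
    tall (width/height above ratio_thresh) and its height drops sharply
    between frames. Thresholds are illustrative, not the paper's values.

    prev_box, cur_box: (x, y, w, h) bounding boxes of the tracked person.
    """
    _, _, w_prev, h_prev = prev_box
    _, _, w_cur, h_cur = cur_box
    aspect = w_cur / max(h_cur, 1)
    height_drop = (h_prev - h_cur) / max(h_prev, 1)
    return aspect > ratio_thresh and height_drop > drop_thresh

# Example: an upright person (60x180) collapsing to a lying posture (170x60)
print(is_fall((100, 40, 60, 180), (90, 150, 170, 60)))   # -> True
```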

  20. Rapid prototyping of an automated video surveillance system: a hardware-software co-design approach

    Science.gov (United States)

    Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.

    2011-06-01

    FPGA devices with embedded DSP and memory blocks and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features for a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other non-time-critical tasks are achieved by executing a high-level language program on an embedded Nios-II processor. Pre-tested and verified video and interface functions from a standard video framework are utilized to significantly reduce development and verification time. Custom and parallel processing modules are integrated into the video processing chain through Altera's Avalon Streaming video protocol. Other data control interfaces are achieved by connecting hardware controllers to the Nios-II processor using Altera's Avalon Memory-Mapped protocol.

  1. [Developing a high quality video transmission system based on mobile network].

    Science.gov (United States)

    Wen, Ming; Hu, Haibo

    2014-01-01

    To meet the demands of high-definition video and real-time transmission during endoscopic surgery, this paper presents the design of an HD mobile video transmission system. The system uses H.264/AVC to encode the original video data and transports it over the network with the RTP/RTCP protocols. The system achieves stable video transmission to portable terminals (such as tablet PCs and mobile phones) over a 3G mobile network. Test results verify the system's strong error-recovery ability and stability under conditions of low bandwidth, high packet loss rate and high delay, and show its high practical value.

  2. Intersegmental eye-head-body interactions during complex whole body movements

    National Research Council Canada - National Science Library

    von Laßberg, Christoph; Beykirch, Karl A; Mohler, Betty J; Bülthoff, Heinrich H

    2014-01-01

    ...; wireless electromyography; and a specialized wireless sport-video-oculography system, which was able to capture and calculate precise oculomotor data under conditions of rapid multiaxial acceleration...

  3. High-speed digital video tracking system for generic applications

    Science.gov (United States)

    Walton, James S.; Hallamasek, Karen G.

    2001-04-01

    The value of high-speed imaging for making subjective assessments is widely recognized, but the inability to acquire useful data from image sequences in a timely fashion has severely limited the use of the technology. 4DVideo has created a foundation for a generic instrument that can capture kinematic data from high-speed images. The new system has been designed to acquire (1) two-dimensional trajectories of points; (2) three-dimensional kinematics of structures or linked rigid-bodies; and (3) morphological reconstructions of boundaries. The system has been designed to work with an unlimited number of cameras configured as nodes in a network, with each camera able to acquire images at 1000 frames per second (fps) or better, with a spatial resolution of 512 X 512 or better, and an 8-bit gray scale. However, less demanding configurations are anticipated. The critical technology is contained in the custom hardware that services the cameras. This hardware optimizes the amount of information stored, and maximizes the available bandwidth. The system identifies targets using an algorithm implemented in hardware. When complete, the system software will provide all of the functionality required to capture and process video data from multiple perspectives. Thereafter it will extract, edit and analyze the motions of finite targets and boundaries.

  4. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    DEFF Research Database (Denmark)

    Allin, Thomas Højgaard; Neubert, Torsten; Laursen, Steen

    2003-01-01

    at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over

  5. A video surveillance system designed to detect multiple falls

    Directory of Open Access Journals (Sweden)

    Ming-Chih Chen

    2016-04-01

    Full Text Available This work presents a fall detection system based on image processing technology. The system can detect falls by various humans via analysis of video frames. First, the system utilizes a Gaussian mixture background model to generate information about the background, and noise and shadows are eliminated to extract the possible positions of moving objects. Because extraction of the foreground image introduces noise and damage, morphological and size filters are utilized to eliminate this noise and repair the damage to the image. Extraction of the foreground image yields the locations of human heads in the image, and the median point, height, and aspect ratio of the people in the image are calculated. These characteristics are utilized to trace objects, and changes in the characteristics of objects across consecutive images are used to determine whether persons have entered or left the scene. The fall detection method uses the height and aspect ratio of the human body, analyzes images in which one person overlaps with another, and detects whether a human has fallen or not. Experimental results demonstrate that the proposed method can efficiently detect falls by multiple persons.

  6. Morphofunctional evaluation of end-to-side neurorrhaphy through video system magnification.

    Science.gov (United States)

    de Barros, Rui Sergio Monteiro; Brito, Marcus Vinicius Henriques; de Brito, Marcelo Houat; de Aguiar Lédo Coutinho, Jean Vitor; Teixeira, Renan Kleber Costa; Yamaki, Vitor Nagai; da Silva Costa, Felipe Lobato; Somensi, Danusa Neves

    2018-01-01

    The surgical microscope is an essential tool for microsurgery. Nonetheless, several promising alternatives are being developed, including endoscopes and laparoscopes with video systems. However, these alternatives have only been used for arterial anastomoses so far. The aim of this study was to evaluate the use of a low-cost video-assisted magnification system in end-to-side neurorrhaphy in rats. Forty rats were randomly divided into four matched groups: (1) normality (sciatic nerve was exposed but was kept intact); (2) denervation (fibular nerve was sectioned, and the proximal and distal stumps were sutured-transection without repair); (3) microscope; and (4) video system (fibular nerve was sectioned; the proximal stump was buried inside the adjacent musculature, and the distal stump was sutured to the tibial nerve). Microsurgical procedures were performed with guidance from a microscope or video system. We analyzed weight, nerve caliber, number of stitches, times required to perform the neurorrhaphy, muscle mass, peroneal functional indices, latency and amplitude, and numbers of axons. There were no significant differences in weight, nerve caliber, number of stitches, muscle mass, peroneal functional indices, or latency between microscope and video system groups. Neurorrhaphy took longer using the video system (P microscope group than in the video group. It is possible to perform an end-to-side neurorrhaphy in rats through video system magnification. The success rate is satisfactory and comparable with that of procedures performed under surgical microscopes. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Investigation of the performance of video analytics systems with compressed video using the i-LIDS sterile zone dataset

    Science.gov (United States)

    Mahendrarajah, Prashath

    2011-11-01

    Recent years have seen significant investment in, and increasingly effective use of, Video Analytics (VA) systems to detect intrusion or attacks in sterile areas. Currently there are a number of manufacturers who have achieved the Imagery Library for Intelligent Detection System (i-LIDS) primary detection classification performance standard for the sterile zone detection scenario. These manufacturers have demonstrated the performance of their systems under evaluation conditions using uncompressed evaluation video. In this paper we consider the effect on the detection rate of an i-LIDS primary approved sterile zone system when compressed sterile zone scenario video clips are used as the input. The preliminary test results demonstrate a change in detection timing with compression, as the time to alarm increased with greater compression. Initial experiments suggest that the detection performance does not degrade linearly as a function of compression ratio. These experiments form a starting point for a wider set of planned trials that the Home Office will carry out over the next 12 months.

  8. 78 FR 41950 - Certain Video Game Systems and Wireless Controllers and Components Thereof; Commission...

    Science.gov (United States)

    2013-07-12

    ..., ``Nintendo''). The products accused of infringing the asserted patents are gaming systems and related... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Wireless Controllers and Components Thereof; Commission...

  9. Panoramic Stereoscopic Video System for Remote-Controlled Robotic Space Operations Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this project, the development of a novel panoramic, stereoscopic video system was proposed. The proposed system, which contains no moving parts, uses three-fixed...

  10. Real-time Digital Video Watermark Embedding System based on Software in Commodity PC

    Science.gov (United States)

    Yamada, Takaaki; Echizen, Isao; Tezuka, Satoru; Yoshiura, Hiroshi

    Emerging broadband networks and the high performance of PCs provide new business opportunities for live video streaming services to Internet users at sporting events or music concerts. Digital watermarking for video helps to protect the copyright of the video content, and real-time processing is an essential requirement. For a small-scale start of a new business, this should be achieved with flexible software and without special equipment. This paper describes a novel real-time watermarking system implemented on a commodity PC. We propose the system architecture and methods to shorten watermarking time by reusing the estimated watermark imperceptibility among neighboring frames. A prototype system enables real-time processing in a pipeline of capturing NTSC signals, watermarking the video, encoding it to MPEG-4 at QVGA resolution, 1 Mbps and 30 fps, and storing the video for up to 12 hours.

  11. A novel video recommendation system based on efficient retrieval of human actions

    Science.gov (United States)

    Ramezani, Mohsen; Yaghmaee, Farzin

    2016-09-01

    In recent years, the fast growth of online video sharing has raised new issues, such as helping users find what they need in an efficient way. Hence, Recommender Systems (RSs) are used to find the users' most favored items. Finding these items relies on item or user similarities, though many factors, such as sparsity and cold-start users, affect recommendation quality. In some systems, attached tags are used for searching items (e.g. videos) for personalized recommendation; different views, incomplete and inaccurate tags, etc. can weaken the performance of these systems. Taking advantage of computer vision techniques can help improve RSs. To this end, content-based search can be used for finding items (here, videos are considered). In such systems, a video is taken from the user in order to find and recommend a list of videos most similar to the query. Since most videos relate to humans, we present a novel, low-complexity, scalable method to recommend videos based on a model of the action they contain. This method draws on human action retrieval approaches. To model human actions, interest points are extracted from each action and their motion information is used to compute the action representation. Moreover, a fuzzy dissimilarity measure is presented to compare videos for ranking them. The experimental results on the HMDB, UCFYT, UCF sport and KTH datasets illustrate that, in most cases, the proposed method can reach better results than the most widely used methods.

  12. Analysis of User Requirements in Interactive 3D Video Systems

    Directory of Open Access Journals (Sweden)

    Haiyue Yuan

    2012-01-01

The recent development of three dimensional (3D) display technologies has resulted in a proliferation of 3D video production and broadcasting, attracting a lot of research into capture, compression and delivery of stereoscopic content. However, the predominant design practice of interactions with 3D video content has failed to address its differences and possibilities in comparison to the existing 2D video interactions. This paper presents a study of user requirements related to interaction with the stereoscopic 3D video. The study suggests that the change of view, zoom in/out, dynamic video browsing, and textual information are the most relevant interactions with stereoscopic 3D video. In addition, we identified a strong demand for object selection that resulted in a follow-up study of user preferences in 3D selection using virtual-hand and ray-casting metaphors. These results indicate that interaction modality affects users' decision of object selection in terms of chosen location in 3D, while user attitudes do not have significant impact. Furthermore, the ray-casting-based interaction modality using Wiimote can outperform the volume-based interaction modality using mouse and keyboard for object positioning accuracy.

  13. 47 CFR 63.02 - Exemptions for extensions of lines and for systems for the delivery of video programming.

    Science.gov (United States)

    2010-10-01

    ... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. ...

  14. 76 FR 23624 - In the Matter of Certain Video Game Systems and Wireless Controllers and Components Thereof...

    Science.gov (United States)

    2011-04-27

    ... COMMISSION In the Matter of Certain Video Game Systems and Wireless Controllers and Components Thereof... importation, and the sale within the United States after importation of certain video game systems and... importation of certain video game systems and wireless controllers and components thereof that infringe one or...

  15. A Review and Comparison of Measures for Automatic Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Baumann Axel

    2008-01-01

Today's video surveillance systems are increasingly equipped with video content analysis for a great variety of applications. However, reliability and robustness of video content analysis algorithms remain an issue. They have to be measured against ground truth data in order to quantify the performance and advancements of new algorithms. Therefore, a variety of measures have been proposed in the literature, but there has neither been a systematic overview nor an evaluation of measures for specific video analysis tasks yet. This paper provides a systematic review of measures and compares their effectiveness for specific aspects, such as segmentation, tracking, and event detection. Focus is drawn on details like normalization issues, robustness, and representativeness. A software framework is introduced for continuously evaluating and documenting the performance of video surveillance systems. Based on many years of experience, a new set of representative measures is proposed as a fundamental part of an evaluation framework.

  16. A Review and Comparison of Measures for Automatic Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Jie Yu

    2008-10-01

Today's video surveillance systems are increasingly equipped with video content analysis for a great variety of applications. However, reliability and robustness of video content analysis algorithms remain an issue. They have to be measured against ground truth data in order to quantify the performance and advancements of new algorithms. Therefore, a variety of measures have been proposed in the literature, but there has neither been a systematic overview nor an evaluation of measures for specific video analysis tasks yet. This paper provides a systematic review of measures and compares their effectiveness for specific aspects, such as segmentation, tracking, and event detection. Focus is drawn on details like normalization issues, robustness, and representativeness. A software framework is introduced for continuously evaluating and documenting the performance of video surveillance systems. Based on many years of experience, a new set of representative measures is proposed as a fundamental part of an evaluation framework.

  17. A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.

    Science.gov (United States)

    Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian

    2016-04-01

Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results: The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable

  18. M-health medical video communication systems: an overview of design approaches and recent advances.

    Science.gov (United States)

    Panayides, A S; Pattichis, M S; Constantinides, A G; Pattichis, C S

    2013-01-01

    The emergence of the new, High Efficiency Video Coding (HEVC) standard, combined with wide deployment of 4G wireless networks, will provide significant support toward the adoption of mobile-health (m-health) medical video communication systems in standard clinical practice. For the first time since the emergence of m-health systems and services, medical video communication systems can be deployed that can rival the standards of in-hospital examinations. In this paper, we provide a thorough overview of today's advancements in the field, discuss existing approaches, and highlight the future trends and objectives.

  19. FPGA-Based Real-Time Motion Detection for Automated Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2016-03-01

Design of automated video surveillance systems is one of the demanding tasks in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including the input camera interface, the designed motion detection VLSI architecture, and the output display interface, with real-time relevant motion detection capabilities, has been implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects relevant motion in real time in live PAL (720 × 576) resolution video streams coming directly from the camera.

  20. Reward system and temporal pole contributions to affective evaluation during a first person shooter video game

    National Research Council Canada - National Science Library

    Mathiak, Krystyna A; Klasen, Martin; Weber, René; Ackermann, Hermann; Shergill, Sukhwinder S; Mathiak, Klaus

    2011-01-01

    .... It was demonstrated that playing a video game leads to striatal dopamine release. It is unclear, however, which aspects of the game cause this reward system activation and if violent content contributes...

  1. A wireless video monitoring system based on 3G communication technology

    Science.gov (United States)

    Xia, Zhen-Hua; Wang, Xiao-Shuang

    2011-11-01

With the rapid development of electronic technology, multimedia technology and mobile communication technology, video monitoring systems are moving in an embedded, digital and wireless direction. In this paper, a solution for a wireless video monitoring system based on WCDMA is proposed. This solution makes full use of the advantages of 3G, namely extensive network coverage and wide bandwidth. It can capture the video stream from the chip's video port, encode the image data in real time with a high-speed DSP, and provide enough bandwidth to transmit the monitoring images over the WCDMA wireless network. The experiments demonstrate that the system offers high stability, good image quality and good transmission performance; in addition, because it adopts wireless transmission, it is not restricted by geographical position and is therefore suitable for sparsely populated, harsh-environment scenarios.

  2. Video-Based Systems Research, Analysis, and Applications Opportunities

    Science.gov (United States)

    1981-07-30


  3. A System based on Adaptive Background Subtraction Approach for Moving Object Detection and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Bahadır KARASULU

    2013-04-01

Video surveillance systems build on the video and image processing research areas within computer science. Video processing covers various methods used to track the changes in a scene over the course of a video, and is nowadays one of the important areas of computer science. Two-dimensional videos are subjected to various segmentation, object detection and tracking processes in applications such as multimedia content-based indexing, information retrieval, visual and distributed cross-camera surveillance systems, people tracking, and traffic tracking. The background subtraction (BS) approach is a frequently used method for moving object detection and tracking, and similar methods exist in the literature. This research study proposes a more efficient method as an addition to the existing ones. Based on a model produced using adaptive background subtraction (ABS), an object detection and tracking system is implemented in software. The performance of the developed system is tested in experimental work with related video datasets. The experimental results and a discussion are given in the study.
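
    As an illustration of background-subtraction-based detection of moving objects, here is a minimal sketch that uses OpenCV's adaptive Gaussian-mixture model (MOG2) as a stand-in for the paper's ABS model; the input file name, thresholds and minimum blob area are hypothetical.

```python
import cv2

cap = cv2.VideoCapture("traffic.avi")          # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25,
                                                detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)               # background model updates adaptively
    fg = cv2.medianBlur(fg, 5)                 # suppress isolated noise pixels
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadow label (127)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:           # ignore small blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```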

  4. REAL-TIME UAV BASED GEOSPATIAL VIDEO INTEGRATED INTO THE FIRE BRIGADES CRISIS MANAGEMENT GIS SYSTEM

    Directory of Open Access Journals (Sweden)

    M. van Persie

    2012-09-01

During a fire incident, live airborne video offers the fire brigade an additional source of information. Essential for the effective use of the daylight and infrared video data from the UAS is that the information is fully integrated into the crisis management system of the fire brigade. This is a GIS-based system in which all relevant geospatial information is brought together and automatically distributed to all levels of the organisation. In the context of the Dutch Fire-Fly project, a geospatial video server was integrated with a UAS and the fire brigade's crisis management system, so that real-time geospatial airborne video and derived products can be made available at all levels during a fire incident. The most important elements of the system are the Delftdynamics Robot Helicopter, the Video Multiplexing System, the Keystone geospatial video server/editor and the Eagle and CCS-M crisis management systems. In discussion with the Security Region North East Gelderland, user requirements and a concept of operations were defined, demonstrated and evaluated. This article describes the technical and operational approach and the results.

  5. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    Science.gov (United States)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

Combine harvesters usually work in sparsely populated areas with a harsh environment. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux is developed. The system uses a USB camera to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. The video data are compressed with the JPEG image compression standard and the monitoring images are transferred to a remote monitoring center over the network for long-range monitoring and management. The paper first explains the motivation for the system, then briefly introduces the hardware and software design, and then details the configuration and compilation of the embedded Linux operating system and the compiling and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In the experimental testing, the remote video monitoring system achieved 30 fps at a resolution of 800x600, and the response delay over the public network was about 40 ms.
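
    A minimal sketch of the capture-compress-transmit loop described above (not the ARM11/embedded Linux code itself); the monitoring-center address, port and JPEG quality are hypothetical, and simple length-prefixed TCP framing stands in for the actual transport.

```python
import socket
import struct
import cv2

SERVER = ("192.0.2.10", 9000)        # hypothetical monitoring-center address

cap = cv2.VideoCapture(0)            # USB camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)

with socket.create_connection(SERVER) as sock:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
        if not ok:
            continue
        payload = jpeg.tobytes()
        # Length-prefixed framing so the receiver knows where each JPEG ends.
        sock.sendall(struct.pack(">I", len(payload)) + payload)
```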

  6. Realization on the interactive remote video conference system based on multi-Agent

    Directory of Open Access Journals (Sweden)

    Zheng Yan

    2016-01-01

To let people at different places participate in the same conference and speak and discuss freely, an interactive remote video conferencing system is designed and realized based on multi-Agent collaboration. FEC (forward error correction) and tree P2P technology are first used to build a live conference structure for transferring audio and video data; a branch conference node can then join the discussion by applying to become the interactive focus; and the introduction of multi-Agent collaboration technology improves the system's robustness. The experiments showed that, under normal network conditions, the system can support 350 branch conference nodes broadcasting live simultaneously, with smooth audio and video quality. It can therefore support large-scale remote video conferences.

  7. ESVD: An Integrated Energy Scalable Framework for Low-Power Video Decoding Systems

    Directory of Open Access Journals (Sweden)

    Wen Ji

    2010-01-01

Video applications on mobile wireless devices are challenging due to the limited capacity of batteries, while the complex functionality of video decoding imposes high resource requirements. Thus, power-efficient control has become a critical design issue for devices integrating complex video processing techniques. Previous work on power-efficient control in video decoding systems often aims at low-complexity design, does not explicitly consider the scalable impact of sub-functions in the decoding process, and seldom considers the relationship with the features of the compressed video data. This paper is dedicated to developing an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources thanks to a device-aware technique. Second, ESVD combines decoder control with the decoded data by classifying the data into different partition profiles according to its characteristics. Third, it introduces utility-theoretic analysis into the resource allocation process, so as to maximize resource utilization. Finally, it adapts to the available energy budget and generates scalable video decoding output on energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.
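
    To illustrate the utility-theoretic flavour of such resource allocation, the following sketch greedily spends an energy budget where the marginal utility is highest; the sub-function names, utility curves and step size are invented for illustration and are not the ESVD model.

```python
import math

def allocate_energy(budget_mj, stages, step_mj=1.0):
    """Greedy allocation: spend each energy increment where marginal utility is highest.

    `stages` maps a decoding sub-function to a diminishing-returns utility
    function of the energy assigned to it (illustrative only).
    """
    alloc = {name: 0.0 for name in stages}
    spent = 0.0
    while spent + step_mj <= budget_mj:
        best = max(stages,
                   key=lambda n: stages[n](alloc[n] + step_mj) - stages[n](alloc[n]))
        alloc[best] += step_mj
        spent += step_mj
    return alloc

stages = {                                   # hypothetical concave utility curves
    "entropy_decode": lambda e: 10 * math.log1p(e),
    "inverse_transform": lambda e: 6 * math.log1p(e),
    "deblocking": lambda e: 3 * math.log1p(e),
}
print(allocate_energy(50.0, stages))
```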

  8. A professional and cost effective digital video editing and image storage system for the operating room.

    Science.gov (United States)

    Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N

    2007-06-01

We propose an easy-to-construct digital video editing system that is ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits videography to be obtained quickly and easily. By mixing different streams of video input from all the devices in use in the operating room and applying filters and effects, a final, professional end-product is produced. Recording on a DVD provides an inexpensive, portable and easy-to-use medium for storing, re-editing or copying to tape at a later time. From the stored videography it is easy to extract high-quality still images useful for teaching, presentations and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest-priced products available today.

  9. Remote video radioactive systems evaluation, Savannah River Site

    Energy Technology Data Exchange (ETDEWEB)

    Heckendorn, F.M.; Robinson, C.W.

    1991-12-31

    Specialized miniature low cost video equipment has been effectively used in a number of remote, radioactive, and contaminated environments at the Savannah River Site (SRS). The equipment and related techniques have reduced the potential for personnel exposure to both radiation and physical hazards. The valuable process information thus provided would not have otherwise been available for use in improving the quality of operation at SRS.

  10. Improving photometric calibration of meteor video camera systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-09-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼ 0.20 mag , and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼ 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
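
    A worked sketch of the zero-point idea: given instrumental fluxes of reference stars and their synthetic EX-bandpass magnitudes, a robust zero-point calibrates the meteor's magnitude directly in the camera bandpass. All numbers below are made up for illustration.

```python
import numpy as np

# Instrumental fluxes (ADU) of reference stars and their synthetic EX-bandpass
# magnitudes (values are invented for illustration).
star_flux = np.array([15200.0, 8300.0, 42100.0, 2600.0])
star_mag_ex = np.array([7.9, 8.6, 6.8, 9.8])

inst_mag = -2.5 * np.log10(star_flux)            # instrumental magnitudes
zero_point = np.median(star_mag_ex - inst_mag)   # median is robust to outliers

meteor_flux = 5400.0                             # measured meteor flux (ADU)
meteor_mag = -2.5 * np.log10(meteor_flux) + zero_point
print(f"zero-point = {zero_point:.2f} mag, meteor = {meteor_mag:.2f} mag")
```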

  11. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ~0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ~0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.

  12. Localization of cask and plug remote handling system in ITER using multiple video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, João, E-mail: jftferreira@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2013-10-15

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building.
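
    One way such marker-based localization can be realised is pose estimation from known marker geometry, sketched below with OpenCV's solvePnP; the marker coordinates, detected pixel positions and camera intrinsics are illustrative assumptions, not ITER values.

```python
import numpy as np
import cv2

# Known 3-D positions (metres) of four fiducial markers in the vehicle frame
# and their detected 2-D image coordinates (pixels); values are illustrative.
object_pts = np.array([[0.0, 0.0, 0.0],
                       [1.2, 0.0, 0.0],
                       [1.2, 0.8, 0.0],
                       [0.0, 0.8, 0.0]], dtype=np.float32)
image_pts = np.array([[412.0, 310.0],
                      [598.0, 304.0],
                      [602.0, 421.0],
                      [409.0, 428.0]], dtype=np.float32)

K = np.array([[900.0, 0.0, 640.0],     # camera intrinsics from prior calibration
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]], dtype=np.float32)
dist = np.zeros(5, dtype=np.float32)   # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)             # rotation of marker frame w.r.t. camera
print("marker frame origin in camera coordinates:", tvec.ravel())
```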

  13. Optimal use of video for teaching the practical implications of studying business information systems

    DEFF Research Database (Denmark)

    Fog, Benedikte; Ulfkjær, Jacob Kanneworff Stigsen; Schlichter, Bjarne Rerup

The study of business information systems has become increasingly important in the Digital Economy. However, it has been found that students have difficulties understanding the practical implications thereof, and this leads to motivational decreases. This study aims to investigate how to optimize... the use of video to increase comprehension of the practical implications of studying business information systems. This qualitative study is based on observations and focus group interviews with first-semester business students. The findings suggest that the video examined in the case study did not sufficiently reflect the theoretical recommendations for using video optimally in a management education. It did not comply with the video learning sequence introduced by Marx and Frost (1998). However, it questions whether the level of cognitive orientation activities can become too extensive. It finds...

  14. Network-aware scalable video monitoring system for emergency situations with operator-managed fidelity control

    Science.gov (United States)

    Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos

    2014-05-01

In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aid effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standards-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a 'fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery, with the most reliable routes being used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted, they will be assigned to routes in descending order of reliability. The third tier
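
    A toy sketch of how the operational tiers could map onto requested H.264/SVC layers; the record is cut off before describing the third tier, so the full-quality assignment shown for it is an assumption, as are all the layer indices.

```python
from dataclasses import dataclass

@dataclass
class LayerRequest:
    spatial: int    # dependency_id
    temporal: int   # temporal_id
    quality: int    # quality_id

def layers_for_tier(tier: int) -> LayerRequest:
    """Illustrative tier-to-layer policy, not the paper's exact parameters."""
    if tier == 1:      # overview mode: base layer only, low spatial/temporal resolution
        return LayerRequest(spatial=0, temporal=0, quality=0)
    if tier == 2:      # focused feed: add SNR (quality) layers, keep a low frame rate
        return LayerRequest(spatial=0, temporal=0, quality=2)
    # tier 3 is truncated in the record; full quality is assumed here for illustration
    return LayerRequest(spatial=1, temporal=2, quality=2)

print(layers_for_tier(2))
```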

  15. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  16. Data Reduction and Control Software for Meteor Observing Stations Based on CCD Video Systems

    Science.gov (United States)

    Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.

    2011-01-01

    The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.

  17. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Directory of Open Access Journals (Sweden)

    Chen Homer H

    2007-01-01

The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.

  18. Learning neuroendoscopy with an exoscope system (video telescopic operating monitor): Early clinical results.

    Science.gov (United States)

    Parihar, Vijay; Yadav, Y R; Kher, Yatin; Ratre, Shailendra; Sethi, Ashish; Sharma, Dhananjaya

    2016-01-01

A steep learning curve is found initially in pure endoscopic procedures. The video telescopic operating monitor (VITOM), an advance in rigid-lens telescope systems, provides an alternative method for learning the basics of neuroendoscopy with the help of the familiar principles of microneurosurgery. The aim was to evaluate the clinical utility of VITOM as a learning tool for neuroendoscopy. VITOM was used in 39 cranial and spinal procedures and its utility as a tool for minimally invasive neurosurgery and neuroendoscopy during the initial learning curve was studied. VITOM was used in 25 cranial and 14 spinal procedures. Image quality was comparable to the endoscope and microscope, and surgeons' comfort improved with VITOM. Frequent repositioning of the scope holder and the lack of stereopsis were initial limiting factors that were compensated for with repeated procedures. VITOM was found useful for reducing the initial learning curve of neuroendoscopy.

  19. Evaluation of Distance Education System for Adult Education Using 4 Video Transmissions

    OpenAIRE

    渡部, 和雄; 湯瀬, 裕昭; 渡邉, 貴之; 井口, 真彦; 藤田, 広一

    2004-01-01

    The authors have developed a distance education system for interactive education which can transmit 4 video streams between distant lecture rooms. In this paper, we describe the results of our experiments using the system for adult education. We propose some efficient ways to use the system for adult education.

  20. A practical implementation of free viewpoint video system for soccer games

    Science.gov (United States)

    Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki

    2015-03-01

In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is in high demand, but a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. A possible scenario is that soccer games played during the day can be broadcast in 3-D in the evening of the same day. Our work is still ongoing; however, we have already developed several techniques to support our goal. First, we captured an actual soccer game at an official stadium using 20 full-HD professional cameras. Second, we have implemented several tools for free viewpoint video generation, as follows. To facilitate free viewpoint video generation, all cameras must be calibrated; we calibrated all cameras using checkerboard images and feature points on the field (cross points of the soccer field lines). Each player region is extracted from the captured images manually, while the background region is estimated automatically by observing chrominance changes of each pixel in the temporal domain. Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercial TV sets but also for devices such as smartphones. However, a practical system has not yet been completed and our study is still ongoing.
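
    The checkerboard part of the calibration step can be sketched with standard OpenCV calls as below (refinement using the field-line feature points is not shown); the pattern size, image paths and termination criteria are assumptions.

```python
import glob
import numpy as np
import cv2

pattern = (9, 6)                       # inner corners of the checkerboard (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # square size = 1 unit

obj_points, img_points = [], []
for path in glob.glob("calib/cam01_*.png"):        # hypothetical calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                               (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                 gray.shape[::-1], None, None)
print("reprojection RMS (pixels):", rms)
```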

  1. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

Video data require a very large memory capacity, and the need to transfer large amounts of video over various networks makes the trade-off between quality and bit rate one of the most pressing problems in video encoding. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively lowering the bit rate required for transmission and storage. It is important to take into account the uncertainties introduced by compression of the video signal when television measuring systems are used. Many digital compression methods exist; the aim of the proposed work is to study the influence of video compression on the measurement error in television systems. The measurement error of an object parameter is the main characteristic of television measuring systems: accuracy characterizes the difference between the measured value and the actual parameter value. In television measurements, the optical system is one source of error and the processing method applied to the received video signal is another. With compression at a constant data rate, errors lead to large distortions; with compression at constant quality, errors increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image, redundancy caused by the strong correlation between neighboring image elements. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are not correlated with each other; entropy coding can then be applied to these uncorrelated coefficients to reduce the digital stream. A transformation can be chosen such that, for typical images, most of the matrix coefficients are almost zero, and excluding these zero coefficients also
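
    A small worked example of the decorrelation argument: applying an 8x8 orthogonal DCT to a smooth block leaves only a handful of non-negligible coefficients, which is what intra-frame compression exploits. The block content and the significance threshold are synthetic choices for illustration.

```python
import numpy as np
import cv2

# A smooth 8x8 luminance block (a gentle horizontal gradient), typical of natural images.
block = np.tile(np.linspace(100, 140, 8, dtype=np.float32), (8, 1))

coeffs = cv2.dct(block)                       # orthogonal 2-D DCT
significant = np.abs(coeffs) > 1.0            # coefficients worth transmitting
print("non-negligible coefficients:", int(significant.sum()), "of 64")
# Most energy sits in a few low-frequency terms; the near-zero rest can be dropped
# or coarsely quantized, which is where the intra-frame compression gain comes from.
```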

  2. Video Feedforward for Rapid Learning of a Picture-Based Communication System

    Science.gov (United States)

    Smith, Jemma; Hand, Linda; Dowrick, Peter W.

    2014-01-01

    This study examined the efficacy of video self modeling (VSM) using feedforward, to teach various goals of a picture exchange communication system (PECS). The participants were two boys with autism and one man with Down syndrome. All three participants were non-verbal with no current functional system of communication; the two children had long…

  3. Video-rate two-photon excited fluorescence lifetime imaging system with interleaved digitization.

    Science.gov (United States)

    Dow, Ximeng Y; Sullivan, Shane Z; Muir, Ryan D; Simpson, Garth J

    2015-07-15

    A fast (up to video rate) two-photon excited fluorescence lifetime imaging system based on interleaved digitization is demonstrated. The system is compatible with existing beam-scanning microscopes with minor electronics and software modification. Proof-of-concept demonstrations were performed using laser dyes and biological tissue.

  4. Optical tracking of embryonic vertebrates behavioural responses using automated time-resolved video-microscopy system

    Science.gov (United States)

    Walpitagama, Milanga; Kaslin, Jan; Nugegoda, Dayanthi; Wlodkowic, Donald

    2016-12-01

The fish embryo toxicity (FET) biotest performed on embryos of zebrafish (Danio rerio) has gained significant popularity as a rapid and inexpensive alternative approach in chemical hazard and risk assessment. The FET was designed to evaluate acute toxicity on embryonic stages of fish exposed to the test chemical. The current standard, however, like most traditional methods for evaluating aquatic toxicity, provides little understanding of the effects of environmentally relevant concentrations of chemical stressors. We postulate that significant environmental effects such as altered motor functions, physiological alterations reflected in heart rate, and effects on development and reproduction can occur at sub-lethal concentrations well below LC10. Behavioral studies can, therefore, provide a valuable integrative link between physiological and ecological effects. Despite the advantages of behavioral analysis, the development of behavioral toxicity biotests is greatly hampered by the lack of dedicated laboratory automation, in particular user-friendly and automated video microscopy systems. In this work we present a proof-of-concept development of an optical system capable of tracking the behavioural responses of embryonic vertebrates using automated and vastly miniaturized time-resolved video-microscopy. We have employed miniaturized CMOS cameras to perform high-definition video recording and analysis of the earliest vertebrate behavioral responses. The main objectives were to develop biocompatible embryo positioning structures suitable for high-throughput imaging as well as video capture and video analysis algorithms. This system should support the development of sub-lethal and behavioral markers for accelerated environmental monitoring.

  5. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    Science.gov (United States)

    Allin, T.; Neubert, T.; Laursen, S.; Rasmussen, I. L.; Soula, S.

    2003-12-01

In support of global ELF/VLF observations, HF measurements in France, and conjugate photometry/VLF observations in South Africa, we developed and operated a semi-automatic, remotely controlled video system for the observation of middle-atmospheric transient luminous events (TLEs). Installed at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over serial links from a local computer, and the video outputs were distributed to a pair of PCI frame grabbers in the computer. This setup allowed remote users to log in and operate the system over the internet. Event detection software provided a means of recording and time-stamping single TLE video fields and thus eliminated the need for continuous human monitoring of TLE activity. The computer recorded and analyzed two parallel video streams at the full 50 Hz field rate, while uploading status images, TLE images, and system logs to a remote web server. The system detected more than 130 TLEs - mostly sprites - distributed over 9 active evenings. We have thus demonstrated the feasibility of remote agents for TLE observations, which are likely to find use in future ground-based TLE observation campaigns, or to be installed at remote sites in support of space-borne or other global TLE observation efforts.

  6. A unified framework for capturing facial images in video surveillance systems using cooperative camera system

    Science.gov (United States)

    Chan, Fai; Moon, Yiu-Sang; Chen, Jiansheng; Ma, Yiu-Kwan; Tsang, Wai-Hung; Fu, Kah-Kuen

    2008-04-01

Low-resolution, unsharp facial images are often all that surveillance videos capture, because of the long human-camera distance and human movement. Previous works addressed this problem by using an active camera to capture close-up facial images, but without considering human movement and the mechanical delays of the active camera. In this paper, we propose a unified framework to capture facial images in video surveillance systems by using one static and one active camera in a cooperative manner. Human faces are first located by a skin-color-based real-time face detection algorithm. A stereo camera model is also employed to approximate the face location of a person and his/her velocity with respect to the active camera. Given the mechanical delays of the active camera, the position of a target face after a given delay can be estimated using a Human-Camera Synchronization Model. By controlling the active camera with the corresponding amounts of pan, tilt, and zoom, a clear close-up facial image of a moving person can then be captured. We built the proposed system in an 8.4-meter indoor corridor. Results show that the proposed stereo camera configuration can locate faces with an average error of 3%. In addition, it is capable of capturing clear facial images of a walking person at the first attempt in 90% of the test cases.
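
    The core of such a synchronization model can be sketched as "predict where the face will be after the camera's mechanical delay, then aim there"; the geometry below is a simplified illustration with hypothetical numbers, not the paper's model.

```python
import math

def aim_active_camera(pos_m, vel_mps, delay_s, cam_height_m=2.5, head_height_m=1.7):
    """Predict the target position after the pan/tilt delay and return pan/tilt angles.

    pos_m, vel_mps: (x, y) ground-plane position and velocity of the person,
    in metres relative to the active camera (illustrative model).
    """
    px = pos_m[0] + vel_mps[0] * delay_s          # position extrapolated over the delay
    py = pos_m[1] + vel_mps[1] * delay_s
    pan = math.degrees(math.atan2(px, py))        # left/right pointing angle
    ground_range = math.hypot(px, py)
    tilt = -math.degrees(math.atan2(cam_height_m - head_height_m, ground_range))
    return pan, tilt

print(aim_active_camera(pos_m=(1.0, 6.0), vel_mps=(0.0, -1.2), delay_s=0.4))
```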

  7. Exploring Online Learning at Primary Schools: Students' Perspectives on Cyber Home Learning System through Video Conferencing (CHLS-VC)

    Science.gov (United States)

    Lee, June; Yoon, Seo Young; Lee, Chung Hyun

    2013-01-01

    The purposes of the study are to investigate CHLS (Cyber Home Learning System) in online video conferencing environment in primary school level and to explore the students' responses on CHLS-VC (Cyber Home Learning System through Video Conferencing) in order to explore the possibility of using CHLS-VC as a supportive online learning system. The…

  8. VIPER: a general-purpose digital image-processing system applied to video microscopy.

    Science.gov (United States)

    Brunner, M; Ittner, W

    1988-01-01

This paper describes VIPER, the video image-processing system Erlangen. It consists of a general-purpose microcomputer, commercially available image-processing hardware modules connected directly to the computer, video input/output modules such as a TV camera, video recorders and monitors, and a software package. The modular structure and the capabilities of this system are explained. The software is user-friendly, menu-driven and performs image acquisition, transfers, greyscale processing, arithmetic, logical operations, filtering, display, colour assignment, graphics, and a number of management functions. More than 100 image-processing functions are implemented. They are available either by pressing a key or through a simple call to the function-subroutine library in application programs. Examples are supplied from the area of biomedical research, e.g. in-vivo microscopy.

  9. A Study of Vehicle Detection and Counting System Based on Video

    Directory of Open Access Journals (Sweden)

    Shuang XU

    2014-10-01

This paper studies a video-based vehicle detection and counting system, which provides vehicle detection, image processing of vehicle targets, and vehicle counting functions. Vehicle detection uses the inter-frame difference method together with vehicle shadow segmentation techniques. The image processing functions apply grayscale conversion of the color image, image segmentation, mathematical morphology analysis and image filling to the detected targets, after which the target vehicle is extracted. The counting function counts the detected vehicles. The system uses the inter-frame video difference method to detect vehicles and completes the counting function with the method of adding a frame to the vehicle and boundary comparison, achieving a high recognition rate, fast operation and easy use. The purpose of this paper is to raise the level of modernization and automation in traffic management. This study can provide a reference for the future development of related applications.
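
    A minimal sketch of inter-frame differencing with a virtual counting line (a simplified stand-in for the authors' frame-adding and boundary-comparison method); the input file, thresholds and line position are hypothetical.

```python
import numpy as np
import cv2

cap = cv2.VideoCapture("road.avi")            # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

count = 0
counted_recently = False
LINE_Y = 300                                  # virtual counting line (pixels)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)       # inter-frame difference
    prev_gray = gray
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    hit = False
    for c in contours:
        if cv2.contourArea(c) < 1500:         # ignore noise and shadow fragments
            continue
        x, y, w, h = cv2.boundingRect(c)
        if y < LINE_Y < y + h:                # blob straddles the counting line
            hit = True
    if hit and not counted_recently:          # crude debouncing of each crossing
        count += 1
    counted_recently = hit

print("vehicles counted:", count)
```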

  10. The everyday lives of video game developers: Experimentally understanding underlying systems/structures

    Directory of Open Access Journals (Sweden)

    Casey O'Donnell

    2009-03-01

This essay examines how tensions between work and play for video game developers shape the worlds they create. The worlds of game developers, whose daily activity is linked to larger systems of experimentation and technoscientific practice, provide insights that transcend video game development work. The essay draws on ethnographic material from over 3 years of fieldwork with video game developers in the United States and India. It develops the notion of creative collaborative practice based on work in the fields of science and technology studies, game studies, and media studies. The importance of, the desire for, or the drive to understand underlying systems and structures has become fundamental to creative collaborative practice. I argue that the daily activity of game development embodies skills fundamental to creative collaborative practice and that these capabilities represent fundamental aspects of critical thought. Simultaneously, numerous interests have begun to intervene in ways that endanger these foundations of creative collaborative practice.

  11. APPLICATION OF BINARY DESCRIPTORS TO MULTIPLE FACE TRACKING IN VIDEO SURVEILLANCE SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. L. Oleinik

    2016-07-01

Subject of Research. The paper deals with the problem of multiple face tracking in a video stream. The primary application of the implemented tracking system is automatic video surveillance. The particular operating conditions of surveillance cameras are taken into account in order to increase the efficiency of the system in comparison to existing general-purpose analogs. Method. The developed system comprises two subsystems: a detector and a tracker. The tracking subsystem does not depend on the detector, so various face detection methods can be used. Furthermore, only a small portion of the frames is processed by the detector in this structure, substantially improving the processing rate. The tracking algorithm is based on BRIEF binary descriptors, which are computed very efficiently on modern processor architectures. Main Results. The system is implemented in C++ and experiments on processing rate and tracking quality were carried out, with the MOTA and MOTP metrics used to measure tracking quality. The experiments demonstrated a four-fold processing rate gain in comparison to the baseline implementation that processes every video frame with the detector, while the tracking quality remained at an adequate level compared to the baseline. Practical Relevance. The developed system can be used with various face detectors (including slow ones) to create a fully functional high-speed multiple face tracking solution. The algorithm is easy to implement and optimize, so it may be applied not only in full-scale video surveillance systems, but also in embedded solutions integrated directly into cameras.
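
    To illustrate the detector-independent tracking idea, the sketch below re-associates a face between detector runs using binary descriptors matched by Hamming distance; OpenCV's ORB (a BRIEF-style binary descriptor) is used as a stand-in for the paper's C++ implementation, and the thresholds are assumptions.

```python
import cv2

orb = cv2.ORB_create(nfeatures=200)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def describe(gray, box):
    """Compute binary descriptors inside a face bounding box (x, y, w, h)."""
    x, y, w, h = box
    roi = gray[y:y + h, x:x + w]
    return orb.detectAndCompute(roi, None)

def still_same_face(prev_desc, gray, box, min_matches=12):
    """Decide whether the face in `box` matches previously stored descriptors."""
    _, desc = describe(gray, box)
    if desc is None or prev_desc is None:
        return False
    matches = matcher.match(prev_desc, desc)
    good = [m for m in matches if m.distance < 40]   # Hamming-distance gate
    return len(good) >= min_matches
```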

  12. Comparative study of motion detection methods for video surveillance systems

    Science.gov (United States)

    Sehairi, Kamal; Chouireb, Fatima; Meunier, Jean

    2017-03-01

    The objective of this study is to compare several change detection methods for a monostatic camera and identify the best method for different complex environments and backgrounds in indoor and outdoor scenes. To this end, we used the CDnet video dataset as a benchmark that consists of many challenging problems, ranging from basic simple scenes to complex scenes affected by bad weather and dynamic backgrounds. Twelve change detection methods, ranging from simple temporal differencing to more sophisticated methods, were tested and several performance metrics were used to precisely evaluate the results. Because most of the considered methods have not previously been evaluated on this recent large scale dataset, this work compares these methods to fill a lack in the literature, and thus this evaluation joins as complementary compared with the previous comparative evaluations. Our experimental results show that there is no perfect method for all challenging cases; each method performs well in certain cases and fails in others. However, this study enables the user to identify the most suitable method for his or her needs.

  13. Use of gait parameters of persons in video surveillance systems

    Science.gov (United States)

    Geradts, Zeno J.; Merlijn, Menno; de Groot, Gert; Bijhold, Jurrien

    2002-07-01

The gait parameters of eleven subjects were evaluated to provide data for recognizing subjects. Video images of these subjects were acquired in frontal, transversal, and sagittal (a plane parallel to the median of the body) view. The subjects walked by at their usual walking speed. The measured parameters were hip, knee and ankle joint angle and their time-averaged values, thigh, foot and trunk angle, step length and width, cycle time and walking speed. Correlation coefficients within and between subjects for the hip, knee and ankle rotation pattern in the sagittal aspect and for the trunk rotation pattern in the transversal aspect were almost identical, which implies that the intra- and inter-individual variances were equal. Therefore, these gait parameters could not distinguish between subjects. A simple ANOVA with a follow-up test was used to detect significant differences for the mean hip, knee and ankle joint angle, thigh angle, step length, step width, walking speed, cycle time and foot angle. The number of significant differences between subjects defined the usefulness of the gait parameter. The parameter with the most significant differences between subjects was the foot angle (64%-73% of the maximal attainable significant differences), followed by the time-averaged hip joint angle (58%) and the step length (45%). The other parameters scored less than 25%, which is poor for recognition purposes. The use of gait for identification purposes is not yet possible based on this research.

  14. Evaluation of the educational value of YouTube videos about physical examination of the cardiovascular and respiratory systems.

    Science.gov (United States)

    Azer, Samy A; Algrain, Hala A; AlKhelaif, Rana A; AlEshaiwi, Sarah M

    2013-11-13

    A number of studies have evaluated the educational contents of videos on YouTube. However, little analysis has been done on videos about physical examination. This study aimed to analyze YouTube videos about physical examination of the cardiovascular and respiratory systems. It was hypothesized that the educational standards of videos on YouTube would vary significantly. During the period from November 2, 2011 to December 2, 2011, YouTube was searched by three assessors for videos covering the clinical examination of the cardiovascular and respiratory systems. For each video, the following information was collected: title, authors, duration, number of viewers, and total number of days on YouTube. Using criteria comprising content, technical authority, and pedagogy parameters, videos were rated independently by three assessors and grouped into educationally useful and non-useful videos. A total of 1920 videos were screened. Only relevant videos covering the examination of adults in the English language were identified (n=56). Of these, 20 were found to be relevant to cardiovascular examinations and 36 to respiratory examinations. Further analysis revealed that 9 provided useful information on cardiovascular examinations and 7 on respiratory examinations: scoring mean 14.9 (SD 0.33) and mean 15.0 (SD 0.00), respectively. The other videos, 11 covering cardiovascular and 29 on respiratory examinations, were not useful educationally, scoring mean 11.1 (SD 1.08) and mean 11.2 (SD 1.29), respectively. The differences between these two categories were significant (P.86. A small number of videos about physical examination of the cardiovascular and respiratory systems were identified as educationally useful; these videos can be used by medical students for independent learning and by clinical teachers as learning resources. The scoring system utilized by this study is simple, easy to apply, and could be used by other researchers on similar topics.

  15. Design of video surveillance and tracking system based on attitude and heading reference system and PTZ camera

    Science.gov (United States)

    Yang, Jian; Xie, Xiaofang; Wang, Yan

    2017-04-01

    Based on the AHRS (Attitude and Heading Reference System) and PTZ (Pan/Tilt/Zoom) camera, we designed a video monitoring and tracking system. The overall structure of the system and the software design are given. The key technologies such as serial port communication and head attitude tracking are introduced, and the codes of the key part are given.

  16. Video-based real-time on-street parking occupancy detection system

    Science.gov (United States)

    Bulan, Orhan; Loce, Robert P.; Wu, Wencheng; Wang, YaoRong; Bernal, Edgar A.; Fan, Zhigang

    2013-10-01

    Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smart phone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method performs in real-time at 5 frames/s and achieves better than 90% detection accuracy across several days of videos captured in a busy street block under various weather conditions such as sunny, cloudy, and rainy, among others.

  17. Chemical Detection Architecture for a Subway System [video

    OpenAIRE

    Ignacio, Joselito; Center for Homeland Defense and Security Naval Postgraduate School

    2014-01-01

    This proposed system process aims to improve subway safety through better enabling the rapid detection and response to a chemical release in a subway system. The process is designed to be location-independent and generalized to most subway systems despite each system's unique characteristics.

  18. Mining data on usage of electronic nicotine delivery systems (ENDS) from YouTube videos.

    Science.gov (United States)

    Hua, My; Yip, Henry; Talbot, Prue

    2013-03-01

    The objective was to analyse and compare puff and exhalation duration for individuals using electronic nicotine delivery systems (ENDS) and conventional cigarettes in YouTube videos. Video data from YouTube videos were analysed to quantify puff duration and exhalation duration during use of conventional tobacco-containing cigarettes and ENDS. For ENDS, comparisons were also made between 'advertisers' and 'non-advertisers', genders, brands of ENDS, and models of ENDS within one brand. Puff duration (mean = 2.4 s) for conventional smokers in YouTube videos (N = 9) agreed well with prior publications. Puff duration was significantly longer for ENDS users (mean = 4.3 s) (N = 64) than for conventional cigarette users, and puff duration varied significantly among ENDS brands. For ENDS users, puff duration and exhalation duration were not significantly affected by 'advertiser' status, gender or variation in models within a brand. Men outnumbered women by about 5:1, and most users were between 19 and 35 years of age. YouTube videos provide a valuable resource for studying ENDS usage. Longer puff duration may help ENDS users compensate for the apparently poor delivery of nicotine from ENDS. As with conventional cigarette smoking, ENDS users showed a large variation in puff duration (range = 1.9-8.3 s). ENDS puff duration should be considered when designing laboratory and clinical trials and in developing a standard protocol for evaluating ENDS performance.

  19. Developing Agent-Oriented Video Surveillance System through Agent-Oriented Methodology (AOM)

    Directory of Open Access Journals (Sweden)

    Cheah Wai Shiang

    2016-12-01

    Agent-oriented methodology (AOM) is a comprehensive and unified agent methodology for agent-oriented software development. Although AOM is claimed to be able to cope with complex system development, the extent to which this is true has not yet been determined. Therefore, it is vital to conduct an investigation to validate this methodology. This paper presents the adoption of AOM in developing an agent-oriented video surveillance system (VSS). An intruder-handling scenario is designed and implemented through AOM. AOM provides an alternative method to engineer a distributed security system in a systematic manner. It presents the security system from a holistic view, provides a better conceptualization of an agent-oriented security system, and supports rapid prototyping as well as simulation of the video surveillance system.

  20. Video camera system for locating bullet holes in targets at a ballistics tunnel

    Science.gov (United States)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-01-01

    A system consisting of a single charge coupled device (CCD) video camera, computer controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind resistant, ultra match ammunition. The system was designed to take data rapidly (10 sec between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.

  1. 76 FR 75911 - Certain Video Game Systems and Controllers; Investigations: Terminations, Modifications and Rulings

    Science.gov (United States)

    2011-12-05

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Controllers; Investigations: Terminations, Modifications and Rulings AGENCY: U.S. International Trade Commission. ACTION: Notice. Section 337 of the Tariff Act of 1930...

  2. 78 FR 57414 - Certain Video Game Systems and Wireless Controllers and Components Thereof, Commission...

    Science.gov (United States)

    2013-09-18

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Wireless Controllers and Components Thereof, Commission Determination Finding No Violation of the Tariff Act of 1930 AGENCY: U.S. International Trade Commission. ACTION...

  3. An Investigation of the Feasibility of a Video Game System for Developing Scanning and Selection Skills.

    Science.gov (United States)

    Horn, Eva; And Others

    1991-01-01

    Three nonvocal students (ages 5-8) with severe physical handicaps were trained in scan and selection responses (similar to responses needed for operating augmentative communication systems) using a microcomputer-operated video-game format. Results indicated that all three children showed substantial increases in the number of correct responses and…

  4. Extended Attention Span Training System: Video Game Neurotherapy for Attention Deficit Disorder.

    Science.gov (United States)

    Pope, Alan T.; Bogart, Edward H.

    1996-01-01

    Describes the Extended Attention Span Training (EAST) system for modifying attention deficits, which takes the concept of biofeedback one step further by making a video game more difficult as the player's brain waves indicate that attention is waning. Notes contributions of this technology to neuropsychology and neurology, where the emphasis is on…

  5. A highly sensitive underwater video system for use in turbid aquaculture ponds

    Science.gov (United States)

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.

    2016-08-01

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds' benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system's high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provisioning feed with minimal waste, determining whether the accumulation of organic-matter residues dictates an exchange of pond water, and making management decisions concerning shrimp health.

  6. A System for Reflective Learning Using Handwriting Tablet Devices for Real-Time Event Bookmarking into Simultaneously Recorded Videos

    Science.gov (United States)

    Nakajima, Taira

    2012-01-01

    The author demonstrates a new system useful for reflective learning. Our new system offers an environment that one can use handwriting tablet devices to bookmark symbolic and descriptive feedbacks into simultaneously recorded videos in the environment. If one uses video recording and feedback check sheets in reflective learning sessions, one can…

  7. 77 FR 58577 - Certain Video Game Systems and Wireless Controllers and Components Thereof; Notice of Request for...

    Science.gov (United States)

    2012-09-21

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Wireless Controllers and Components Thereof; Notice of Request for... limited exclusion order and a cease and desist order against certain video game systems and wireless...

  8. A video-based eye pupil detection system for diagnosing bipolar disorder

    OpenAIRE

    AKINCI, Gökay; Polat, Ediz; Koçak, Orhan Murat

    2012-01-01

    Eye pupil detection systems have become increasingly popular in image processing and computer vision applications in medical systems. In this study, a video-based eye pupil detection system is developed for diagnosing bipolar disorder. Bipolar disorder is a condition in which people experience changes in cognitive processes and abilities, including reduced attentional and executive capabilities and impaired memory. In order to detect these abnormal behaviors, a number of neuropsychologi...

  9. The Use of Video Modeling via a Video iPod and a System of Least Prompts to Improve Transitional Behaviors for Students with Autism Spectrum Disorders in the General Education Classroom

    Science.gov (United States)

    Cihak, David; Fahrenkrog, Cynthia; Ayres, Kevin M.; Smith, Catherine

    2010-01-01

    This study evaluated the efficacy of video modeling delivered via a handheld device (video iPod) and the use of the system of least prompts to assist elementary-age students with transitioning between locations and activities within the school. Four students with autism learned to manipulate a handheld device to watch video models. An ABAB…

  10. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    Science.gov (United States)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.
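
    A hedged sketch of the final stage of such a pipeline (classifying per-frame crowd descriptors as normal or abnormal) is shown below. The two toy features (foreground ratio and frame-to-frame foreground change), the synthetic labels, and the RBF support vector machine are stand-ins; the paper's ensemble features come from its own foreground-detection technologies.

```python
# Minimal sketch: turn foreground masks into simple per-frame descriptors and
# classify them as normal (0) or abnormal (1) crowd behavior with an SVM.
import numpy as np
from sklearn.svm import SVC


def foreground_features(foreground_masks):
    """foreground_masks: list of binary (H, W) arrays, one per frame."""
    features, previous = [], None
    for mask in foreground_masks:
        mask = mask.astype(np.float32)
        ratio = mask.mean()                                  # foreground coverage
        change = 0.0 if previous is None else np.abs(mask - previous).mean()
        features.append([ratio, change])
        previous = mask
    return np.array(features)


# Toy training data: small, slowly changing foreground = normal; large or
# rapidly changing foreground = abnormal. Real labels would come from video.
rng = np.random.default_rng(0)
normal = rng.uniform(0.00, 0.15, size=(50, 2))
abnormal = rng.uniform(0.25, 0.60, size=(50, 2))
X = np.vstack([normal, abnormal])
y = np.array([0] * 50 + [1] * 50)

classifier = SVC(kernel='rbf', gamma='scale').fit(X, y)
print(classifier.predict([[0.05, 0.02], [0.45, 0.30]]))      # expected: [0 1]
```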

  11. Portable digital video surveillance system for monitoring flower-visiting bumblebees

    Directory of Open Access Journals (Sweden)

    Thorsdatter Orvedal Aase, Anne Lene

    2011-08-01

    Full Text Available In this study we used a portable event-triggered video surveillance system for monitoring flower-visiting bumblebees. The system consist of mini digital recorder (mini-DVR with a video motion detection (VMD sensor which detects changes in the image captured by the camera, the intruder triggers the recording immediately. The sensitivity and the detection area are adjustable, which may prevent unwanted recordings. To our best knowledge this is the first study using VMD sensor to monitor flower-visiting insects. Observation of flower-visiting insects has traditionally been monitored by direct observations, which is time demanding, or by continuous video monitoring, which demands a great effort in reviewing the material. A total of 98.5 monitoring hours were conducted. For the mini-DVR with VMD, a total of 35 min were spent reviewing the recordings to locate 75 pollinators, which means ca. 0.35 sec reviewing per monitoring hr. Most pollinators in the order Hymenoptera were identified to species or group level, some were only classified to family (Apidae or genus (Bombus. The use of the video monitoring system described in the present paper could result in a more efficient data sampling and reveal new knowledge to pollination ecology (e.g. species identification and pollinating behaviour.

  12. A combined stereo-photogrammetry and underwater-video system to study group composition of dolphins

    Science.gov (United States)

    Bräger, S.; Chong, A.; Dawson, S.; Slooten, E.; Würsig, B.

    1999-11-01

    One reason for the paucity of knowledge of dolphin social structure is the difficulty of measuring individual dolphins. In Hector's dolphins, Cephalorhynchus hectori, total body length is a function of age, and sex can be determined by individual colouration pattern. We developed a novel system combining stereo-photogrammetry and underwater-video to record dolphin group composition. The system consists of two downward-looking single-lens-reflex (SLR) cameras and a Hi8 video camera in an underwater housing mounted on a small boat. Bow-riding Hector's dolphins were photographed and video-taped at close range in coastal waters around the South Island of New Zealand. Three-dimensional, stereoscopic measurements of the distance between the blowhole and the anterior margin of the dorsal fin (BH-DF) were calibrated by a suspended frame with reference points. Growth functions derived from measurements of 53 dead Hector's dolphins (29 female : 24 male) provided the necessary reference data. For the analysis, the measurements were synchronised with corresponding underwater-video of the genital area. A total of 27 successful measurements (8 with corresponding sex) were obtained, showing how this new system promises to be potentially useful for cetacean studies.

  13. The design of multiplayer online video game systems

    Science.gov (United States)

    Hsu, Chia-chun A.; Ling, Jim; Li, Qing; Kuo, C.-C. J.

    2003-11-01

    The distributed Multiplayer Online Game (MOG) system is complex since it involves technologies in computer graphics, multimedia, artificial intelligence, computer networking, embedded systems, etc. Due to the large scope of this problem, the design of MOG systems has not yet been widely addressed in the literature. In this paper, we review and analyze the current MOG system architecture, followed by an evaluation. Furthermore, we propose a clustered-server architecture to provide a scalable solution together with a region-oriented allocation strategy. Two key issues, i.e. interest management and synchronization, are discussed in depth. Some preliminary ideas to deal with the identified problems are described.

  14. Digital holographic video service system for natural color scene

    Science.gov (United States)

    Seo, Young-Ho; Lee, Yoon-Hyuk; Koo, Ja-Myung; Kim, Woo-Youl; Yoo, Ji-Sang; Kim, Dong-Wook

    2013-11-01

    We propose a new system that can generate digital holograms using natural color information. The system consists of a camera system for capturing images (object points) and software (S/W) for various image processing. The camera system uses a vertical rig equipped with two depth and RGB cameras and a cold mirror, whose reflectance differs according to wavelength, so that images can be obtained from the same viewpoint. The S/W is composed of engines for processing the captured images and for computing computer-generated holograms (CGH) on general-purpose graphics processing units. Each algorithm was implemented in C/C++ and CUDA, and all engines, in the form of libraries, were integrated in a LabVIEW environment. The proposed system can generate about 10 digital holographic frames per second using about 6 K object points.
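
    The CGH step itself is not spelled out in the record; the following numpy sketch shows a textbook Fresnel point-source hologram computed from a handful of object points. The wavelength, pixel pitch, on-axis plane reference wave, and point amplitudes are generic assumptions, not the authors' GPU kernels.

```python
# Minimal sketch: Fresnel point-source hologram computed from a few object
# points. Wavelength, pixel pitch, and the on-axis plane reference wave are
# generic textbook assumptions.
import numpy as np


def point_source_hologram(points, shape=(512, 512), pitch=8e-6, wavelength=532e-9):
    """points: iterable of (x, y, z, amplitude) in metres from the hologram centre."""
    height, width = shape
    k = 2.0 * np.pi / wavelength
    ys, xs = np.meshgrid((np.arange(height) - height / 2) * pitch,
                         (np.arange(width) - width / 2) * pitch,
                         indexing='ij')
    field = np.zeros(shape, dtype=np.complex128)
    for px, py, pz, amplitude in points:
        # Fresnel approximation of the spherical wave emitted by one point.
        phase = k * ((xs - px) ** 2 + (ys - py) ** 2) / (2.0 * pz)
        field += amplitude * np.exp(1j * phase)
    # Interfere with a unit on-axis plane reference wave -> fringe intensity.
    hologram = np.abs(1.0 + field) ** 2
    return hologram / hologram.max()


if __name__ == '__main__':
    cloud = [(0.0, 0.0, 0.20, 1.0), (0.5e-3, -0.3e-3, 0.25, 0.8)]
    print(point_source_hologram(cloud).shape)  # (512, 512)
```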

  15. [Video recording system of endoscopic procedures for digital forensics].

    Science.gov (United States)

    Endo, Chiaki; Sakurada, A; Kondo, T

    2009-07-01

    Recently, endoscopic procedures including surgery, intervention, and examination have been widely performed. Medical practitioners are required to record these procedures precisely in order to review them retrospectively and to obtain a legally reliable record. The Medical Forensic System made by KS Olympus Japan simultaneously records two kinds of video together with patient data such as heart rate, blood pressure, and SpO2. We installed this system in the bronchoscopy room and have experienced its benefits. Under this system, we can obtain the bronchoscopic image, the bronchoscopy room view, and the patient's data simultaneously. We can check the quality of the bronchoscopic procedures retrospectively, which is useful for bronchoscopy staff training. The Medical Forensic System should be installed for any kind of endoscopic procedure.

  16. A video based feedback system for control of an active commutator during behavioral physiology.

    Science.gov (United States)

    Roh, Mootaek; McHugh, Thomas J; Lee, Kyungmin

    2015-10-12

    To investigate the relationship between neural function and behavior it is necessary to record neuronal activity in the brains of freely behaving animals, a technique that typically involves tethering to a data acquisition system. Optimally this approach allows animals to behave without any interference with movement or task performance. Currently many laboratories in the cognitive and behavioral neuroscience fields employ commercial motorized commutator systems using torque sensors to detect tether movement induced by the trajectory behaviors of animals. In this study we describe a novel motorized commutator system which is automatically controlled by video tracking. To obtain accurate head direction data, two light-emitting diodes were used, and video image noise was minimized by physical manipulation of the light sources. The system calculates the rotation of the animal across a single trial by processing the head direction data, and the software, which calibrates the motor rotation angle, subsequently generates voltage pulses to actively untwist the tether. This system successfully provides a tether twist-free environment for animals performing behavioral tasks during simultaneous neural activity recording. To the best of our knowledge, it is the first to utilize head direction generated by video tracking to detect tether twisting and compensate with a motorized commutator system. Our automatic commutator control system promises an affordable and accessible method to improve behavioral neurophysiology experiments, particularly in mice.
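
    A hedged sketch of the control idea described above is given below: the head angle is taken from two tracked LED positions, unwrapped into a cumulative rotation, and used to command an untwisting motor. The LED-tracking input, the one-turn threshold, and the rotate_motor placeholder are assumptions for illustration, not the authors' software.

```python
# Minimal sketch: accumulate head rotation from two tracked LEDs and command
# a commutator motor to untwist the tether. LED coordinates come from an
# external tracker; the motor interface here is a placeholder function.
import math


def head_angle(front_led, back_led):
    """Angle (radians) of the head vector from back LED to front LED."""
    dx = front_led[0] - back_led[0]
    dy = front_led[1] - back_led[1]
    return math.atan2(dy, dx)


def unwrap_step(previous_angle, current_angle):
    """Smallest signed change between two angles, in (-pi, pi]."""
    delta = current_angle - previous_angle
    while delta <= -math.pi:
        delta += 2.0 * math.pi
    while delta > math.pi:
        delta -= 2.0 * math.pi
    return delta


def run_commutator(led_stream, rotate_motor, threshold=2.0 * math.pi):
    """led_stream yields (front_led, back_led) pairs; rotate_motor(turns) untwists."""
    cumulative = 0.0
    previous = None
    for front, back in led_stream:
        angle = head_angle(front, back)
        if previous is not None:
            cumulative += unwrap_step(previous, angle)
        previous = angle
        # Untwist whole turns whenever the tether has accumulated at least one.
        if abs(cumulative) >= threshold:
            turns = int(cumulative / (2.0 * math.pi))
            rotate_motor(-turns)
            cumulative -= turns * 2.0 * math.pi


if __name__ == '__main__':
    # Synthetic LED stream: the animal spins 1.25 turns in 10-degree steps.
    steps = [math.radians(10 * i) for i in range(46)]
    stream = [((math.cos(a), math.sin(a)), (0.0, 0.0)) for a in steps]
    run_commutator(stream, rotate_motor=lambda turns: print('untwist', turns))
```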

  17. Cost-Effective Video Filtering Solution for Real-Time Vision Systems

    Directory of Open Access Journals (Sweden)

    Karl Martin

    2005-08-01

    This paper presents an efficient video filtering scheme and its implementation in a field-programmable logic device (FPLD). Since the proposed nonlinear, spatiotemporal filtering scheme is based on order statistics, its efficient implementation benefits from a bit-serial realization. The utilization of both the spatial and temporal correlation characteristics of the processed video significantly increases the computational demands on this solution, and thus, implementation becomes a significant challenge. Simulation studies reported in this paper indicate that the proposed pipelined bit-serial FPLD filtering solution can achieve speeds of up to 97.6 Mpixels/s and consumes 1700 to 2700 logic cells for the speed-optimized and area-optimized versions, respectively. Thus, the filter area represents only 6.6 to 10.5% of the Altera STRATIX EP1S25 device available on the Altera Stratix DSP evaluation board, which has been used to implement a prototype of the entire real-time vision system. As such, the proposed adaptive video filtering scheme is both practical and attractive for real-time machine vision and surveillance systems as well as conventional video and multimedia applications.
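
    For readers who want the gist of the filter class in software rather than hardware, the sketch below applies an order-statistic (median) filter over a 3x3 spatial window spanning three consecutive frames. The window size and the choice of the median as the order statistic are assumptions; the paper's adaptive bit-serial FPLD design is not reproduced here.

```python
# Minimal sketch: spatiotemporal order-statistic filtering in software.
# Each output pixel is the median of a 3x3 window taken over the previous,
# current, and next frames (27 samples); the window size is an assumption.
import numpy as np


def spatiotemporal_median(frames):
    """frames: (T, H, W) uint8 array; returns filtered (T-2, H-2, W-2) array."""
    frames = np.asarray(frames)
    t, h, w = frames.shape
    output = np.empty((t - 2, h - 2, w - 2), dtype=frames.dtype)
    for ti in range(1, t - 1):
        for yi in range(1, h - 1):
            for xi in range(1, w - 1):
                window = frames[ti - 1:ti + 2, yi - 1:yi + 2, xi - 1:xi + 2]
                output[ti - 1, yi - 1, xi - 1] = np.median(window)
    return output


if __name__ == '__main__':
    rng = np.random.default_rng(0)
    noisy = rng.integers(0, 256, size=(5, 32, 32), dtype=np.uint8)
    print(spatiotemporal_median(noisy).shape)  # (3, 30, 30)
```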

  18. Face recognition-based authentication and monitoring in video telecommunication systems

    OpenAIRE

    2012-01-01

    M.Sc. (Computer Science) A video conference is an interactive meeting between two or more locations, facilitated by simultaneous two-way video and audio transmissions. People in a video conference, also known as participants, join these video conferences for business and recreational purposes. In a typical video conference, we should properly identify and authenticate every participant in the video conference, if information discussed during the video conference is confidential. This preve...

  19. High resolution, high frame rate video technology development plan and the near-term system conceptual design

    Science.gov (United States)

    Ziemke, Robert A.

    1990-01-01

    The objective of the High Resolution, High Frame Rate Video Technology (HHVT) development effort is to provide technology advancements to remove constraints on the amount of high speed, detailed optical data recorded and transmitted for microgravity science and application experiments. These advancements will enable the development of video systems capable of high resolution, high frame rate video data recording, processing, and transmission. Techniques such as multichannel image scan, video parameter tradeoff, and the use of dual recording media were identified as methods of making the most efficient use of the near-term technology.

  20. Development of a Web-Based Video Direct e-Commerce System of Agricultural Products

    OpenAIRE

    Lee, Kang Oh; Nakaji, Kei; 中司, 敬

    2011-01-01

    A web-based video direct e-commerce system was developed to solve problems in internet shopping and to increase consumers' trust in the safety and quality of agricultural products. We found that the newly developed e-commerce system could overcome the demerits of internet shopping and give consumers the same experience as purchasing products offline. Producers could have opportunities to explain products and to talk to customers and get increased income because of maintaining a certain numbe...

  1. Architecture and protocol of a semantic system designed for video tagging with sensor data in mobile devices.

    Science.gov (United States)

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper.
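
    A minimal sketch of the tagging idea (pairing each frame with the sensor reading closest in time and serializing the tags) is shown below. The field names, the 25 fps / 20 Hz rates, and the JSON side-car format are assumptions for illustration, not the paper's protocol or semantic layer.

```python
# Minimal sketch: tag video frames with the nearest-in-time sensor readings.
# Field names and the JSON side-car format are assumptions for illustration.
import json
from bisect import bisect_left


def nearest_reading(timestamps, readings, frame_time):
    """timestamps: sorted sensor times; readings: values aligned to them."""
    index = bisect_left(timestamps, frame_time)
    candidates = [i for i in (index - 1, index) if 0 <= i < len(timestamps)]
    best = min(candidates, key=lambda i: abs(timestamps[i] - frame_time))
    return readings[best]


def tag_frames(frame_times, sensor_timestamps, sensor_readings):
    tags = []
    for number, frame_time in enumerate(frame_times):
        reading = nearest_reading(sensor_timestamps, sensor_readings, frame_time)
        tags.append({'frame': number, 'time': frame_time, 'sensors': reading})
    return json.dumps(tags)


if __name__ == '__main__':
    frame_times = [0.00, 0.04, 0.08]                      # 25 fps video
    sensor_timestamps = [0.00, 0.05, 0.10]                # 20 Hz sensor samples
    sensor_readings = [{'temp_c': 27.1, 'lat': 28.1, 'lon': -15.4}] * 3
    print(tag_frames(frame_times, sensor_timestamps, sensor_readings))
```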

  2. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    Science.gov (United States)

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper. PMID:22438753

  3. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    Directory of Open Access Journals (Sweden)

    Alvaro Suarez

    2012-02-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper.

  4. Portable wireless power transmission system for video capsule endoscopy.

    Science.gov (United States)

    Zhiwei, Jia; Guozheng, Yan; Bingquan, Zhu

    2014-10-01

    Wireless power transmission is considered a practical way of overcoming the power shortage of wireless capsule endoscopy (VCE). However, most patients cannot tolerate the long hours of lying inside a fixed transmitting coil during diagnosis. To develop a portable wireless power transmission system for VCE, a compact transmitting coil and a portable inverter circuit driven by rechargeable batteries are proposed. The coupled coils, optimized for stability and safety, comprise a 28-turn transmitting coil and a six-strand receiving coil. The driver circuit is designed with portability in mind. Experiments show that the integrated system could continuously supply power to a dual-head VCE for more than 8 h at a frame rate of 30 frames per second with a resolution of 320 × 240. The portable VCE exhibits potential for clinical applications, but requires further improvement and testing.

  5. System and method for image registration of multiple video streams

    Energy Technology Data Exchange (ETDEWEB)

    Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton

    2018-02-06

    Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.

  6. The Educational Efficacy of Distinct Information Delivery Systems in Modified Video Games

    Science.gov (United States)

    Moshirnia, Andrew; Israel, Maya

    2010-01-01

    Despite the increasing popularity of many commercial video games, this popularity is not shared by educational video games. Modified video games, however, can bridge the gap in quality between commercial and education video games by embedding educational content into popular commercial video games. This study examined how different information…

  7. Research on the video detection device in the invisible part of stay cable anchorage system

    Science.gov (United States)

    Cai, Lin; Deng, Nianchun; Xiao, Zexin

    2012-11-01

    The cables in the anchorage zone of a cable-stayed bridge are hidden within the embedded pipe, which makes it difficult to detect cable damage by visual inspection. We have built a detection device based on high-resolution video capture that enables remote observation of the invisible segment of a stay cable and detection of damage to the outer surface of the cable within a small volume. The system mainly consists of an optical mount and precision mechanical support device, an optical imaging system, a lighting source, motor drive control, and an IP camera video capture system. The principal innovations of the device are: (1) a set of telescope objectives with three different focal lengths is designed and used at different monitoring distances by means of a converter; (2) the lens system is kept well separated from the lighting system, so that the imaging optical path effectively avoids the harsh environment inside the invisible part of the cables. Practice shows that the device not only collects clear surveillance video images of the outer surface of the cable effectively, but also has broad application prospects in the security warning of prestressed structures.

  8. Galatea - An Interactive Computer Graphics System For Movie And Video Analysis

    Science.gov (United States)

    Potel, Michael J.; MacKay, Steven A.; Sayre, Richard E.

    1983-03-01

    Extracting quantitative information from movie film and video recordings has always been a difficult process. The Galatea motion analysis system represents an application of some powerful interactive computer graphics capabilities to this problem. A minicomputer is interfaced to a stop-motion projector, a data tablet, and real-time display equipment. An analyst views a film and uses the data tablet to track a moving position of interest. Simultaneously, a moving point is displayed in an animated computer graphics image that is synchronized with the film as it runs. Using a projection CRT and a series of mirrors, this image is superimposed on the film image on a large front screen. Thus, the graphics point lies on top of the point of interest in the film and moves with it at cine rates. All previously entered points can be displayed simultaneously in this way, which is extremely useful in checking the accuracy of the entries and in avoiding omission and duplication of points. Furthermore, the moving points can be connected into moving stick figures, so that such representations can be transcribed directly from film. There are many other tools in the system for entering outlines, measuring time intervals, and the like. The system is equivalent to "dynamic tracing paper" because it is used as though it were tracing paper that can keep up with running movie film. We have applied this system to a variety of problems in cell biology, cardiology, biomechanics, and anatomy. We have also extended the system using photogrammetric techniques to support entry of three-dimensional moving points from two (or more) films taken simultaneously from different perspective views. We are also presently constructing a second, lower-cost, microcomputer-based system for motion analysis in video, using digital graphics and video mixing to achieve the graphics overlay for any composite video source image.

  9. A System to Generate SignWriting for Video Tracks Enhancing Accessibility of Deaf People

    Directory of Open Access Journals (Sweden)

    Elena Verdú

    2017-12-01

    Video content has increased greatly on the Internet in recent years. In spite of the efforts of different organizations and governments to increase the accessibility of websites, most multimedia content on the Internet is not accessible. This paper describes a system that contributes to making multimedia content more accessible on the Web by automatically translating subtitles in oral language to SignWriting, a way of writing Sign Language. This system extends the functionality of a general web platform that can provide accessible web content for different needs. This platform has a core component that automatically converts any web page into a web page compliant with level AA of the WAI guidelines. Around this core component, different adapters complete the conversion according to the needs of specific users. One adapter is the Deaf People Accessibility Adapter, which provides accessible web content for the Deaf, based on SignWriting. The functionality of this adapter has been extended with the video subtitle translator system. A first prototype of this system has been tested through different methods, including usability and accessibility tests, and results show that this tool can enhance the accessibility of video content available on the Web for Deaf people.

  10. The high resolution video capture system on the alcator C-Mod tokamak

    Science.gov (United States)

    Allen, A. J.; Terry, J. L.; Garnier, D.; Stillerman, J. A.; Wurden, G. A.

    1997-01-01

    A new system for routine digitization of video images is presently operating on the Alcator C-Mod tokamak. The PC-based system features high resolution video capture, storage, and retrieval. The captured images are stored temporarily on the PC, but are eventually written to CD. Video is captured from one of five filtered RS-170 CCD cameras at 30 frames per second (fps) with 640×480 pixel resolution. In addition, the system can digitize the output from a filtered Kodak Ektapro EM Digital Camera which captures images at 1000 fps with 239×192 resolution. Present views of this set of cameras include a wide angle and a tangential view of the plasma, two high resolution views of gas puff capillaries embedded in the plasma facing components, and a view of ablating, high speed Li pellets. The system is being used to study (1) the structure and location of visible emissions (including MARFEs) from the main plasma and divertor, (2) asymmetries in gas puff plumes due to flows in the scrape-off layer (SOL), and (3) the tilt and cigar-shaped spatial structure of the Li pellet ablation cloud.

  11. A video-based system for hand-driven stop-motion animation.

    Science.gov (United States)

    Han, Xiaoguang; Fu, Hongbo; Zheng, Hanlin; Liu, Ligang; Wang, Jue

    2013-01-01

    Stop-motion is a well-established animation technique but is often laborious and requires craft skills. A new video-based system can animate the vast majority of everyday objects in stop-motion style, more flexibly and intuitively. Animators can perform and capture motions continuously instead of breaking them into increments and shooting one still picture per increment. More important, the system permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The system's key component is two-phase keyframe-based capturing and processing, assisted by computer vision techniques. With this system, even amateurs can generate high-quality stop-motion animations.

  12. A Video Camera Road Sign System of the Early Warning from Collision with the Wild Animals

    Directory of Open Access Journals (Sweden)

    Matuska Slavomir

    2016-05-01

    This paper proposes a camera-based early-warning road sign system, which can help avoid vehicle collisions with wild animals. The system consists of camera modules placed along a particularly chosen route and intelligent road signs. Each camera module consists of a camera device and a computing unit. The computing unit captures the video stream from the camera and runs object detection algorithms. Afterwards, machine learning algorithms are used to classify the moving objects. If a moving object is classified as an animal and this animal could endanger the safety of a vehicle, a warning is displayed on the intelligent road signs.

  13. A digital underwater video camera system for aquatic research in regulated rivers

    Science.gov (United States)

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.

  14. Quantification of Urine Elimination Behaviors in Cats with a Video Recording System

    OpenAIRE

    R. Dulaney, D.; Hopfensperger, M.; Malinowski, R.; Hauptman, J.; Kruger, J M

    2017-01-01

    Background: Urinary disorders in cats often require subjective caregiver quantification of clinical signs to establish a diagnosis and monitor therapeutic outcomes. Objective: To investigate use of a video recording system (VRS) to better assess and quantify urination behaviors in cats. Animals: Eleven healthy cats and 8 cats with disorders potentially associated with abnormal urination patterns. Methods: Prospective study design. Litter box urination behaviors were quantified with a VRS for 14 d...

  15. Overview of the software for the Telemation/Sandia unattended video surveillance system

    Energy Technology Data Exchange (ETDEWEB)

    Merillat, P.D.

    1979-10-01

    A microprocessor has been used to provide the major control functions in the Telemation/Sandia unattended video surveillance system. The software in the microprocessor provides control of the various hardware components and provides the capability of interactive communications with the operator. This document, in conjunction with the commented source listing, defines the philosophy and function of the software. It is assumed that the reader is familiar with the RCA 1802 COSMAC microprocessor and has a reasonable computer science background.

  16. Cloud-Based Video Monitoring System Applied in Control of Diseases and Pests in Orchards

    OpenAIRE

    Xia, Xue; Qiu, Yun; Hu, Lin; Fan, Jingchao; Guo, Xiuming; Zhou, Guomin

    2015-01-01

    With the proposition of the 'Internet plus' concept and the rapid progress of new media technology, traditional businesses have increasingly shared in the fruits of informatization and networking. Proceeding from real plant protection demands, the construction of a cloud-based video monitoring system that surveils diseases and pests in apple orchards is discussed, aiming to solve the lack of timeliness and comprehensiveness in the contr...

  17. Evaluation of a video-based head motion tracking system for dedicated brain PET

    Science.gov (United States)

    Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.

    2015-03-01

    Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs are used for capturing video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six degree of freedom head pose as a function of time. The presented work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99±0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to perform tracking with close to millimeter accuracy and can help to preserve the resolution of brain PET images in the presence of movement.
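
    A hedged sketch of the pose-recovery step is given below: given 2D facial points tracked in one camera and their 3D positions on a rigid head model, OpenCV's solvePnP returns a six-degree-of-freedom pose. The model landmark coordinates and camera intrinsics are placeholders, and the actual system fuses two calibrated stereo pairs rather than a single view.

```python
# Minimal sketch: recover a 6-DOF head pose from tracked facial landmarks
# using solvePnP. The 3D model points and camera intrinsics are placeholders.
import numpy as np
import cv2

# Approximate 3D landmark positions on a rigid head model (millimetres).
MODEL_POINTS = np.array([
    [0.0, 0.0, 0.0],        # nose tip
    [0.0, -60.0, -10.0],    # chin
    [-35.0, 35.0, -25.0],   # left eye outer corner
    [35.0, 35.0, -25.0],    # right eye outer corner
    [-25.0, -30.0, -20.0],  # left mouth corner
    [25.0, -30.0, -20.0],   # right mouth corner
], dtype=np.float64)


def head_pose(image_points, focal_length=800.0, center=(320.0, 240.0)):
    """image_points: (6, 2) pixel coordinates of the landmarks above."""
    camera_matrix = np.array([[focal_length, 0.0, center[0]],
                              [0.0, focal_length, center[1]],
                              [0.0, 0.0, 1.0]])
    distortion = np.zeros((4, 1))  # assume an undistorted image
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS,
                                  np.asarray(image_points, dtype=np.float64),
                                  camera_matrix, distortion)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
    return rotation, tvec              # full 6-DOF pose of the head


if __name__ == '__main__':
    points_2d = [[320, 240], [320, 340], [260, 180], [380, 180], [280, 290], [360, 290]]
    pose = head_pose(points_2d)
    print(pose[1].ravel() if pose else 'pose not found')
```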

  18. Accurate distortion estimation and optimal bandwidth allocation for scalable H.264 video transmission over MIMO systems.

    Science.gov (United States)

    Jubran, Mohammad K; Bansal, Manu; Kondi, Lisimachos P; Grover, Rohan

    2009-01-01

    In this paper, we propose an optimal strategy for the transmission of scalable video over packet-based multiple-input multiple-output (MIMO) systems. The scalable extension of H.264/AVC that provides a combined temporal, quality and spatial scalability is used. For given channel conditions, we develop a method for the estimation of the distortion of the received video and propose different error concealment schemes. We show the accuracy of our distortion estimation algorithm in comparison with simulated wireless video transmission with packet errors. In the proposed MIMO system, we employ orthogonal space-time block codes (O-STBC) that guarantee independent transmission of different symbols within the block code. In the proposed constrained bandwidth allocation framework, we use the estimated end-to-end decoder distortion to optimally select the application layer parameters, i.e., quantization parameter (QP) and group of pictures (GOP) size, and physical layer parameters, i.e., rate-compatible turbo (RCPT) code rate and symbol constellation. Results show the substantial performance gain by using different symbol constellations across the scalable layers as compared to a fixed constellation.
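
    The selection idea can be illustrated with a toy exhaustive search: score every candidate (QP, GOP, code rate, constellation) combination with an estimated distortion and keep the feasible combination with the lowest estimate. The candidate values and the distortion and rate models below are illustrative stand-ins, not the paper's estimator or its O-STBC channel model.

```python
# Minimal sketch: pick (QP, GOP, code rate, constellation) that minimizes an
# estimated end-to-end distortion subject to a bandwidth budget. The candidate
# values and the toy distortion/rate models below are illustrative stand-ins.
from itertools import product

QPS = [24, 28, 32, 36]
GOPS = [8, 16, 32]
CODE_RATES = [1/3, 1/2, 2/3]
CONSTELLATIONS = {'QPSK': 2, '16QAM': 4}      # bits per symbol


def source_rate_kbps(qp, gop):
    """Toy source-rate model: finer quantization (lower QP) costs more bits;
    longer GOPs cost slightly less (fewer intra-coded frames)."""
    return 6000.0 / qp * (1.0 - 0.002 * gop)


def estimated_distortion(qp, gop, code_rate, bits_per_symbol, channel_snr_db=12.0):
    """Toy estimate: coarser QP, longer GOPs, higher code rates, and denser
    constellations all increase the expected end-to-end distortion."""
    loss_risk = (code_rate * bits_per_symbol) / (channel_snr_db / 10.0)
    return qp * 1.5 + gop * 0.8 * loss_risk + 40.0 * loss_risk


def select_parameters(bandwidth_kbps, symbol_rate_ksym=2000.0):
    best = None
    for qp, gop, rate, (name, bps) in product(QPS, GOPS, CODE_RATES, CONSTELLATIONS.items()):
        needed = source_rate_kbps(qp, gop) / rate             # channel bits after FEC
        capacity = symbol_rate_ksym * bps
        if needed > min(bandwidth_kbps, capacity):
            continue                                          # infeasible combination
        distortion = estimated_distortion(qp, gop, rate, bps)
        if best is None or distortion < best[0]:
            best = (distortion, qp, gop, rate, name)
    return best


if __name__ == '__main__':
    print(select_parameters(bandwidth_kbps=1500.0))
```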

  19. Objective Video Quality Assessment of Direct Recording and Datavideo HDR-40 Recording System

    Directory of Open Access Journals (Sweden)

    Nofia Andreana

    2016-10-01

    A Digital Video Recorder (DVR) is a digital video recorder with hard drive storage media. When the capacity of the hard disk runs out, it notifies the user, and if there is no response, the recording is overwritten automatically and the data are lost. The main focus of this paper is to enable recording directly connected to a computer editor. The output of both systems (DVR and Direct Recording) is compared with an objective assessment using the Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR) parameters. The results showed an average MSE of 797.8556108 for Direct Recording and 137.4346100 for the DVR, and an average PSNR of 19.5942333 dB for Direct Recording and 27.0914258 dB for the DVR. This indicates that the DVR has much better output quality than Direct Recording.
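
    The two metrics used in the study are standard; for 8-bit frames they reduce to the short numpy sketch below (the synthetic test frames are only for demonstration).

```python
# MSE and PSNR between a reference frame and a recorded frame (8-bit video).
import numpy as np


def mse(reference, test):
    reference = reference.astype(np.float64)
    test = test.astype(np.float64)
    return np.mean((reference - test) ** 2)


def psnr(reference, test, max_value=255.0):
    error = mse(reference, test)
    if error == 0:
        return float('inf')          # identical frames
    return 10.0 * np.log10(max_value ** 2 / error)


if __name__ == '__main__':
    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
    degraded = np.clip(original + rng.normal(0, 10, original.shape), 0, 255).astype(np.uint8)
    print(round(mse(original, degraded), 2), round(psnr(original, degraded), 2))
```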

  20. A video/IMU hybrid system for movement estimation in infants.

    Science.gov (United States)

    Machireddy, Archana; van Santen, Jan; Wilson, Jenny L; Myers, Julianne; Hadders-Algra, Mijna; Xubo Song

    2017-07-01

    Cerebral palsy is a non-progressive neurological disorder occurring in early childhood that affects body movement and muscle control. Early identification can help improve outcome through therapy-based interventions. Absence of so-called "fidgety movements" is a strong predictor of cerebral palsy. Currently, infant limb movements captured through either video cameras or accelerometers are analyzed to identify fidgety movements. However, both modalities have their limitations. Video cameras do not have the high temporal resolution needed to capture subtle movements. Accelerometers have low spatial resolution and capture only relative movement. In order to overcome these limitations, we have developed a system that combines measurements from both camera and sensors to estimate the true underlying motion using an extended Kalman filter. The estimated motion achieved 84% classification accuracy in identifying fidgety movements using a support vector machine.
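
    A hedged sketch of the fusion idea, reduced to one axis and a plain linear Kalman filter, is shown below: accelerometer samples drive the prediction and slower camera position measurements correct it. The noise levels, sample rates, and the linear (rather than extended) formulation are simplifying assumptions, not the paper's filter.

```python
# Minimal sketch: 1-D Kalman filter fusing accelerometer-driven prediction
# with slower camera position measurements. Noise levels, sample rates, and
# the linear (non-extended) formulation are simplifying assumptions.
import numpy as np


def fuse(accel_samples, camera_positions, dt=0.01, camera_every=5):
    """accel_samples: IMU acceleration at interval dt; camera_positions: one per
    `camera_every` IMU samples. Returns the filtered position track."""
    # State x = [position, velocity]; acceleration enters as a control input.
    x = np.zeros(2)
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt * dt, dt])
    H = np.array([[1.0, 0.0]])
    Q = np.eye(2) * 1e-4          # process noise (IMU integration drift)
    R = np.array([[4e-4]])        # camera measurement noise (metres^2)
    track = []
    for k, accel in enumerate(accel_samples):
        # Predict with the inertial measurement.
        x = F @ x + B * accel
        P = F @ P @ F.T + Q
        # Correct whenever a camera frame is available.
        if k % camera_every == 0 and k // camera_every < len(camera_positions):
            z = camera_positions[k // camera_every]
            y = z - (H @ x)[0]
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K.flatten() * y
            P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)


if __name__ == '__main__':
    accel = np.full(100, 0.2)                     # constant 0.2 m/s^2 limb motion
    cams = [0.5 * 0.2 * (i * 5 * 0.01) ** 2 for i in range(20)]
    print(fuse(accel, cams)[-1])
```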

  1. A TBB-CUDA Implementation for Background Removal in a Video-Based Fire Detection System

    Directory of Open Access Journals (Sweden)

    Fan Wang

    2014-01-01

    This paper presents a parallel TBB-CUDA implementation for accelerating the single-Gaussian distribution model, which is effective for background removal in a video-based fire detection system. In this framework, TBB mainly handles the initialization of the estimated Gaussian model on the CPU, and CUDA performs background removal and adaptation of the model on the GPU. This implementation can exploit the combined computational power of TBB-CUDA and can be applied in real-time environments. Over 220 video sequences are utilized in the experiments. The experimental results illustrate that TBB+CUDA can achieve a higher speedup than either TBB or CUDA alone. The proposed framework can effectively overcome the disadvantages of the CPU's limited memory bandwidth and few execution units, and it reduces data transfer latency and memory latency between the CPU and GPU.
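
    A numpy sketch of the per-pixel single-Gaussian model that the paper accelerates is given below: a pixel is declared foreground when it deviates from its running mean by more than k standard deviations, and background pixels update the mean and variance. The learning rate and the 2.5-sigma threshold are common defaults, not the paper's tuned values, and no TBB or CUDA acceleration is shown.

```python
# Minimal sketch of per-pixel single-Gaussian background removal: a pixel is
# foreground when it deviates from its running mean by more than k sigma;
# background pixels update the mean/variance. Parameters are common defaults.
import numpy as np


class SingleGaussianBackground:
    def __init__(self, first_frame, learning_rate=0.02, k=2.5, min_var=25.0):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 100.0)
        self.alpha = learning_rate
        self.k = k
        self.min_var = min_var

    def apply(self, frame):
        frame = frame.astype(np.float64)
        distance = np.abs(frame - self.mean)
        foreground = distance > self.k * np.sqrt(self.var)
        # Update the model only where the pixel still looks like background.
        background = ~foreground
        self.mean[background] += self.alpha * (frame - self.mean)[background]
        self.var[background] += self.alpha * ((frame - self.mean) ** 2 - self.var)[background]
        self.var = np.maximum(self.var, self.min_var)
        return foreground.astype(np.uint8) * 255


if __name__ == '__main__':
    rng = np.random.default_rng(0)
    scene = rng.integers(100, 110, size=(120, 160)).astype(np.uint8)
    model = SingleGaussianBackground(scene)
    flame = scene.copy()
    flame[40:60, 60:90] = 250                      # bright "fire" blob
    mask = model.apply(flame)
    print(int(mask.sum() // 255), 'foreground pixels')
```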

  2. New architecture for MPEG video streaming system with backward playback support.

    Science.gov (United States)

    Fu, Chang-Hong; Chan, Yui-Lam; Ip, Tak-Piu; Siu, Wan-Chi

    2007-09-01

    MPEG digital video is becoming ubiquitous for video storage and communications. It is often desirable to perform various video cassette recording (VCR) functions such as backward playback in MPEG videos. However, the predictive processing techniques employed in MPEG severely complicate the backward-play operation. A straightforward implementation of backward playback is to transmit and decode the whole group-of-pictures (GOP), store all the decoded frames in the decoder buffer, and play the decoded frames in reverse order. This approach requires a significant buffer in the decoder, which depends on the GOP size, to store the decoded frames, and may not be possible when decoder memory is severely constrained. Another alternative is to decode the GOP up to the current frame to be displayed, and then go back and decode the GOP again up to the next frame to be displayed. This approach does not need the huge buffer, but requires much higher network bandwidth and decoder complexity. In this paper, we propose a macroblock-based algorithm for an efficient implementation of an MPEG video streaming system that provides backward playback over a network with minimal requirements on network bandwidth and decoder complexity. The proposed algorithm classifies macroblocks in the requested frame into backward macroblocks (BMBs) and forward/backward macroblocks (FBMBs). Two macroblock-based techniques are used to manipulate the different types of macroblocks in the compressed domain, and the server then sends the processed macroblocks to the client machine. For BMBs, a VLC-domain technique is adopted to reduce the number of macroblocks that need to be decoded by the decoder and the number of bits that need to be sent over the network in the backward-play operation. We then propose a newly mixed VLC/DCT-domain technique to handle FBMBs in order to further reduce the computational complexity of the decoder. With these compressed-domain techniques, the

  3. Programming Cell/BE and GPUs systems for real-time video encoding

    Science.gov (United States)

    Momcilovic, Svetislav; Sousa, Leonel

    2010-05-01

    In this work, scalable parallelization methods for computing H.264/AVC in real time on multi-core platforms, such as recent Graphics Processing Units (GPUs) and the Cell Broadband Engine (Cell/BE), are proposed. By applying Amdahl's law, the most demanding parts of the video coder were identified, and the Single Program Multiple Data and Single Instruction Multiple Data approaches were adopted for achieving real-time processing. In particular, video motion estimation and in-loop deblocking filtering were offloaded to be executed in parallel on either GPUs or Cell/BE Synergistic Processor Elements (SPEs). The limits and advantages of these two architectures when dealing with typical video coding problems, such as data dependencies and large input data, are demonstrated. We propose techniques to minimize the impact of branch divergence and branch misprediction, data misalignment, conflicts, and non-coalesced memory accesses. Moreover, data dependencies and memory size restrictions are taken into account in order to minimize synchronization and communication time overheads and to achieve the optimal workload balance given the available cores. A data-reuse technique is extensively applied to reduce communication overhead and achieve the maximum processing speedup. Experimental results show that real-time H.264/AVC is achieved in both systems by computing 30 frames per second at a resolution of 720×576 pixels, when full-pixel motion estimation is applied over 5 reference frames and a 32×32 search area. When quarter-pixel motion estimation is adopted, real-time video coding is obtained on the GPU for larger search areas and on the Cell/BE for smaller search areas.

  4. [A new laser scan system for video ophthalmoscopy. Initial clinical experiences also in relation to digital image processing].

    Science.gov (United States)

    Fabian, E; Mertz, M; Hofmann, H; Wertheimer, R; Foos, C

    1990-06-01

    The clinical advantages of a scanning laser ophthalmoscope (SLO) and video imaging of fundus pictures are described. Image quality (contrast, depth of field) and imaging possibilities (confocal stop) are assessed. Imaging with different lasers (argon, He-Ne) and changes in imaging rendered possible by confocal alignment of the imaging optics are discussed. Hard copies from video images are still of inferior quality compared to fundus photographs. Methods of direct processing and retrieval of digitally stored SLO video fundus images are illustrated by examples. Modifications for a definitive laser scanning system - in regard to the field of view and the quality of hard copies - are proposed.

  5. Feasibility of an Exoskeleton-Based Interactive Video Game System for Upper Extremity Burn Contractures.

    Science.gov (United States)

    Schneider, Jeffrey C; Ozsecen, Muzaffer Y; Muraoka, Nicholas K; Mancinelli, Chiara; Della Croce, Ugo; Ryan, Colleen M; Bonato, Paolo

    2016-05-01

    Burn contractures are common and difficult to treat. Measuring continuous joint motion would inform the assessment of contracture interventions; however, it is not standard clinical practice. This study examines use of an interactive gaming system to measure continuous joint motion data. To assess the usability of an exoskeleton-based interactive gaming system in the rehabilitation of upper extremity burn contractures. Feasibility study. Eight subjects with a history of burn injury and upper extremity contractures were recruited from the outpatient clinic of a regional inpatient rehabilitation facility. Subjects used an exoskeleton-based interactive gaming system to play 4 different video games. Continuous joint motion data were collected at the shoulder and elbow during game play. Visual analog scale for engagement, difficulty and comfort. Angular range of motion by subject, joint, and game. The study population had an age of 43 ± 16 (mean ± standard deviation) years and total body surface area burned range of 10%-90%. Subjects reported satisfactory levels of enjoyment, comfort, and difficulty. Continuous joint motion data demonstrated variable characteristics by subject, plane of motion, and game. This study demonstrates the feasibility of use of an exoskeleton-based interactive gaming system in the burn population. Future studies are needed that examine the efficacy of tailoring interactive video games to the specific joint impairments of burn survivors. Copyright © 2016 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  6. Full high-definition real-time depth estimation for three-dimensional video system

    Science.gov (United States)

    Li, Hejian; An, Ping; Zhang, Zhaoyang

    2014-09-01

    Three-dimensional (3-D) video brings people strong visual perspective experience, but also introduces large data and complexity processing problems. The depth estimation algorithm is especially complex and it is an obstacle for real-time system implementation. Meanwhile, high-resolution depth maps are necessary to provide a good image quality on autostereoscopic displays which deliver stereo content without the need for 3-D glasses. This paper presents a hardware implementation of a full high-definition (HD) depth estimation system that is capable of processing full HD resolution images with a maximum processing speed of 125 fps and a disparity search range of 240 pixels. The proposed field-programmable gate array (FPGA)-based architecture implements a fusion strategy matching algorithm for efficiency design. The system performs with high efficiency and stability by using a full pipeline design, multiresolution processing, synchronizers which avoid clock domain crossing problems, efficient memory management, etc. The implementation can be included in the video systems for live 3-D television applications and can be used as an independent hardware module in low-power integrated applications.

  7. Video-game play induces plasticity in the visual system of adults with amblyopia.

    Directory of Open Access Journals (Sweden)

    Roger W Li

    2011-08-01

    Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically, 20 adults with amblyopia (age 15-61 y; visual acuity 20/25-20/480), with no manifest ocular disease or nystagmus, were recruited and allocated into three intervention groups: an action videogame group (n = 10), a non-action videogame group (n = 3), and a crossover control group (n = 7). Our experiments show that playing video games (both action and non-action games) for a short period of time (40-80 h, 2 h/d) using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%), positional acuity (16%), spatial attention (37%), and stereopsis (54%). Using a cross-over experimental design (first 20 h: occlusion therapy; next 40 h: videogame therapy), we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7%) and increased processing efficiency (33%). Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia.

  8. Quality Analysis of Massive High-Definition Video Streaming in Two-Tiered Embedded Camera-Sensing Systems

    OpenAIRE

    Joongheon Kim; Eun-Seok Ryu

    2014-01-01

    This paper presents the quality analysis results of high-definition video streaming in two-tiered camera sensor network applications. In the camera-sensing system, multiple cameras sense visual scenes in their target fields and transmit the video streams via IEEE 802.15.3c multigigabit wireless links. However, the wireless transmission introduces interferences to the other links. This paper analyzes the capacity degradation due to the interference impacts from the camera-sensing nodes to the ...

  9. Morphometric analysis of rat femoral vessels under a video magnification system

    Directory of Open Access Journals (Sweden)

    Rui Sergio Monteiro de Barros

    Full Text Available Abstract The right femoral vessels of 80 rats were identified and dissected. External lengths and diameters of femoral arteries and femoral veins were measured using either a microscope or a video magnification system. Findings were correlated to animals’ weights. Mean length was 14.33 mm for both femoral arteries and femoral veins, mean diameter of arteries was 0.65 mm and diameter of veins was 0.81 mm. In our sample, rats’ body weights were only correlated with the diameter of their femoral veins.

  10. Application of a full body inertial measurement system in alpine skiing: a comparison with an optical video based system.

    Science.gov (United States)

    Krüger, Andreas; Edelmann-Nusser, Jürgen

    2010-11-01

    This study aims at determining the accuracy of a full body inertial measurement system in a real skiing environment in comparison with an optical video based system. Recent studies have shown the use of inertial measurement systems for the determination of kinematical parameters in alpine skiing. However, a quantitative validation of a full body inertial measurement system for the application in alpine skiing is so far not available. For the purpose of this study, a skier performed a test-run equipped with a full body inertial measurement system in combination with a DGPS. In addition, one turn of the test-run was analyzed by an optical video based system. With respect to the analyzed angles, a maximum mean difference of 4.9° was measured. No differences in the measured angles between the inertial measurement system and the combined usage with a DGPS were found. Concerning the determination of the skier's trajectory, an additional system (e.g., DGPS) must be used. As opposed to optical methods, the main advantages of the inertial measurement system are the determination of kinematical parameters without the limitation of restricted capture volume, and small time costs for the measurement preparation and data analysis.

  11. Usability Evaluation of a Video Conferencing System in a University’s Classroom

    DEFF Research Database (Denmark)

    Khalid, Md. Saifuddin; Hossan, Md. Iqbal

    2016-01-01

    The integration of video conferencing systems (VCS) has increased significantly in the classrooms and administrative practices of higher education institutions. The VCSs discussed in the existing literature can be broadly categorized as desktop systems (e.g. Scopia), WebRTC or Real-Time Communications (e.g. Google Hangout, Adobe Connect, Cisco WebEx, and appear.in), and dedicated systems (e.g. Polycom). There is a lack of empirical study on usability evaluation of such interactive systems in educational contexts. This study identifies usability errors and measures user satisfaction of a dedicated VCS […] analysis of 12 user responses results in a below-average score. A post-study system test by the vendor identified cabling and setup errors. Applying SUMI followed by qualitative methods might enrich the evaluation outcomes.

  12. USABILITY TESTING OF JAPANESE CAPTIONS SEGMENTATION SYSTEM TO SCAFFOLD BEGINNERS TO COMPREHEND JAPANESE VIDEOS

    Directory of Open Access Journals (Sweden)

    Ya-Fei Yang

    2013-06-01

    Full Text Available A major learning difficulty of Japanese foreign language (JFL) learners is the complex composition of two syllabaries, hiragana and katakana, and kanji characters adopted from logographic Chinese ones. As the number of Japanese language learners increases, computer-assisted Japanese language education gradually gains more attention. This study aimed to adopt a Japanese word segmentation system to help JFL learners overcome literacy problems. This study adopted MeCab, a Japanese morphological analyzer and part-of-speech (POS) tagger, to segment Japanese texts into separate morphemes by adding spaces and to attach POS tags to each morpheme for beginners. The participants were asked to take part in three experimental activities involving watching two Japanese videos with general or segmented Japanese captions, and to complete the Nielsen's Attributes of Usability (NAU) survey and the After Scenario Questionnaire (ASQ) to evaluate the usability of the learning activities. The results of the system evaluation showed that the videos with the segmented captions could increase the participants' learning motivation and willingness to adopt the word segmentation system to learn Japanese.
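
    As a small illustration of the segmentation step described above, the sketch below uses MeCab through the mecab-python3 binding; the binding and the sample sentence are assumptions, since the study's tool chain beyond MeCab itself is not specified.

        import MeCab

        text = "日本語の字幕を分かち書きします。"

        # "-Owakati" inserts spaces between morphemes, as in the segmented captions.
        wakati = MeCab.Tagger("-Owakati")
        print(wakati.parse(text).strip())

        # The default output attaches part-of-speech information to each morpheme.
        tagger = MeCab.Tagger()
        print(tagger.parse(text))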

  13. Study on a High Compression Processing for Video-on-Demand e-learning System

    Science.gov (United States)

    Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    The authors proposed a high-quality, small-capacity lecture-video-file creating system for a distance e-learning system. Examining the features of the lecturing scene, the authors ingeniously employ two kinds of image-capturing equipment having complementary characteristics: one is a digital video camera with a low resolution and a high frame rate, and the other is a digital still camera with a high resolution and a very low frame rate. By managing the two kinds of image-capturing equipment and integrating them with image processing, course materials can be produced with greatly reduced file size: the course materials satisfy the requirements both for the temporal resolution needed to see the lecturer's point-indicating actions and for the high spatial resolution needed to read small written letters. As a result of a comparative experiment, the e-lecture using the proposed system was confirmed to be more effective than an ordinary lecture from the viewpoint of educational effect.

  14. A bio-inspired system for spatio-temporal recognition in static and video imagery

    Science.gov (United States)

    Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas

    2007-04-01

    This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., office vs. kitchen scene. The second extension is the Event Recognition Engine (ERE) that recognizes spatio-temporal sequences or events in sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE) neural networks. A series of Default ARTMAP classifiers ascribes event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe dataset. The ERE was tested on real world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.

  15. An optimized video system for augmented reality in endodontics: a feasibility study.

    Science.gov (United States)

    Bruellmann, D D; Tjaden, H; Schwanecke, U; Barth, P

    2013-03-01

    We propose an augmented reality system for the reliable detection of root canals in video sequences based on k-nearest neighbor color classification, and introduce a simple geometric criterion for teeth. The new software was implemented using C++, Qt, and the image processing library OpenCV. Teeth are detected in video images to restrict the segmentation of the root canal orifices by using a k-nearest neighbor algorithm. The locations of the root canal orifices were determined using Euclidean distance-based image segmentation. A set of 126 human teeth with known and verified locations of the root canal orifices was used for evaluation. The software detects root canal orifices for automatic classification of the teeth in video images and stores the location and size of the found structures. Overall, 287 of 305 root canals were correctly detected. The overall sensitivity was about 94%. Classification accuracy for molars ranged from 65.0 to 81.2% and from 85.7 to 96.7% for premolars. The realized software shows that observations made in anatomical studies can be exploited to automate real-time detection of root canal orifices and tooth classification in a software system. Automatic storage of the location, size, and orientation of the found structures with this software can be used for future anatomical studies. Thus, statistical tables with canal locations will be derived, which can improve anatomical knowledge of the teeth to alleviate root canal detection in the future. For this purpose the software is freely available at: http://www.dental-imaging.zahnmedizin.uni-mainz.de/.
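
    A hedged Python sketch of k-nearest-neighbor color classification of video pixels in the spirit of the approach above; the training colors, class labels, and file name are invented for illustration and do not reproduce the authors' C++ implementation or training data.

        import cv2
        import numpy as np

        # Hypothetical RGB training samples: 0 = background, 1 = tooth, 2 = canal orifice.
        samples = np.array([[200, 190, 180], [220, 210, 200],   # tooth-like colors
                            [40, 20, 20],    [60, 30, 25],      # dark orifice colors
                            [90, 120, 160],  [100, 130, 170]],  # background colors
                           dtype=np.float32)
        labels = np.array([[1], [1], [2], [2], [0], [0]], dtype=np.float32)

        knn = cv2.ml.KNearest_create()
        knn.train(samples, cv2.ml.ROW_SAMPLE, labels)

        frame = cv2.imread("endoscope_frame.png")                   # placeholder image
        pixels = frame.reshape(-1, 3)[:, ::-1].astype(np.float32)   # BGR -> RGB
        _, result, _, _ = knn.findNearest(pixels, k=3)

        # Pixels voted into class 2 form candidate root-canal-orifice regions.
        mask = (result.reshape(frame.shape[:2]) == 2).astype(np.uint8) * 255
        cv2.imwrite("orifice_mask.png", mask)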

  16. Reward system and temporal pole contributions to affective evaluation during a first person shooter video game

    Science.gov (United States)

    2011-01-01

    Background Violent content in video games evokes many concerns but there is little research concerning its rewarding aspects. It was demonstrated that playing a video game leads to striatal dopamine release. It is unclear, however, which aspects of the game cause this reward system activation and if violent content contributes to it. We combined functional Magnetic Resonance Imaging (fMRI) with individual affect measures to address the neuronal correlates of violence in a video game. Results Thirteen male German volunteers played a first-person shooter game (Tactical Ops: Assault on Terror) during fMRI measurement. We defined success as eliminating opponents, and failure as being eliminated themselves. Affect was measured directly before and after game play using the Positive and Negative Affect Schedule (PANAS). Failure and success events evoked increased activity in visual cortex but only failure decreased activity in orbitofrontal cortex and caudate nucleus. A negative correlation between negative affect and responses to failure was evident in the right temporal pole (rTP). Conclusions The deactivation of the caudate nucleus during failure is in accordance with its role in reward-prediction error: it occurred whenever the subject missed an expected reward (being eliminated rather than eliminating the opponent). We found no indication that violence events were directly rewarding for the players. We addressed subjective evaluations of affect change due to gameplay to study the reward system. Subjects reporting greater negative affect after playing the game had less rTP activity associated with failure. The rTP may therefore be involved in evaluating the failure events in a social context, to regulate the players' mood. PMID:21749711

  17. Reward system and temporal pole contributions to affective evaluation during a first person shooter video game

    Directory of Open Access Journals (Sweden)

    Weber René

    2011-07-01

    Full Text Available Abstract Background Violent content in video games evokes many concerns but there is little research concerning its rewarding aspects. It was demonstrated that playing a video game leads to striatal dopamine release. It is unclear, however, which aspects of the game cause this reward system activation and if violent content contributes to it. We combined functional Magnetic Resonance Imaging (fMRI) with individual affect measures to address the neuronal correlates of violence in a video game. Results Thirteen male German volunteers played a first-person shooter game (Tactical Ops: Assault on Terror) during fMRI measurement. We defined success as eliminating opponents, and failure as being eliminated themselves. Affect was measured directly before and after game play using the Positive and Negative Affect Schedule (PANAS). Failure and success events evoked increased activity in visual cortex but only failure decreased activity in orbitofrontal cortex and caudate nucleus. A negative correlation between negative affect and responses to failure was evident in the right temporal pole (rTP). Conclusions The deactivation of the caudate nucleus during failure is in accordance with its role in reward-prediction error: it occurred whenever the subject missed an expected reward (being eliminated rather than eliminating the opponent). We found no indication that violence events were directly rewarding for the players. We addressed subjective evaluations of affect change due to gameplay to study the reward system. Subjects reporting greater negative affect after playing the game had less rTP activity associated with failure. The rTP may therefore be involved in evaluating the failure events in a social context, to regulate the players' mood.

  18. Reward system and temporal pole contributions to affective evaluation during a first person shooter video game.

    Science.gov (United States)

    Mathiak, Krystyna A; Klasen, Martin; Weber, René; Ackermann, Hermann; Shergill, Sukhwinder S; Mathiak, Klaus

    2011-07-12

    Violent content in video games evokes many concerns but there is little research concerning its rewarding aspects. It was demonstrated that playing a video game leads to striatal dopamine release. It is unclear, however, which aspects of the game cause this reward system activation and if violent content contributes to it. We combined functional Magnetic Resonance Imaging (fMRI) with individual affect measures to address the neuronal correlates of violence in a video game. Thirteen male German volunteers played a first-person shooter game (Tactical Ops: Assault on Terror) during fMRI measurement. We defined success as eliminating opponents, and failure as being eliminated themselves. Affect was measured directly before and after game play using the Positive and Negative Affect Schedule (PANAS). Failure and success events evoked increased activity in visual cortex but only failure decreased activity in orbitofrontal cortex and caudate nucleus. A negative correlation between negative affect and responses to failure was evident in the right temporal pole (rTP). The deactivation of the caudate nucleus during failure is in accordance with its role in reward-prediction error: it occurred whenever the subject missed an expected reward (being eliminated rather than eliminating the opponent). We found no indication that violence events were directly rewarding for the players. We addressed subjective evaluations of affect change due to gameplay to study the reward system. Subjects reporting greater negative affect after playing the game had less rTP activity associated with failure. The rTP may therefore be involved in evaluating the failure events in a social context, to regulate the players' mood.

  19. Three-dimensional integral television using extremely high-resolution video system with 4,000 scanning lines

    Science.gov (United States)

    Okano, Fumio; Kawakita, Masahiro; Arai, Jun; Sasaki, Hisayuki; Yamashita, Takayuki; Sato, Masahito; Suehiro, Koya; Haino, Yasuyuki

    2007-09-01

    The integral method enables observers to see 3D images like real objects. It requires extremely high resolution for both capture and display stages. We present an experimental 3D television system based on the integral method using an extremely high-resolution video system. The video system has 4,000 scanning lines using the diagonal offset method for two green channels. The number of elemental lenses in the lens array is 140 (vertical) × 182 (horizontal). The viewing zone angle is wider than 20 degrees in practice. This television system can capture 3D objects and provides full color and full parallax 3D images in real time.

  20. iTRAC : intelligent video compression for automated traffic surveillance systems.

    Science.gov (United States)

    2010-08-01

    Non-intrusive video imaging sensors are commonly used in traffic monitoring and surveillance. For some applications it is necessary to transmit the video data over communication links. However, due to increased requirements of bitrate this mean...

  1. Inter-frame Collusion Attack in SS-N Video Watermarking System

    OpenAIRE

    Yaser Mohammad Taheri; Alireza Zolghadr–asli; Mehran Yazdi

    2009-01-01

    Video watermarking is usually considered as watermarking of a set of still images. In the frame-by-frame watermarking approach, each video frame is seen as a single watermarked image, so the collusion attack is more critical in video watermarking. If the same or a redundant watermark is used for embedding in every frame of the video, the watermark can be estimated and then removed by a watermark estimation remodulation (WER) attack. Also, if uncorrelated watermarks are used for every frame, these watermarks c...

  2. Integral three-dimensional television using a 2000-scanning-line video system

    Science.gov (United States)

    Arai, Jun; Okui, Makoto; Yamashita, Takayuki; Okano, Fumio

    2006-03-01

    We have developed an integral three-dimensional (3-D) television that uses a 2000-scanning-line video system that can shoot and display 3-D color moving images in real time. We had previously developed an integral 3-D television that used a high-definition television system. The new system uses approximately 6 times as many elemental images [160 (horizontal) × 118 (vertical) elemental images] arranged at approximately 1.5 times the density to further improve the picture quality of the reconstructed image. Compared with the previous system, an image near the lens array can be reconstructed at approximately 1.9 times the spatial frequency, and the viewing angle is approximately 1.5 times as wide.

  3. Real-time video-on-demand system based on distributed servers and an agent-oriented application

    Science.gov (United States)

    Takahata, Minoru; Uemori, Akira; Nakano, Hirotaka

    1996-02-01

    This video-on-demand service is constructed of distributed servers, including video servers that supply real-time MPEG-1 video and audio, real-time MPEG-1 encoders, and an application server that supplies additional text information and agents for retrieval. This system has three distinctive features that enable it to provide multi-viewpoint access to real-time visual information: (1) The terminal application uses an agent-oriented approach that allows the system to be easily extended. The agents are implemented using a commercial authoring tool plus additional objects that communicate with the video servers using TCP/IP protocols. (2) The application server manages the agents, automatically processes text information, and is able to handle unexpected alterations of the contents. (3) The distributed system has an economical, flexible architecture for storing long video streams. The real-time MPEG-1 encoder system is based on multi-channel phase-shifting processing. We also describe a practical application of this system, a prototype TV-on-demand service called TVOD. This provides access to broadcast television programs for the previous week.

  4. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    Energy Technology Data Exchange (ETDEWEB)

    Pardini, A.F.

    1998-01-27

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This sensor will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  5. The Effect of Motion Artifacts on Near-Infrared Spectroscopy (NIRS) Data and Proposal of a Video-NIRS System

    Directory of Open Access Journals (Sweden)

    Masayuki Satoh

    2017-11-01

    Full Text Available Aims: The aims of this study were (1) to investigate the influence of physical movement on near-infrared spectroscopy (NIRS) data, (2) to establish a video-NIRS system which simultaneously records NIRS data and the subject's movement, and (3) to measure the oxygenated hemoglobin (oxy-Hb) concentration change (Δoxy-Hb) during a word fluency (WF) task. Experiment 1: In 5 healthy volunteers, we measured the oxy-Hb and deoxygenated hemoglobin (deoxy-Hb) concentrations during 11 kinds of facial, head, and extremity movements. The probes were set in the bilateral frontal regions. The deoxy-Hb concentration was increased in 85% of the measurements. Experiment 2: Using a pillow on the backrest of the chair, we established the video-NIRS system with data acquisition and video capture software. One hundred and seventy-six elderly people performed the WF task. The deoxy-Hb concentration was decreased in 167 subjects (95%). Experiment 3: Using the video-NIRS system, we measured the Δoxy-Hb and compared it with the results of the WF task. Δoxy-Hb was significantly correlated with the number of words. Conclusion: Like the blood oxygen level-dependent imaging effect in functional MRI, the deoxy-Hb concentration will decrease if the data correctly reflect the change in neural activity. The video-NIRS system might be useful to collect NIRS data by recording the waveforms and the subject's appearance simultaneously.

  6. Design and Implementation of Mobile Car with Wireless Video Monitoring System Based on STC89C52

    Directory of Open Access Journals (Sweden)

    Yang Hong

    2014-05-01

    Full Text Available With the rapid development of wireless networks and image acquisition technology, wireless video transmission technology has been widely applied in various communication systems. Traditional video monitoring technology is restricted by conditions such as layout, environment, relatively large volume, cost, and so on. In view of this problem, this paper proposes a method in which a mobile car is equipped with a wireless video monitoring system. The mobile car, which provides functions such as detection, video acquisition, and wireless data transmission, is developed based on an STC89C52 Micro Control Unit (MCU) and a WiFi router. Firstly, information such as images, temperature, and humidity is processed by the MCU and communicated to the router, and then returned by the WiFi router to the host phone. Secondly, control information issued by the host phone is received by the WiFi router and sent to the MCU, which then issues the relevant instructions. Lastly, the wireless transmission of video images and the remote control of the car are realized. The results prove that the system has features such as simple operation, high stability, fast response, low cost, strong flexibility, and wide applicability. The system has practical value and is suitable for wider adoption.

  7. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    Science.gov (United States)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to the predicted landing position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind a flying shuttlecock; this is a kind of background noise that makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
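
    The prediction step can be illustrated with the simplified Python sketch below, which extrapolates the measured 3D position and velocity under gravity alone; air drag, which strongly affects a real shuttlecock, and the robot's actual model are deliberately left out, and the numbers are placeholders.

        import math

        def predict_landing(pos, vel, g=9.81):
            """pos = (x, y, z) in metres with z up; vel = (vx, vy, vz) in m/s."""
            x, y, z = pos
            vx, vy, vz = vel
            # Solve z + vz*t - 0.5*g*t^2 = 0 for the positive root.
            t = (vz + math.sqrt(vz * vz + 2.0 * g * z)) / g
            return x + vx * t, y + vy * t, t

        lx, ly, t_fall = predict_landing(pos=(1.0, 2.5, 3.0), vel=(-2.0, -4.5, 1.0))
        print(f"predicted landing at ({lx:.2f}, {ly:.2f}) m after {t_fall:.2f} s")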

  8. TND: a thyroid nodule detection system for analysis of ultrasound images and videos.

    Science.gov (United States)

    Keramidas, Eystratios G; Maroulis, Dimitris; Iakovidis, Dimitris K

    2012-06-01

    In this paper, we present a computer-aided-diagnosis (CAD) system prototype, named TND (Thyroid Nodule Detector), for the detection of nodular tissue in ultrasound (US) thyroid images and videos acquired during thyroid US examinations. The proposed system incorporates an original methodology that involves a novel algorithm for automatic definition of the boundaries of the thyroid gland, and a novel approach for the extraction of noise resilient image features effectively representing the textural and the echogenic properties of the thyroid tissue. Through extensive experimental evaluation on real thyroid US data, its accuracy in thyroid nodule detection has been estimated to exceed 95%. These results attest to the feasibility of the clinical application of TND, for the provision of a second more objective opinion to the radiologists by exploiting image evidences.

  9. Modification and Validation of an Automotive Data Processing Unit, Compressed Video System, and Communications Equipment

    Energy Technology Data Exchange (ETDEWEB)

    Carter, R.J.

    1997-04-01

    The primary purpose of the "modification and validation of an automotive data processing unit (DPU), compressed video system, and communications equipment" cooperative research and development agreement (CRADA) was to modify and validate both hardware and software, developed by Scientific Atlanta, Incorporated (S-A) for defense applications (e.g., rotary-wing airplanes), for the commercial sector surface transportation domain (i.e., automobiles and trucks). S-A also furnished a state-of-the-art compressed video digital storage and retrieval system (CVDSRS), and off-the-shelf data storage and transmission equipment to support the data acquisition system for crash avoidance research (DASCAR) project conducted by Oak Ridge National Laboratory (ORNL). In turn, S-A received access to hardware and technology related to DASCAR. DASCAR was subsequently removed completely and installation was repeated a number of times to gain an accurate idea of complete installation, operation, and removal of DASCAR. Upon satisfactory completion of the DASCAR construction and preliminary shakedown, ORNL provided NHTSA with an operational demonstration of DASCAR at their East Liberty, OH test facility. The demonstration included an on-the-road demonstration of the entire data acquisition system using NHTSA's test track. In addition, the demonstration also consisted of a briefing, containing the following: ORNL generated a plan for validating the prototype data acquisition system with regard to: removal of DASCAR from an existing vehicle, and installation and calibration in other vehicles; reliability of the sensors and systems; data collection and transmission process (data integrity); impact on the drivability of the vehicle and obtrusiveness of the system to the driver; data analysis procedures; conspicuousness of the vehicle to other drivers; and DASCAR installation and removal training and documentation. In order to identify any operational problems not captured by the systems

  10. Remote stereoscopic video play platform for naked eyes based on the Android system

    Science.gov (United States)

    Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng

    2014-11-01

    As people's quality of life has improved significantly, traditional 2D video technology can no longer satisfy the desire for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video play platform. The platform consists of a server and clients. The server is used for the transmission of different formats of video, and the client is responsible for receiving the remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server. Live555 is a cross-platform open-source project which provides solutions for streaming media, such as the RTSP protocol, and supports the transmission of multiple video formats. At the receiving end, we use our laboratory's own player. The player for Android, which has all the basic functions of an ordinary player and is able to play normal 2D video, is the base structure for redevelopment; RTSP is also implemented in this structure for communication. In order to achieve stereoscopic display, pixel rearrangement is performed in the player's decoding part. The decoding part is native code called through the JNI interface, so that video frames can be extracted more efficiently. The video formats that we process are left-right, top-bottom, and nine-grid. In the design and development, a number of key technologies from Android application development have been employed, including wireless transmission, pixel restructuring, and JNI calls. By employing these key technologies, the design was completed. After some updates and optimizations, the video player can play remote 3D video well anytime and anywhere, meeting users' requirements.
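
    The pixel-rearrangement idea can be sketched as follows for the left-right format, interleaving the two views column by column as a simple autostereoscopic layout; the real player performs this in native (JNI) code and supports more formats, so this NumPy version is purely conceptual.

        import numpy as np

        def interleave_side_by_side(frame):
            """frame: H x W x 3 array, left view in columns [0, W/2), right view in [W/2, W)."""
            h, w, c = frame.shape
            half = w // 2
            left, right = frame[:, :half], frame[:, half:2 * half]
            out = np.empty((h, 2 * half, c), dtype=frame.dtype)
            out[:, 0::2] = left    # even display columns carry the left view
            out[:, 1::2] = right   # odd display columns carry the right view
            return out

        # Synthetic check: left half dark, right half bright -> alternating columns.
        demo = np.zeros((4, 8, 3), dtype=np.uint8)
        demo[:, 4:] = 255
        print(interleave_side_by_side(demo)[0, :, 0])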

  11. A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

    Science.gov (United States)

    Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.

    2012-02-01

    Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR) imaging] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates a mean re-projection accuracy of (0.7 ± 0.3) pixels and a mean target registration error of (2.3 ± 1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
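
    The quoted re-projection figure corresponds to the standard calibration check sketched below in Python with OpenCV: calibrate from checkerboard views and report the mean re-projection error in pixels. The checkerboard geometry, directory name, and error formula are illustrative assumptions, not the study's exact procedure.

        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)                                   # inner checkerboard corners
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_pts, img_pts, size = [], [], None
        for path in glob.glob("calib_frames/*.png"):       # placeholder directory
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)
                size = gray.shape[::-1]

        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)

        errors = []
        for op, ip, rv, tv in zip(obj_pts, img_pts, rvecs, tvecs):
            proj, _ = cv2.projectPoints(op, rv, tv, K, dist)
            errors.append(cv2.norm(ip, proj, cv2.NORM_L2) / len(proj))
        print("mean re-projection error: %.2f px" % float(np.mean(errors)))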

  12. A methodology to evaluate the effect of video compression on the performance of analytics systems

    Science.gov (United States)

    Tsifouti, Anastasia; Nasralla, Moustafa M.; Razaak, Manzoor; Cope, James; Orwell, James M.; Martini, Maria G.; Sage, Kingsley

    2012-10-01

    The Image Library for Intelligent Detection Systems (i-LIDS) provides benchmark surveillance datasets for analytics systems. This paper proposes a methodology to investigate the effect of compression and frame-rate reduction, and to recommend an appropriate suite of degraded datasets for public release. The library consists of six scenarios, including Sterile Zone (SZ) and Parked Vehicle (PV), which are investigated using two different compression algorithms (H.264 and JPEG) and a number of detection systems. PV has higher spatio-temporal complexity than SZ. Compression performance is dependent on scene content; hence PV will require larger bit-streams than SZ for any given distortion rate. The study includes both industry-standard algorithms (for transmission) and CCTV recorders (for storage). CCTV recorders generally use proprietary formats, which may significantly affect the visual information. Encoding standards such as H.264 and JPEG use the Discrete Cosine Transform (DCT) technique, which introduces blocking artefacts. The H.264 compression algorithm follows a hybrid predictive coding approach to achieve high compression gains, exploiting both spatial and temporal redundancy. The highly predictive approach of H.264 may introduce more artefacts, resulting in a greater effect on the performance of analytics systems than JPEG. The paper describes the two main components of the proposed methodology to measure the effect of degradation on analytics performance: first, standard tests using the 'f-measure' to evaluate performance on a range of degraded video sets; second, characterisation of the datasets, using quantification of scene features defined with image processing techniques. This characterisation permits an analysis of the points of failure introduced by the video degradation.
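
    A minimal sketch of the 'f-measure' scoring referred to above: counts of true positives, false positives, and false negatives from comparing a detector's output with ground truth are combined into precision, recall, and F1. The counts here are invented, and the matching rule that produces them is not part of this sketch.

        def f_measure(tp, fp, fn):
            """Precision, recall and F1 from true/false positive and false negative counts."""
            precision = tp / (tp + fp)
            recall = tp / (tp + fn)
            f1 = 2 * precision * recall / (precision + recall)
            return precision, recall, f1

        # e.g. a degraded dataset where compression has cost the detector some events
        p, r, f1 = f_measure(tp=87, fp=9, fn=13)
        print(f"precision={p:.2f} recall={r:.2f} f-measure={f1:.2f}")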

  13. USING A DIGITAL VIDEO CAMERA AS THE SMART SENSOR OF THE SYSTEM FOR AUTOMATIC PROCESS CONTROL OF GRANULAR FODDER MOLDING

    Directory of Open Access Journals (Sweden)

    M. M. Blagoveshchenskaya

    2014-01-01

    Full Text Available The most important operation in granular mixed fodder production is the molding process. The properties of granular mixed fodder are defined during this process; they determine the production process and final product quality. The article analyzes the possibility of using a digital video camera as an intelligent sensor for the control system of the production process. A parametric model of the process of molding bundles from granular fodder mass is presented, and the dynamic characteristics of the molding process were determined. A mathematical model of the motion of a bundle of granular fodder mass after the matrix holes was developed. A mathematical model of the automatic control system (ACS) that uses an etalon (reference) video frame as the set point was built in the MATLAB software environment. As a parameter of the bundle molding process, it is proposed to use the value of the specific area determined by mathematical treatment of the video frame. Algorithms were developed to determine changes in the structural and mechanical properties of the feed mass from the video frame images. Digital video of various operating modes of the molding machine was recorded, and after mathematical processing of the video the transfer functions for the change of the specific area, used as the adjustable parameter, were determined. Structural and functional diagrams of the system for regulating the fodder bundle molding process with the use of a digital video camera were built and analyzed. Based on the solution of the equations of fluid dynamics, a mathematical model of bundle motion after leaving the matrix hole was obtained; in addition to viscosity, the creep property characteristic of the feed mass was considered. The mathematical model of the ACS for the bundle molding process, allowing investigation of the transient processes which occur in a control system that uses a digital video camera as the smart sensor, was developed in Simulink.

  14. Internet Protocol Display Sharing Solution for Mission Control Center Video System

    Science.gov (United States)

    Brown, Michael A.

    2009-01-01

    With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) provides the ability to visually enhance real-time applications as a broadcast channel that decision-making flight controllers come to rely on, but it can be difficult to maintain and costly. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into new innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will be provided to enable sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and other offsite centers using IP networks. It has been stated that Internet Protocol (IP) applications are easily readied to substitute for the current visual architecture, but quality and speed may need to be forfeited to reduce cost and improve maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be rather clumsy and difficult to configure and manage for the many operators and products. The DS process shall collectively automate the sharing of images while focusing on such characteristics as managing bandwidth, encrypting security measures, synchronizing disconnections from loss of signal / loss of acquisition, and performance latency, and shall provide functions such as scalability, multi-sharing, ease of initial integration / sustained configuration, integration with video adjustment packages, collaborative tools, host / recipient controllability, and, as the utmost priority, an enterprise solution that provides ownership to the whole

  15. Transmission of compressed video

    Science.gov (United States)

    Pasch, H. L.

    1990-09-01

    An overview of video coding is presented. The aim is not to give a technical summary of possible coding techniques, but to address subjects related to video compression in general and to the transmission of compressed video in more detail. Bit rate reduction is in general possible by removing redundant information, removing information the eye does not use anyway, and reducing the quality of the video. The codecs which are used for reducing the bit rate can be divided into two groups: Constant Bit rate Codecs (CBC's), which keep the bit rate constant but vary the video quality; and Variable Bit rate Codecs (VBC's), which keep the video quality constant by varying the bit rate. VBC's can in general reach a higher video quality than CBC's using less bandwidth, but they need a transmission system that allows the bandwidth of a connection to fluctuate in time. The current and the next generation of the PSTN do not allow this; ATM might. There are several factors which influence the quality of video: the bit error rate of the transmission channel, slip rate, packet loss rate/packet insertion rate, end-to-end delay, phase shift between voice and video, and bit rate. Based on the bit rate of the coded video, the following classification of coded video can be made: High Definition Television (HDTV); Broadcast Quality Television (BQTV); video conferencing; and video telephony. The properties of these classes are given. The video conferencing and video telephony equipment available now and in the next few years can be divided into three categories: conforming to the 1984 CCITT standard for video conferencing; conforming to the 1988 CCITT standard; and conforming to no standard.

  16. Development of a cell phone-based video streaming system for persons with early stage Alzheimer's disease.

    Science.gov (United States)

    Donnelly, Mark P; Nugent, Chris D; Craig, David; Passmore, Peter; Mulvenna, Maurice

    2008-01-01

    The current paper presents details regarding the early development of a memory prompt solution for persons with early dementia. Using everyday technology, in the form of a cell phone, video reminders are delivered to assist with daily activities. The proposed CPVS system will permit carers to record and schedule video reminders remotely using a standard personal computer and web cam. The aim of the three-year project is that, through the frequent delivery of helpful video reminders, a 'virtual carer' will be present with the person with dementia at all times. The first prototype of the system has been fully implemented, with the first field trial scheduled to take place in May 2008. Initially, only three patient-carer dyads will be involved; however, the second field trial aims to involve 30 dyads in the study. Details of the first prototype and the methods of evaluation are presented herein.

  17. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor that degrades the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  18. Verification of Strength of the Welded Joints by using of the Aramis Video System

    Directory of Open Access Journals (Sweden)

    Pała Tadeusz

    2017-03-01

    Full Text Available The paper presents the results of a strength analysis of two types of welded joints in high-strength steel S960QC, made using conventional and laser welding technologies. The hardness distributions, tensile properties, and fracture toughness were determined for the weld material and the heat-affected zone (HAZ) material of both types of welded joints. The test results showed the advantage of the laser-welded joints over the conventional ones. The tensile properties and fracture toughness in all areas of the laser joints are higher than in the conventional ones. The heat-affected zone of the conventional welded joints is a weak area, where the tensile properties are lower than in the base material. Verification of the tensile tests, which were carried out using the Aramis video system, confirmed this assumption. The highest level of strain was observed in the HAZ material, and the failure process also occurred in the HAZ of the conventional welded joint.

  19. Gaming the System: Video Games as a Theoretical Framework for Instructional Design

    CERN Document Server

    Beatty, Ian D

    2014-01-01

    In order to facilitate analyzing video games as learning systems and instructional designs as games, we present a theoretical framework that integrates ideas from a broad range of literature. The framework describes games in terms of four layers, all sharing similar structural elements and dynamics: a micro-level game focused on immediate problem-solving and skill development, a macro-level game focused on the experience of the game world and story and identity development, and two meta-level games focused on building or modifying the game and on social interactions around it. Each layer casts gameplay as a co-construction of the game and the player, and contains three dynamical feedback loops: an exploratory learning loop, an intrinsic motivation loop, and an identity loop.

  20. Video and thermal imaging system for monitoring interiors of high temperature reaction vessels

    Science.gov (United States)

    Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL

    2012-01-10

    A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.
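
    The intensity-ratio idea can be illustrated with the simplified Python sketch below, which maps the red-to-green ratio of an RGB frame to a relative thermal view (two-color ratio pyrometry). Converting such a ratio to absolute temperature requires camera and emissivity calibration constants that are not included here, and the file names are placeholders.

        import cv2
        import numpy as np

        frame = cv2.imread("refractory_wall.png").astype(np.float32)   # placeholder image
        b, g, r = cv2.split(frame)

        # Red-to-green intensity ratio; hotter surfaces shift this ratio.
        ratio = r / (g + 1e-3)

        # Scale the ratio to 8 bits and colorize it as a relative thermal map.
        rel = cv2.normalize(ratio, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        cv2.imwrite("relative_thermal_map.png", cv2.applyColorMap(rel, cv2.COLORMAP_JET))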

  1. A Fuzzy Logic-Based Video Subtitle and Caption Coloring System

    Directory of Open Access Journals (Sweden)

    Mohsen Davoudi

    2012-01-01

    Full Text Available An approach is proposed for automatic adaptive subtitle coloring using a fuzzy logic-based algorithm. The system changes the color of the video subtitle/caption to a "pleasant" color according to color harmony and the visual perception of the image background colors. In the fuzzy analyzer unit, using the RGB histograms of the background image, the R, G, and B values for the color of the subtitle/caption are computed using fixed fuzzy IF-THEN rules fully derived from color harmony theories to satisfy complementary-color and subtitle-background color harmony conditions. A real-time hardware structure has been proposed for the implementation of the front-end processing unit as well as the fuzzy analyzer unit.
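
    A crisp (non-fuzzy) Python sketch of the underlying idea follows: estimate the dominant background color of the subtitle region from its RGB histograms and suggest roughly the complementary color. The paper's actual IF-THEN rule base and harmony conditions are not reproduced, and the file name is a placeholder.

        import cv2
        import numpy as np

        frame = cv2.imread("frame.png")                     # placeholder image
        region = frame[int(frame.shape[0] * 0.75):]         # bottom quarter, where subtitles sit

        # Dominant value per channel from 32-bin histograms (OpenCV images are BGR).
        dominant = []
        for ch in range(3):
            hist = cv2.calcHist([region], [ch], None, [32], [0, 256]).ravel()
            dominant.append(int(np.argmax(hist)) * 8 + 4)   # centre of the strongest bin
        b, g, r = dominant

        # Suggest the complementary color of the dominant background color.
        print("suggested subtitle color (B, G, R):", (255 - b, 255 - g, 255 - r))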

  2. Development of video probe system for inspection of feeder pipe support in calandria reactor

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Lee, Nam Ho; Choi, Young Soo

    2000-07-01

    There are 760 feeder pipes, which are connected to the inlets and outlets of the 380 pressure tube channels on the front face of the calandria, in the CANDU-type reactor of the Wolsung Nuclear Power Plant. To meet ISI (In-Service Inspection) and PSI (Post-Service Inspection) requirements, maintenance activities of measuring the thickness of the curved part of the feeder pipes and inspecting the feeder pipe support area within the calandria are needed to ensure continued reliable operation of the nuclear power plant. An ultrasonic probe is used to measure the thickness of the curved part of the feeder pipe; however, workers are exposed to radiation during the measurement period. In addition, it is impossible to inspect the feeder pipe support area thoroughly because of its narrow and confined accessibility: the inspection space between the pressure tube channels is less than 100 mm and the pipes in the feeder pipe support area are congested. Workers involved in inspecting the feeder pipe support area are also at risk of high-level radiation exposure. Concerns that the sliding home, which allows the feeder pipe connected to a pressure tube channel to move smoothly as the pressure tube expands and contracts in its axial direction, may stick to the feeder pipe support and some of the structural components have made it necessary to develop a video inspection probe system with narrow and confined accessibility to observe and inspect the feeder pipe support area more closely. Using the video inspection probe system, it is possible to inspect and repair abnormalities of the feeder pipe supports connected to the pressure tube channels of the calandria more accurately and quantitatively than with the naked eye. This will contribute to ensuring the safety of the CANDU-type nuclear power plant.

  3. Real-time video demonstration of holographic disk data storage system

    Science.gov (United States)

    Hwang, Euiseok; Yoon, Pilsang; Kim, Nakyoung; Kang, Byongbok; Kim, Kunyul; Park, Jooyoun; Park, Jongyong

    2006-06-01

    A holographic data storage prototype fully integrated with electronics for video demonstration has been developed. It can record data in several tracks of a photopolymer disk and access them arbitrarily during the retrieval process from the continuously rotating disk. An embedded controller operates all of the optomechanical components of the prototype automatically, and the electronic parts conduct adaptive readout of channel data at up to 55 megabits per second. For real-time video demonstration, video streams are recorded in four concentric circular tracks of the disk. Each recording spot contains about one hundred pages with angle multiplexing. The eleven-minute video data are successfully reconstructed from the prototype.

  4. Scanning laser video camera/ microscope

    Science.gov (United States)

    Wang, C. P.; Bow, R. T.

    1984-10-01

    A laser scanning system capable of scanning at standard video rate has been developed. The scanning mirrors, circuit design and system performance, as well as its applications to video cameras and ultra-violet microscopes, are discussed.

  5. TV Recommendation and Personalization Systems: Integrating Broadcast and Video On demand Services

    Directory of Open Access Journals (Sweden)

    SOARES, M.

    2014-02-01

    Full Text Available The expansion of digital television and the convergence between conventional broadcasting and television over IP contributed to the gradual increase in the number of available channels and on-demand video content. Moreover, the widespread use of mobile devices such as laptops, smartphones, and tablets in everyday activities resulted in a shift of the traditional television viewing paradigm from the couch to everywhere, anytime, from any device. Although this new scenario enables a great improvement in viewing experiences, it also brings new challenges given the overload of information that the viewer faces. Recommendation systems stand out as a possible solution to help a watcher select the content that best fits his/her preferences. This paper describes a web-based system that helps the user navigate broadcast and online television content by implementing recommendations based on collaborative and content-based filtering. The algorithms developed estimate the similarity between items and users and predict the rating that a user would assign to a particular item (television program, movie, etc.). To enable interoperability between different systems, programs' characteristics (title, genre, actors, etc.) are stored according to the TV-Anytime standard. The set of recommendations produced is presented through a web application that allows the user to interact with the system based on the obtained recommendations.
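
    The collaborative-filtering step can be sketched as follows in Python: predict a user's rating for a program from item-item cosine similarity over co-rated entries. The toy rating matrix is invented, and the described system's actual hybrid weighting of collaborative and content-based scores is not reproduced.

        import numpy as np

        # Rows = users, columns = items (TV programs); 0 means "not rated".
        ratings = np.array([[5, 4, 0, 1],
                            [4, 5, 1, 0],
                            [1, 0, 5, 4],
                            [0, 1, 4, 5]], dtype=float)

        def cosine(a, b):
            mask = (a > 0) & (b > 0)                        # only co-rated entries
            if not mask.any():
                return 0.0
            return float(a[mask] @ b[mask]) / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]) + 1e-9)

        def predict(user, item):
            sims, vals = [], []
            for other in range(ratings.shape[1]):
                if other != item and ratings[user, other] > 0:
                    sims.append(cosine(ratings[:, item], ratings[:, other]))
                    vals.append(ratings[user, other])
            return float(np.dot(sims, vals)) / (np.sum(np.abs(sims)) + 1e-9)

        print("predicted rating of item 2 for user 0: %.2f" % predict(0, 2))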

  6. Realization of a video-rate distributed aperture millimeter-wave imaging system using optical upconversion

    Science.gov (United States)

    Schuetz, Christopher; Martin, Richard; Dillon, Thomas; Yao, Peng; Mackrides, Daniel; Harrity, Charles; Zablocki, Alicia; Shreve, Kevin; Bonnett, James; Curt, Petersen; Prather, Dennis

    2013-05-01

    Passive imaging using millimeter waves (mmWs) has many advantages and applications in the defense and security markets. All terrestrial bodies emit mmW radiation, and these wavelengths are able to penetrate smoke, fog/clouds/marine layers, and even clothing. One primary obstacle to imaging in this spectrum is that longer wavelengths require larger apertures to achieve the resolutions desired for many applications. Accordingly, lens-based focal plane systems and scanning systems tend to require large-aperture optics, which increase the size and weight of such systems beyond what can be supported by many applications. To overcome this limitation, a distributed aperture detection scheme is used in which the effective aperture size can be increased without the associated volumetric increase in imager size. This distributed aperture system is realized through conversion of the received mmW energy into sidebands on an optical carrier. This conversion serves, in essence, to scale the mmW sparse aperture array signals onto a complementary optical array. The sidebands are subsequently stripped from the optical carrier and recombined to provide a real-time snapshot of the mmW signal. Using this technique, we have constructed a real-time, video-rate imager operating at 75 GHz. A distributed aperture consisting of 220 upconversion channels is used to realize 2.5k pixels with passive sensitivity. Details of the construction and operation of this imager as well as field testing results will be presented herein.

  7. Performance evaluation of real-time video content analysis systems in the CANDELA project

    Science.gov (United States)

    Desurmont, Xavier; Wijnhoven, Rob; Jaspers, Egbert; Caignart, Olivier; Barais, Mike; Favoreel, Wouter; Delaigle, Jean-Francois

    2005-02-01

    The CANDELA project aims at realizing a system for real-time image processing in traffic and surveillance applications. The system performs segmentation, labels the extracted blobs, and tracks their movements in the scene. Performance evaluation of such a system is a major challenge since no standard methods exist and the criteria for evaluation are highly subjective. This paper proposes a performance evaluation approach for video content analysis (VCA) systems and identifies the involved research areas. For these areas we give an overview of the state of the art in performance evaluation and introduce a classification into different semantic levels. The proposed evaluation approach compares the results of the VCA algorithm with a ground-truth (GT) counterpart, which contains the desired results. Both the VCA results and the ground truth comprise description files that are formatted in MPEG-7. The evaluation is required to provide an objective performance measure and a means to choose between competing methods. In addition, it enables algorithm developers to measure the progress of their work at the different levels in the design process. From these requirements and the state-of-the-art overview we conclude that standardization is highly desirable, for which many research topics still need to be addressed.

  8. A Multitarget Tracking Video System Based on Fuzzy and Neuro-Fuzzy Techniques

    Directory of Open Access Journals (Sweden)

    Javier I. Portillo

    2005-08-01

    Full Text Available Automatic surveillance of the airport surface is one of the core components of advanced surface movement, guidance, and control systems (A-SMGCS). This function is in charge of the automatic detection, identification, and tracking of all interesting targets (aircraft and relevant ground vehicles) in the airport movement area. This paper presents a novel approach for object tracking based on sequences of video images. A fuzzy system has been developed to weigh update decisions both for the trajectories and the shapes estimated for targets from the image regions extracted from the images. The advantages of this approach are robustness, flexibility in the design to adapt to different situations, and efficiency for operation in real time, avoiding combinatorial enumeration. Results obtained in representative ground operations show the system's capability to solve complex scenarios and improve tracking accuracy. Finally, an automatic procedure, based on neuro-fuzzy techniques, has been applied in order to obtain a set of rules from representative examples. Validation of the learned system shows its capability to learn suitable tracker decisions.

  9. A real-time 3D video tracking system for monitoring primate groups.

    Science.gov (United States)

    Ballesta, S; Reymond, G; Pozzobon, M; Duhamel, J-R

    2014-08-30

    To date, assessing the solitary and social behaviors of laboratory primate colonies relies on time-consuming manual scoring methods. Here, we describe a real-time multi-camera 3D tracking system developed to measure the behavior of socially-housed primates. Their positions are identified using non-invasive color markers such as plastic collars, thus also allowing colored objects to be tracked and their usage measured. Compared to traditional manual ethological scoring, we show that this system can reliably evaluate solitary behaviors (foraging, solitary resting, toy usage, locomotion) as well as spatial proximity with peers, which is considered a good proxy of their social motivation. Compared to existing video-based commercial systems currently available to measure animal activity, this system offers many possibilities (real-time data, large volume coverage, multiple animal tracking) at a lower hardware cost. Quantitative behavioral data of animal groups can now be obtained automatically over very long periods of time, thus opening new perspectives in particular for studying the neuroethology of social behavior in primates. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Comparison Of Processing Time Of Different Size Of Images And Video Resolutions For Object Detection Using Fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Yogesh Yadav

    2017-01-01

    Full Text Available Object detection with small computation cost and processing time is a necessity in diverse domains such as traffic analysis, security cameras, video surveillance, etc. With current advances in technology and the decrease in prices of image sensors and video cameras, the resolution of captured images is more than 1 MP with higher frame rates. This implies a considerable data size that needs to be processed in a very short period of time when real-time operation and data processing are needed. Real-time video processing with high performance can be achieved with GPU technology. The aim of this study is to evaluate the influence of different image and video resolutions on the processing time, the number of detected objects and the accuracy of the detected objects. The MOG2 algorithm is used for processing video input data with the GPU module. A fuzzy inference system is used to evaluate the accuracy of the number of detected objects and to show the difference between CPU and GPU computing methods.
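
    A minimal sketch of the kind of comparison described above, assuming OpenCV is installed and `video.mp4` is a placeholder input: the same MOG2 background subtractor is timed at several resolutions. The GPU path and the fuzzy evaluation of accuracy are not reproduced.

```python
# Time MOG2 background subtraction per frame at different resolutions (sketch).
import time
import cv2

def measure_mog2(path, scale):
    cap = cv2.VideoCapture(path)          # "video.mp4" below is hypothetical
    subtractor = cv2.createBackgroundSubtractorMOG2()
    frames, start = 0, time.perf_counter()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, None, fx=scale, fy=scale)
        mask = subtractor.apply(frame)
        # Number of connected components in the foreground mask (including the
        # background label) gives a rough proxy for detected objects.
        n_labels, _ = cv2.connectedComponents((mask > 200).astype("uint8"))
        frames += 1
    cap.release()
    return (time.perf_counter() - start) / max(frames, 1)

for scale in (1.0, 0.5, 0.25):
    print(scale, measure_mog2("video.mp4", scale), "s/frame")
```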

  11. 78 FR 13139 - In the Matter of Digital Video Systems, Inc., Geocom Resources, Inc., and GoldMountain...

    Science.gov (United States)

    2013-02-26

    ... From the Federal Register Online via the Government Publishing Office. SECURITIES AND EXCHANGE COMMISSION. In the Matter of Digital Video Systems, Inc., Geocom Resources, Inc., and GoldMountain Exploration... of Suspension of Trading. It appears to the Securities and Exchange Commission that there is a lack of...

  12. Using Video Streaming: Setting Up a Cheap System for Distributing Information to Teachers and Students

    Science.gov (United States)

    McNeal, Thomas, Jr.; Kearns, Landon

    2005-01-01

    Video streaming can be a very useful tool for educators. It is now possible for a school's technical specialist or classroom teacher to create a streaming server with tools that are available in many classrooms. In this article we describe how we created our video streamer using free software, older computers, and borrowed hardware. The system…

  13. Design and Implementation of a Web-GIS-Based Video Surveillance System Application (Desain dan Implementasi Aplikasi Video Surveillance System Berbasis Web-SIG)

    Directory of Open Access Journals (Sweden)

    I M.O. Widyantara

    2015-06-01

    Full Text Available Video surveillance system (VSS) is an IP-camera-based monitoring system. A VSS is implemented as live streaming and serves to observe and monitor a site remotely. Typically, the IP cameras in a VSS come with a management software application. However, for ad hoc applications, where the user wants to manage the VSS independently, such management software becomes ineffective. When an IP-camera installation is spread over a large area, it is difficult for an administrator to describe the location of each IP camera, and monitoring the area covered by the IP cameras also becomes more difficult. Considering these shortcomings of VSS, this paper proposes a VSS application for easy monitoring of each IP camera. The proposed application integrates the concept of a web-based geographical information system with the Google Maps API (Web-GIS). The VSS application is built with smart features including an IP-camera map, live streaming of events, information in the info window, and marker clustering. Test results showed that the application is able to display all of the built features well.

  14. An Automatic Gastrointestinal Polyp Detection System in Video Endoscopy Using Fusion of Color Wavelet and Convolutional Neural Network Features.

    Science.gov (United States)

    Billah, Mustain; Waheed, Sajjad; Rahman, Mohammad Motiur

    2017-01-01

    Gastrointestinal polyps are considered to be the precursors of cancer development in most of the cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most used diagnostic modality for gastrointestinal polyps. But, because it is an operator-dependent procedure, several human factors can lead to misdetection of polyps. Computer-aided polyp detection can reduce the polyp miss-detection rate and assist doctors in finding the most important regions to pay attention to. In this paper, an automatic system has been proposed as a support to gastrointestinal polyp detection. This system captures the video streams from endoscopic video and, in the output, it shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined, and then used to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms the state-of-the-art methods, gaining accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%.
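
    The feature-fusion idea can be sketched as follows, assuming PyWavelets, NumPy and scikit-learn; `cnn_features` is a placeholder standing in for a pretrained CNN feature extractor, and `frames`/`labels` are hypothetical training data. This illustrates the fusion-plus-SVM pipeline, not the authors' implementation.

```python
# Sketch: fuse color-wavelet statistics with CNN features and train an SVM.
import numpy as np
import pywt
from sklearn.svm import LinearSVC

def color_wavelet_features(frame):
    """Per-channel 2D wavelet decomposition of an RGB frame, summarized by simple statistics."""
    feats = []
    for ch in range(3):
        cA, (cH, cV, cD) = pywt.dwt2(frame[:, :, ch].astype(float), "db2")
        for band in (cA, cH, cV, cD):
            feats += [band.mean(), band.std()]
    return np.array(feats)

def cnn_features(frame):
    # Placeholder: activations from a pretrained CNN would go here in practice.
    return np.zeros(128)

def fused_features(frame):
    return np.concatenate([color_wavelet_features(frame), cnn_features(frame)])

def train_polyp_classifier(frames, labels):
    """frames: list of RGB arrays; labels: 1 = polyp, 0 = no polyp (hypothetical)."""
    X = np.stack([fused_features(f) for f in frames])
    clf = LinearSVC()
    clf.fit(X, labels)
    return clf
```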

  15. The Effect of the Instructional Media Based on Lecture Video and Slide Synchronization System on Statistics Learning Achievement

    Directory of Open Access Journals (Sweden)

    Partha Sindu I Gede

    2018-01-01

    Full Text Available The purpose of this study was to determine the effect of the use of instructional media based on a lecture video and slide synchronization system on the statistics learning achievement of students in the PTI department. The benefit of this research is to help lecturers in the instructional process improve students' learning achievements, leading to better learning outcomes. Students can use instructional media created with the lecture video and slide synchronization system to support more interactive self-learning activities. Students can conduct learning activities more efficiently and conducively because the synchronized lecture video and slides assist them in the learning process. The population of this research was all students of semester VI (six) majoring in Informatics Engineering Education. The sample of the research was the students of classes VI B and VI D of the academic year 2016/2017. The type of research used in this study was a quasi-experiment, with a posttest-only, non-equivalent control group design. The result of this research concluded that there was a significant effect of applying learning media based on the lecture video and slide synchronization system on the statistics learning results in the PTI department.

  16. An Automatic Gastrointestinal Polyp Detection System in Video Endoscopy Using Fusion of Color Wavelet and Convolutional Neural Network Features

    Science.gov (United States)

    Waheed, Sajjad; Rahman, Mohammad Motiur

    2017-01-01

    Gastrointestinal polyps are considered to be the precursors of cancer development in most of the cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most used diagnostic modality for gastrointestinal polyps. But, because it is an operator-dependent procedure, several human factors can lead to misdetection of polyps. Computer-aided polyp detection can reduce the polyp miss-detection rate and assist doctors in finding the most important regions to pay attention to. In this paper, an automatic system has been proposed as a support to gastrointestinal polyp detection. This system captures the video streams from endoscopic video and, in the output, it shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined, and then used to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms the state-of-the-art methods, gaining accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%. PMID:28894460

  17. An Automatic Gastrointestinal Polyp Detection System in Video Endoscopy Using Fusion of Color Wavelet and Convolutional Neural Network Features

    Directory of Open Access Journals (Sweden)

    Mustain Billah

    2017-01-01

    Full Text Available Gastrointestinal polyps are considered to be the precursors of cancer development in most of the cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most used diagnostic modality for gastrointestinal polyps. But, because it is an operator-dependent procedure, several human factors can lead to misdetection of polyps. Computer-aided polyp detection can reduce the polyp miss-detection rate and assist doctors in finding the most important regions to pay attention to. In this paper, an automatic system has been proposed as a support to gastrointestinal polyp detection. This system captures the video streams from endoscopic video and, in the output, it shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined, and then used to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms the state-of-the-art methods, gaining accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%.

  18. Movement control during aspiration with different injection systems via video monitoring-an in vitro model.

    Science.gov (United States)

    Kämmerer, P W; Schneider, D; Pacyna, A A; Daubländer, M

    2017-01-01

    The aim of the present study was an evaluation of movement during double aspiration with different manual syringes and one computer-controlled local anesthesia delivery system (C-CLAD). With five different devices (two disposable syringes (2 and 5 ml), two aspirating syringes (active, passive), and one C-CLAD), a simulation of double aspiration in a phantom model was conducted. Two experienced and two inexperienced test persons carried out double aspiration with the injection systems at the right and left phantom mandibles at three different inclination angles (n = 24 × 5 × 2 for each system). 3D divergences of the needle between aspiration procedures (mm) were measured with two video cameras. An average movement of 2.85 mm (SD 1.63) was seen for the 2-ml disposable syringe, 2.36 mm (SD 0.86) for the 5-ml syringe, 2.45 mm (SD 0.9) for the active-aspirating syringe, 2.01 mm (SD 0.7) for the passive-aspirating syringe, and 0.91 mm (SD 0.63) for the C-CLAD. The movement was significantly less for the C-CLAD compared to the other systems, the movement of the needle in the soft tissue was significantly less for the C-CLAD compared to the other systems, and less movement of the syringe was seen for the C-CLAD in comparison with the manual systems. Launching the aspiration with a foot pedal in computer-assisted anesthesia leads to minor movement. To solve the problem of movement during aspiration, with its possibly increased false-negative results, a C-CLAD seems to be favorable.

  19. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  20. SMART VIDEO SURVEILLANCE SYSTEM FOR VEHICLE DETECTION AND TRAFFIC FLOW CONTROL

    Directory of Open Access Journals (Sweden)

    A. A. SHAFIE

    2011-08-01

    Full Text Available Traffic signal light can be optimized using vehicle flow statistics obtained by the Smart Video Surveillance Software (SVSS). This research focuses on an efficient traffic control system that detects and counts vehicle numbers at various times and locations. At present, one of the biggest problems in the main city of any country is the traffic jam during office hours and office break hours. Sometimes it can be seen that the traffic signal green light is still ON even though there is no vehicle coming. Similarly, it is also observed that long queues of vehicles are waiting even though the road is empty, due to traffic signal light selection without proper investigation of vehicle flow. This can be handled by adjusting the vehicle passing time implemented by our developed SVSS. A number of experimental results on vehicle flows are discussed graphically in this research in order to test the feasibility of the developed system. Finally, an adaptive background model is proposed in SVSS in order to successfully detect target objects such as motorbikes, cars, buses, etc.

  1. Real-Time Video Surveillance System for Traffic Management with Background Subtraction Using Codebook Model and Occlusion Handling

    Directory of Open Access Journals (Sweden)

    Moutakki Zakaria

    2017-12-01

    Full Text Available The scope of this paper is a video surveillance system constituted of three principal modules: a segmentation module, vehicle classification, and vehicle counting. The segmentation is based on background subtraction using the Codebook method. This step aims to define the regions of interest associated with vehicles. To classify vehicles by type, our system uses histograms of oriented gradients followed by a support vector machine. Counting and tracking vehicles is the last task to be performed. The presence of partial occlusion decreases the accuracy of vehicle segmentation and classification, which directly impacts the robustness of a video surveillance system. Therefore, a novel method to handle partial occlusions based on the vehicle classification process has been developed. The results achieved show that the accuracy of vehicle counting and classification exceeds the accuracy measured in some existing systems.
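
    The classification stage (HOG descriptors of segmented vehicle patches fed to an SVM) can be sketched as below, assuming OpenCV and scikit-learn; `patches` and `labels` are hypothetical training crops and class labels. The codebook background subtraction and occlusion handling are not reproduced here.

```python
# Sketch: HOG + SVM vehicle-type classification of segmented patches.
import cv2
import numpy as np
from sklearn.svm import SVC

# winSize, blockSize, blockStride, cellSize, nbins (standard people-detector geometry).
hog = cv2.HOGDescriptor((64, 128), (16, 16), (8, 8), (8, 8), 9)

def describe(patch):
    """HOG descriptor of a grayscale patch resized to the HOG window size."""
    patch = cv2.resize(patch, (64, 128))
    return hog.compute(patch).ravel()

def train_vehicle_classifier(patches, labels):
    """patches: grayscale crops from segmentation; labels: e.g. 0=car, 1=bus, 2=truck."""
    X = np.stack([describe(p) for p in patches])
    clf = SVC(kernel="linear")
    clf.fit(X, labels)
    return clf

def classify(clf, patch):
    return clf.predict(describe(patch).reshape(1, -1))[0]
```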

  2. A Programmable Video Platform and Its Application Mapping Framework Using the Target Application's SystemC Models

    Directory of Open Access Journals (Sweden)

    Kim Daewoong

    2011-01-01

    Full Text Available HD video applications can be represented with multiple tasks consisting of tightly coupled multiple threads. Each task requires massive computation, and their communication can be categorized as asynchronous distributed small data and large streaming data transfers. In this paper, we propose a high performance programmable video platform that consists of four processing element (PE) clusters. Each PE cluster runs a task in the video application with RISC cores, a hardware operating system kernel (HOSK), and task-specific accelerators. PE clusters are connected with two separate point-to-point networks: one for asynchronous distributed controls and the other for heavy streaming data transfers among the tasks. Furthermore, we developed an application mapping framework, with which parallel executable codes can be obtained from a manually developed SystemC model of the target application without knowing the detailed architecture of the video platform. To show the effectiveness of the platform and its mapping framework, we also present mapping results for an H.264/AVC 720p decoder/encoder and a VC-1 720p decoder with 30 fps, assuming that the platform operates at 200 MHz.

  3. Real-Time Video Surveillance System for Traffic Management with Background Subtraction Using Codebook Model and Occlusion Handling

    OpenAIRE

    Moutakki Zakaria; Ouloul Imad Mohamed; Afdel Karim; Amghar Abdellah

    2017-01-01

    The scope of this paper is a video surveillance system constituted of three principal modules, segmentation module, vehicle classification and vehicle counting. The segmentation is based on a background subtraction by using the Codebooks method. This step aims to define the regions of interest associated with vehicles. To classify vehicles in their type, our system uses the histograms of oriented gradient followed by support vector machine. Counting and tracking vehicles will be the last task...

  4. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled evaluation of video image vehicle detection system (VIVDS) products and software upgrades to existing products based on a list of conditions that might be diffic...

  5. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available, and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  6. A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments.

    Science.gov (United States)

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-07-02

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. First, the training system includes data collection using a depth camera, feature extraction, and training for each activity via Hidden Markov Models. Second, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates compared with the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.
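
    The per-activity HMM training and recognition step can be sketched with the hmmlearn package as follows. The `sequences` dictionary of skeleton-feature sequences is hypothetical, and the feature extraction from depth silhouettes is not shown.

```python
# Sketch: one Gaussian HMM per activity; recognition picks the best-scoring model.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_activity_models(sequences, n_states=5):
    """sequences: {activity_name: [array of shape (frames, features), ...]} (hypothetical)."""
    models = {}
    for activity, seqs in sequences.items():
        X = np.concatenate(seqs)            # stack all frames of this activity
        lengths = [len(s) for s in seqs]    # per-sequence lengths for fitting
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[activity] = model
    return models

def recognize(models, sequence):
    """Return the activity whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda a: models[a].score(sequence))
```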

  7. Real-Time FPGA-Based Object Tracker with Automatic Pan-Tilt Features for Smart Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-05-01

    Full Text Available The design of smart video surveillance systems is an active research field among the computer vision community because of their ability to perform automatic scene analysis by selecting and tracking the objects of interest. In this paper, we present the design and implementation of an FPGA-based standalone working prototype system for real-time tracking of an object of interest in live video streams for such systems. In addition to real-time tracking of the object of interest, the implemented system is also capable of providing purposive automatic camera movement (pan-tilt) in the direction determined by the movement of the tracked object. The complete system, including the camera interface, DDR2 external memory interface controller, designed object tracking VLSI architecture, camera movement controller and display interface, has been implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA board. Our proposed, designed and implemented system robustly tracks the target object present in the scene in real time for standard PAL (720 × 576) resolution color video and automatically controls camera movement in the direction determined by the movement of the tracked object.

  8. Applying the systems engineering approach to video over IP projects : workshop.

    Science.gov (United States)

    2011-12-01

    In 2009, the Texas Transportation Institute produced for the Texas Department of Transportation a document called Video over IP Design Guidebook. This report summarizes an implementation of that project in the form of a workshop. The workshop was...

  9. An Intelligent Active Video Surveillance System Based on the Integration of Virtual Neural Sensors and BDI Agents

    Science.gov (United States)

    Gregorio, Massimo De

    In this paper we present an intelligent active video surveillance system currently adopted in two different application domains: railway tunnels and outdoor storage areas. The system takes advantage of the integration of Artificial Neural Networks (ANN) and symbolic Artificial Intelligence (AI). This hybrid system is formed by virtual neural sensors (implemented as WiSARD-like systems) and BDI agents. The coupling of virtual neural sensors with symbolic reasoning for interpreting their outputs makes this approach both very light from a computational and hardware point of view, and rather robust in performance. The system works on different scenarios and in difficult light conditions.

  10. Using a high-definition stereoscopic video system to teach microscopic surgery

    Science.gov (United States)

    Ilgner, Justus; Park, Jonas Jae-Hyun; Labbé, Daniel; Westhofen, Martin

    2007-02-01

    Introduction: While there is an increasing demand for minimally invasive operative techniques in Ear, Nose and Throat surgery, these operations are difficult to learn for junior doctors and demanding to supervise for experienced surgeons. The motivation for this study was to integrate high-definition (HD) stereoscopic video monitoring in microscopic surgery in order to facilitate teaching interaction between senior and junior surgeon. Material and methods: We attached a 1280×1024 HD stereo camera (TrueVision Systems Inc., Santa Barbara, CA, USA) to an operating microscope (Zeiss ProMagis, Zeiss Co., Oberkochen, Germany), whose images were processed online by a PC workstation consisting of a dual Intel® Xeon® CPU (Intel Co., Santa Clara, CA). The live image was displayed by two LCD projectors at 1280×768 pixels on a 1.25 m rear-projection screen through polarized filters. While the junior surgeon performed the surgical procedure based on the displayed stereoscopic image, all other participants (senior surgeon, nurse and medical students) shared the same stereoscopic image from the screen. Results: With the basic setup being performed only once on the day before surgery, fine adjustments required about 10 minutes extra during the operation schedule, which fitted into the time interval between patients and thus did not prolong operation times. As all relevant features of the operative field were demonstrated on one large screen, four major effects were obtained: A) Stereoscopy facilitated orientation for the junior surgeon as well as for medical students. B) The stereoscopic image served as an unequivocal guide for the senior surgeon to demonstrate the next surgical steps to the junior colleague. C) The theatre nurse shared the same image, anticipating the next instruments which were needed. D) Medical students instantly share the information given by all staff and the image, thus avoiding the need for an extra teaching session. Conclusion: High definition

  11. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  12. The research on binocular stereo video imaging and display system based on low-light CMOS

    Science.gov (United States)

    Xie, Ruobing; Li, Li; Jin, Weiqi; Guo, Hong

    2015-10-01

    Low-light night-vision helmets commonly equip the binocular viewer with image intensifiers. Such equipment not only provides night-vision ability, but also a sense of stereo vision, achieving better perception and understanding of the visual field. However, since the image intensifier is designed for direct observation, it is difficult to apply modern image processing technology. As a result, developing digital video technology for night vision is of great significance. In this paper, we design a low-light night-vision helmet with a digital imaging device. It consists of three parts: a set of two low-illumination CMOS cameras, a binocular OLED micro display and an image processing PCB. Stereopsis is achieved through the binocular OLED micro display. We choose the Speeded-Up Robust Features (SURF) algorithm for image registration. Based on the image matching information and the cameras' calibration parameters, disparity can be calculated in real time. We then elaborately derive the constraints of binocular stereo display. The sense of stereo vision is obtained by dynamically adjusting the content of the binocular OLED micro display. There is sufficient space for function extensions in our system. The performance of this low-light night-vision helmet can be further enhanced in combination with HDR technology and image fusion technology, etc.
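
    A rough sketch of the registration/disparity step, assuming OpenCV and already-rectified grayscale frames from the two cameras. The paper uses SURF; because SURF sits in OpenCV's non-free contrib module, ORB is substituted here purely for illustration.

```python
# Sketch: feature matching between left/right frames and a median disparity estimate.
import cv2
import numpy as np

def match_and_disparity(left, right, max_matches=100):
    orb = cv2.ORB_create()
    kp_l, des_l = orb.detectAndCompute(left, None)
    kp_r, des_r = orb.detectAndCompute(right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)[:max_matches]
    # For rectified cameras, disparity is the horizontal offset of matched points.
    disparities = [kp_l[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0] for m in matches]
    return float(np.median(disparities)) if disparities else 0.0
```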

  13. Comprehensive depth estimation algorithm for efficient stereoscopic content creation in three-dimensional video systems

    Science.gov (United States)

    Xu, Huihui; Jiang, Mingyan

    2015-07-01

    Two-dimensional to three-dimensional (3-D) conversion in 3-D video applications has attracted great attention as it can alleviate the problem of stereoscopic content shortage. Depth estimation is an essential part of this conversion, since the depth accuracy directly affects the quality of a stereoscopic image. In order to generate a perceptually reasonable depth map, a comprehensive depth estimation algorithm that considers the scenario type is presented. Based on the human visual system mechanism, which is sensitive to changes in the scenario, this study classifies scenarios into four classes according to the relationship between the movements of the camera and the object, and then applies different strategies on the basis of the scenario type. The proposed strategies efficiently extract the depth information from the different scenarios. In addition, the depth generation method for a scenario in which there is no motion, neither of the object nor of the camera, is also suitable for a single image. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and providing a realistic visual experience.

  14. Ultrahigh-definition color video camera system with 4K-scanning lines

    Science.gov (United States)

    Mitani, Kohji; Sugawara, Masayuki; Shimamoto, Hiroshi; Yamashita, Takayuki; Okano, Fumio

    2003-05-01

    An experimental ultrahigh-definition color video camera system with 7680(H) × 4320(V) pixels has been developed using four 8-million-pixel CCDs. The 8-million-pixel CCD, with a progressive scanning rate of 60 frames per second, has 4046(H) × 2048(V) effective imaging pixels, each of which is 8.4 μm². We applied the four-imager pickup method to increase the camera's resolution. This involves attaching four CCDs to a special color-separation prism. Two CCDs are used for the green image, and the other two are used for red and blue. The spatial image sampling pattern of these CCDs relative to the optical image is equivalent to one with 32 million pixels in the Bayer-pattern color filter. The prototype camera attains a limiting resolution of more than 2700 TV lines both horizontally and vertically, which is higher than that of an 8-million-pixel CCD camera. The sensitivity of the camera is 2000 lux, F 2.8 at approx. 50 dB of dark-noise level in the HDTV format. Its other specifications are a dynamic range of 200%, a power consumption of about 600 W and a weight, with lens, of 76 kg.

  15. Development of a large-screen high-definition laser video projection system

    Science.gov (United States)

    Clynick, Tony J.

    1991-08-01

    A prototype laser video projector which uses electronic, optical, and mechanical means to project a television picture is described. With the primary goal of commercial viability, the price/performance ratio of the chosen means is critical. The fundamental requirement has been to achieve high-brightness, high-definition images of at least movie-theater size, at a cost comparable with other existing large-screen video projection technologies, while having the opportunity of developing and exploiting the unique properties of the laser-projected image, such as its infinite depth of field. Two argon lasers are used in combination with a dye laser to achieve a range of colors which, despite not being identical to those of a CRT, prove to be subjectively acceptable. Acousto-optic modulation in combination with a rotary polygon scanner, digital video line stores, novel specialized electro-optics, and a galvanometric frame scanner form the basis of the projection technique, achieving a 30 MHz video bandwidth, high-definition scan rates (1125/60 and 1250/50), high contrast ratio, and good optical efficiency. Auditorium projection of HDTV pictures wider than 20 meters is possible. Applications including 360° projection and 3-D video provide further scope for exploitation of the HD laser video projector.

  16. Videos & Tools: MedlinePlus

    Science.gov (United States)

    Videos & Tools (https://medlineplus.gov/videosandcooltools.html): watch health videos on topics such as anatomy, body systems, and ...

  17. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Directory of Open Access Journals (Sweden)

    Nakamura Satoshi

    2004-01-01

    Full Text Available We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking of face motion in the video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques with the translated voice synthesis technique, automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  18. Digital Video Editing

    Science.gov (United States)

    McConnell, Terry

    2004-01-01

    Monica Adams, head librarian at Robinson Secondary in Fairfax County, Virginia, states that librarians should have the technical knowledge to support projects related to digital video editing. The process of digital video editing and the cables, storage issues, and the computer system with software are described.

  19. Video-Based Big Data Analytics in Cyberlearning

    Science.gov (United States)

    Wang, Shuangbao; Kelly, William

    2017-01-01

    In this paper, we present a novel system, inVideo, for video data analytics, and its use in transforming linear videos into interactive learning objects. InVideo is able to analyze video content automatically without the need for initial viewing by a human. Using a highly efficient video indexing engine we developed, the system is able to analyze…

  20. An innovative approach for assessing the ergonomic risks of lifting tasks using a video motion capture system

    OpenAIRE

    Wilson, Rhoda M.

    2006-01-01

    Human Systems Integration Report. Low back pain (LBP) and work-related musculoskeletal disorders (WMSDs) can lead to employee absenteeism, sick leave, and permanent disability. Over the years, much work has been done in examining physical exposure to ergonomic risks. The current research presents a new approach for assessing WMSD risk during lifting-related tasks that combines traditional observational methods with video recording methods. One particular application area, the Future Com...

  1. Design and Implementation of Dual-Mode Wireless Video Monitoring System

    Directory of Open Access Journals (Sweden)

    BAO Song-Jian

    2014-10-01

    Full Text Available Dual-mode wireless video transmission has two major problems. First, the time-delay difference between the two links brings about asynchronous reception and decoding frame errors; second, the mismatch in bandwidth between the two networks causes a scheduling problem. In order to solve these two problems, a TD-SCDMA/CDMA2000 1x dual-mode wireless video transmission design method is proposed. To address the decoding frame errors, the design adds frame identification and packet preprocessing at the sending end and synchronized combination at the receiving end. To address the scheduling problem, a cooperative wireless-channel and video-data transmission scheduling management algorithm is proposed in the design.

  2. Chaos based video encryption using maps and Ikeda time delay system

    Science.gov (United States)

    Valli, D.; Ganesan, K.

    2017-12-01

    Chaos-based cryptosystems are an efficient way to achieve fast and highly secure multimedia encryption because of their elegant features, such as randomness, mixing, ergodicity, and sensitivity to initial conditions and control parameters. In this paper, two chaos-based cryptosystems are proposed: one based on a higher-dimensional 12D chaotic map and the other on the Ikeda delay differential equation (DDE), both suitable for designing a real-time secure symmetric video encryption scheme. These encryption schemes employ a substitution box (S-box) to diffuse the relationship between pixels of the plain video and the cipher video, along with diffusion of the current input pixel with the previous cipher pixel, called cipher block chaining (CBC). The proposed method enhances the robustness against statistical, differential and chosen/known plaintext attacks. Detailed analysis is carried out in this paper to demonstrate the security and uniqueness of the proposed scheme.
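
    A much-simplified illustration of chaotic keystream generation with CBC-style pixel chaining is given below, using NumPy. The logistic map stands in for the paper's 12D map and Ikeda DDE, and the key parameters are placeholders; this shows the chaining idea only, not the proposed cryptosystem.

```python
# Sketch: logistic-map keystream XORed with pixels and chained with the previous cipher pixel.
import numpy as np

def logistic_keystream(length, x0=0.631, r=3.99):
    """Generate a byte keystream from logistic-map iterates (placeholder key x0)."""
    stream = np.empty(length, dtype=np.uint8)
    x = x0
    for i in range(length):
        x = r * x * (1.0 - x)
        stream[i] = int(x * 256) % 256
    return stream

def encrypt_frame(frame, key_x0=0.631):
    flat = frame.reshape(-1).astype(np.uint8)
    ks = logistic_keystream(flat.size, x0=key_x0)
    cipher = np.empty_like(flat)
    prev = 0
    for i, p in enumerate(flat):
        cipher[i] = (int(p) ^ int(ks[i]) ^ prev) & 0xFF   # chain with previous cipher pixel
        prev = int(cipher[i])
    return cipher.reshape(frame.shape)

def decrypt_frame(cipher, key_x0=0.631):
    flat = cipher.reshape(-1).astype(np.uint8)
    ks = logistic_keystream(flat.size, x0=key_x0)
    plain = np.empty_like(flat)
    prev = 0
    for i, c in enumerate(flat):
        plain[i] = (int(c) ^ int(ks[i]) ^ prev) & 0xFF
        prev = int(c)
    return plain.reshape(cipher.shape)
```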

  3. The optical design of ultra-short throw system for panel emitted theater video system

    Science.gov (United States)

    Huang, Jiun-Woei

    2015-07-01

    In the past decade, the display format, from HD (High Definition) through Full HD (1920×1080) to UHD (4K×2K), has mainly pushed the display industry in two directions: one is the liquid crystal display (LCD), from 10 inches to 100 inches and more, and the other is the projector. Although the LCD has been popularly used in the market, the investment required to produce such displays costs more money, with less consideration of environmental pollution and protection [1]. The projection system may be considered due to more viewing access, flexibility in location, energy saving and environmental protection issues. The topic is to design and fabricate a short-throw liquid crystal on silicon (LCoS) projection system for cinema. It provides a projection lens system, including a telecentric lens fitted to the emissive LCoS panel to collimate light and enlarge the field angle. The optical path is then guided by a symmetric lens. Light from the LCoS passes through the lens, hits an aspherical mirror and is reflected to form a low-distortion image on a blank wall or screen for home cinema. The throw ratio is less than 0.33.

  4. Video Laryngoscopic Techniques Associated with Intubation Success in a Helicopter Emergency Medical Service System.

    Science.gov (United States)

    Naito, Hiromichi; Guyette, Francis X; Martin-Gill, Christian; Callaway, Clifton W

    2016-01-01

    Video laryngoscopy (VL) is a technical adjunct to facilitate endotracheal intubation (ETI). VL also provides objective data for training and quality improvement, allowing evaluation of the technique and airway conditions during ETI. Previous studies of factors associated with ETI success or failure are limited by insufficient nomenclature, individual recall bias and self-report. We tested whether the covariates in prehospital VL recorded data were associated with ETI success. We also measured the association between time and clinical variables. A retrospective review was conducted in a non-physician-staffed helicopter emergency medical service system. ETI was typically performed using sedation and neuromuscular blockade under protocolized orders. We obtained process and outcome variables from digitally recorded VL data. Patient characteristics data were also obtained from the emergency medical service record and linked to the VL recorded data. The primary outcome was to identify VL covariates associated with successful ETI attempts. Among 304 VL recorded ETI attempts in 268 patients, ETI succeeded for 244 attempts and failed for 60 attempts (first-pass success rate, 82% and overall success rate, 94%). The laryngoscope blade tip usually moved from a shallow position in the oropharynx to the vallecula. In the multivariable logistic regression analysis, attempt time (p = 0.02; odds ratio [OR] 0.99) and Cormack-Lehane view were associated with ETI success; a poorer Cormack-Lehane view and a longer ETI attempt time were negatively associated with successful ETI attempts. An initially shallow blade tip position may be associated with a longer ETI time. VL is useful for measuring and describing multiple factors of ETI and can provide valuable data.

  5. Hardware/Software Issues for Video Guidance Systems: The Coreco Frame Grabber

    Science.gov (United States)

    Bales, John W.

    1996-01-01

    The F64 frame grabber is a high performance video image acquisition and processing board utilizing the TMS320C40 and TMS34020 processors. The hardware is designed for the ISA 16 bit bus and supports multiple digital or analog cameras. It has an acquisition rate of 40 million pixels per second, with a variable sampling frequency of 510 kHz to MO MHz. The board has a 4MB frame buffer memory expandable to 32 MB, and has a simultaneous acquisition and processing capability. It supports both VGA and RGB displays, and accepts all analog and digital video input standards.

  6. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

    The process of digital capture, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more reasonable to purchase, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk, and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques of backup and archiving the completed projects and files are also outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  7. An efficient HW and SW design of H.264 video compression, storage and playback on FPGA devices for handheld thermal imaging systems

    Science.gov (United States)

    Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih

    2017-05-01

    Video recording is an essential property of new generation military imaging systems. Playback of the stored video on the same device is also desirable as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storing, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks that are used in video encoding are also used during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed using FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding are done using the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using a 640×480, 25 fps thermal camera on a Cyclone V FPGA, which is Altera's lowest-power FPGA family, and consumes less than 40% of the Cyclone V 5CEFA7 FPGA resources on average.

  8. Smart Streaming for Online Video Services

    OpenAIRE

    Chen, Liang; Zhou, Yipeng; Chiu, Dah Ming

    2013-01-01

    Bandwidth consumption is a significant concern for online video service providers. Practical video streaming systems usually use some form of HTTP streaming (progressive download) to let users download the video at a faster rate than the video bitrate. Since users may quit before viewing the complete video, however, much of the downloaded video will be "wasted". To the extent that users' departure behavior can be predicted, we develop smart streaming that can be used to improve user QoE with ...

  9. Personalized video summarization based on group scoring

    OpenAIRE

    Darabi, K; G. Ghinea

    2014-01-01

    In this paper an expert-based model for the generation of personalized video summaries is suggested. The video frames are initially scored and annotated by multiple video experts. Thereafter, the scores for the video segments that have been assigned higher priorities by end users are upgraded. Considering the required summary length, the highest-scored video frames are inserted into a personalized final summary. For evaluation purposes, the video summaries generated by our system have...
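
    The scoring-and-selection idea can be illustrated with the toy sketch below: expert frame scores are averaged, segments preferred by the user are boosted, and the top-scored frames are kept up to the requested summary length. All data, the boost factor and the function name are hypothetical.

```python
# Toy sketch of expert scoring + user-preference boosting + top-k frame selection.
import numpy as np

def personalized_summary(expert_scores, segment_of_frame, preferred_segments,
                         summary_len, boost=1.5):
    """expert_scores: array [n_experts, n_frames]; segment_of_frame: array
    [n_frames] of segment ids; returns selected frame indices in temporal order."""
    scores = expert_scores.mean(axis=0)
    for seg in preferred_segments:
        scores[segment_of_frame == seg] *= boost     # upgrade user-preferred segments
    top = np.argsort(scores)[::-1][:summary_len]
    return np.sort(top)                              # keep temporal order in the summary

expert_scores = np.random.rand(3, 20)                # 3 experts, 20 frames (toy data)
segment_of_frame = np.repeat(np.arange(4), 5)        # 4 segments of 5 frames each
print(personalized_summary(expert_scores, segment_of_frame,
                           preferred_segments=[2], summary_len=6))
```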

  10. Survey lines of the video and photos from the mini-SEABOSS sampling system acquired in Boston Harbor and approaches (surveylines_vid)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — These data are the trackline from the seafloor photograph and video survey conducted September 2004 using the mini-SeaBOSS sampling system on the R/V Rafael in...

  11. Developing a Video Steganography Toolkit

    OpenAIRE

    Ridgway, James; Stannett, Mike

    2014-01-01

    Although techniques for separate image and audio steganography are widely known, relatively little has been described concerning the hiding of information within video streams ("video steganography"). In this paper we review the current state of the art in this field, and describe the key issues we have encountered in developing a practical video steganography system. A supporting video is also available online at http://www.youtube.com/watch?v=YhnlHmZolRM
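
    One common building block of video steganography, least-significant-bit embedding in a frame, can be sketched as follows with NumPy; it is not necessarily the toolkit's method. Note that a lossless codec would be needed to preserve the hidden bits when the frames are re-encoded.

```python
# Sketch: hide/recover an ASCII message in the least significant bits of a frame.
import numpy as np

def embed_message(frame, message):
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = frame.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("message too long for this frame")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs with message bits
    return flat.reshape(frame.shape)

def extract_message(frame, n_chars):
    flat = frame.reshape(-1)
    bits = flat[:n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode()

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in video frame
stego = embed_message(frame, "hidden")
print(extract_message(stego, 6))
```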

  12. Automatic Traffic Data Collection under Varying Lighting and Temperature Conditions in Multimodal Environments: Thermal versus Visible Spectrum Video-Based Systems

    Directory of Open Access Journals (Sweden)

    Ting Fu

    2017-01-01

    Full Text Available Vision-based monitoring systems using visible spectrum (regular) video cameras can complement or substitute conventional sensors and provide rich positional and classification data. Although new camera technologies, including thermal video sensors, may improve the performance of digital video-based sensors, their performance under various conditions has rarely been evaluated at multimodal facilities. The purpose of this research is to integrate existing computer vision methods for automated data collection and evaluate the detection, classification, and speed measurement performance of thermal video sensors under varying lighting and temperature conditions. Thermal and regular video data were collected simultaneously under different conditions across multiple sites. Although the regular video sensor narrowly outperformed the thermal sensor during daytime, the performance of the thermal sensor is significantly better for low-visibility and shadow conditions, particularly for pedestrians and cyclists. Retraining the algorithm on thermal data yielded an improvement in the global accuracy of 48%. Thermal speed measurements were consistently more accurate than those from the regular video at daytime and nighttime. Thermal video is insensitive to lighting interference and pavement temperature, solves issues associated with visible light cameras for traffic data collection, and offers other benefits such as privacy, insensitivity to glare, storage space, and lower processing requirements.

  13. Video Games and Education: Designing Learning Systems for an Interactive Age

    Science.gov (United States)

    Squire, Kurt D.

    2008-01-01

    Recently, attention has been paid to computer and video games as a medium for learning. This article provides a way of conceptualizing them as possibility spaces for learning. It provides an overview of two research programs: (1) an after-school program using commercial games to develop deep expertise in game play and game creation, and (2) an…

  14. Selecting the video frequency bandwidth for a remote-sensing opto-electronic scanning system

    Science.gov (United States)

    Jahn, H.; Oertel, D.

    1980-10-01

    The present analysis deals with the influence of the video-channel harmonic response characteristic of a push-broom scanner on the spatial transmission function and the signal-to-noise ratio. It is shown that when detector noise is prevalent, the video frequency bandwidth influences both the transmission function and the SNR, but it influences only the transmission function when photon noise prevails.

  15. Underwater Communications for Video Surveillance Systems at 2.4 GHz.

    Science.gov (United States)

    Sendra, Sandra; Lloret, Jaime; Jimenez, Jose Miguel; Rodrigues, Joel J P C

    2016-10-23

    Video surveillance is needed to control many activities performed in underwater environments. The use of wired media can be a problem since the material specially designed for underwater environments is very expensive. In order to transmit images and videos wirelessly under water, three main technologies can be used: acoustic waves, which do not provide high bandwidth; optical signals, which offer high transfer rates but only over very small distances, because light dispersion in water severely penalizes the transmitted signals; and electromagnetic (EM) waves, which can provide enough bandwidth for video delivery. In cases where the distance between transmitter and receiver is short, the use of EM waves is an interesting option since they provide high enough data transfer rates to transmit videos with high resolution. This paper presents a practical study of the behavior of EM waves at 2.4 GHz in freshwater underwater environments. First, we discuss the minimum requirements of a network to allow video delivery. From these results, we measure the maximum distance between nodes and the round trip time (RTT) value depending on several parameters such as data transfer rate, signal modulation, working frequency, and water temperature. The results are statistically analyzed to determine their relation. Finally, the EM waves' behavior is modeled by a set of equations. The results show that there are some combinations of working frequency, modulation, transfer rate and temperature that offer better results than others. Our work shows that communication over short distances with high data transfer rates is feasible.

  16. Underwater Communications for Video Surveillance Systems at 2.4 GHz

    Directory of Open Access Journals (Sweden)

    Sandra Sendra

    2016-10-01

    Full Text Available Video surveillance is needed to control many activities performed in underwater environments. The use of wired media can be a problem since the material specially designed for underwater environments is very expensive. In order to transmit images and videos wirelessly under water, three main technologies can be used: acoustic waves, which do not provide high bandwidth; optical signals, which offer high transfer rates but only over very small distances, because light dispersion in water severely penalizes the transmitted signals; and electromagnetic (EM) waves, which can provide enough bandwidth for video delivery. In cases where the distance between transmitter and receiver is short, the use of EM waves is an interesting option since they provide high enough data transfer rates to transmit videos with high resolution. This paper presents a practical study of the behavior of EM waves at 2.4 GHz in freshwater underwater environments. First, we discuss the minimum requirements of a network to allow video delivery. From these results, we measure the maximum distance between nodes and the round trip time (RTT) value depending on several parameters such as data transfer rate, signal modulation, working frequency, and water temperature. The results are statistically analyzed to determine their relation. Finally, the EM waves' behavior is modeled by a set of equations. The results show that there are some combinations of working frequency, modulation, transfer rate and temperature that offer better results than others. Our work shows that communication over short distances with high data transfer rates is feasible.

  17. Enhanced video display and navigation for networked streaming video and networked video playlists

    Science.gov (United States)

    Deshpande, Sachin

    2006-01-01

    In this paper we present an automatic enhanced video display and navigation capability for networked streaming video and networked video playlists. Our proposed method uses Synchronized Multimedia Integration Language (SMIL) as the presentation language and Real Time Streaming Protocol (RTSP) as the network remote control protocol to automatically generate an "enhanced video strip" display for easy navigation. We propose and describe two approaches - a smart client approach and a smart server approach. We also describe a prototype system implementation of our proposed approach.

  18. High-speed three-frame image recording system using colored flash units and low-cost video equipment

    Science.gov (United States)

    Racca, Roberto G.; Scotten, Larry N.

    1995-05-01

    This article describes a method that allows the digital recording of sequences of three black and white images at rates of several thousand frames per second using a system consisting of an ordinary CCD camcorder, three flash units with color filters, a PC-based frame grabber board and some additional electronics. The maximum framing rate is determined by the duration of the flashtube emission, and for common photographic flash units lasting about 20 μs it can exceed 10,000 frames per second in actual use. The subject under study is strobe-illuminated using a red, a green and a blue flash unit controlled by a special sequencer, and the three images are captured by a color CCD camera on a single video field. Color is used as the distinguishing parameter that allows the overlaid exposures to be resolved. The video output for that particular field will contain three individual scenes, one for each primary color component, which potentially can be resolved with no crosstalk between them. The output is electronically decoded into the primary color channels, frame grabbed and stored into digital memory, yielding three time-resolved images of the subject. A synchronization pulse provided by the flash sequencer triggers the frame grabbing so that the correct video field is acquired. A scheme involving the use of videotape as intermediate storage allows the frame grabbing to be performed using a monochrome video digitizer. Ideally each flash-illuminated scene would be confined to one color channel, but in practice various factors, both optical and electronic, affect color separation. Correction equations have been derived that counteract these effects in the digitized images and minimize 'ghosting' between frames. Once the appropriate coefficients have been established through a calibration procedure that needs to be performed only once for a given configuration of the equipment, the correction process is carried out transparently in software every time a
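
    The channel-separation and crosstalk-correction idea can be sketched as below with NumPy. The 3×3 crosstalk matrix is a made-up placeholder; in the real system the correction coefficients come from the one-time calibration procedure described above.

```python
# Sketch: split one color field into three time-resolved frames and undo channel crosstalk.
import numpy as np

# Placeholder: how much of each flash (R, G, B) leaks into each sensor channel.
CROSSTALK = np.array([[1.00, 0.08, 0.02],
                      [0.05, 1.00, 0.06],
                      [0.01, 0.07, 1.00]])
CORRECTION = np.linalg.inv(CROSSTALK)   # calibrated inverse would be used in practice

def separate_frames(color_field):
    """color_field: HxWx3 uint8 array; returns three grayscale frames with
    inter-channel crosstalk removed by the linear correction."""
    h, w, _ = color_field.shape
    pixels = color_field.reshape(-1, 3).astype(float)
    corrected = np.clip(pixels @ CORRECTION.T, 0, 255).astype(np.uint8)
    return [corrected[:, c].reshape(h, w) for c in range(3)]
```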

  19. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. In the first place it analyzes the current academic discussion on this subject and confronts different opinions of both supporters and objectors of the idea that video games can be a full-fledged art form. The second point of this paper is to analyze the properties that are inherent to video games, in order to find the reason why the cultural elite considers video games as i...

  20. A distributed video retrieval system utilising broadband networked PC's for educational applications

    OpenAIRE

    Van Reeth, Frank; Raymaekers, Chris; TREKELS, Peter; VERKOYEN, Stefan; FLERACKERS, Eddy

    1998-01-01

    Conventional educational material is increasingly complemented with computer-based multimedia material. In order to make this material available to teachers and students in a structured manner, we developed a multimedia database and accompanying tools for creating, manipulating and formatting the teaching content. Recently, we expanded this educational multimedia database with the functionality to support streamed video as well. Given the vast amounts of data that need to be stored and transmi...

  1. Development of a Wireless Video Transfer System for Remote Control of a Lightweight UAV

    OpenAIRE

    Tosteberg, Joakim; Axelsson, Thomas

    2012-01-01

    A team of developers from Epsilon AB has developed a lightweight remote-controlled quadcopter named Crazyflie. The team wants to allow a pilot to navigate the quadcopter using video from an on-board camera as the only guidance. The master thesis evaluates the feasibility of mounting a camera module on the quadcopter and streaming images from the camera to a computer, using the existing quadcopter radio link. Using theoretical calculations and measurements, a set of requirements that must be fulfill...

  2. Stitching Stabilizer: Two-frame-stitching Video Stabilization for Embedded Systems

    OpenAIRE

    Satoh, Masaki

    2016-01-01

    In conventional electronic video stabilization, the stabilized frame is obtained by cropping the input frame to cancel camera shake. While a small cropping size results in strong stabilization, it does not provide satisfactory results from the viewpoint of image quality, because it narrows the angle of view. By fusing several frames, we can effectively expand the area of the input frames and achieve strong stabilization even with a large cropping size. Several methods for doing so have been s...

  3. Variations of the Respiration Signals for Respiratory-Gated Radiotherapy Using the Video Coached Respiration Guiding System

    CERN Document Server

    Lee, Hyun Jeong; Oh, Se An

    2015-01-01

    Respiratory-gated radiation therapy (RGRT) has been used to minimize the dose to normal tissue in lung-cancer radiotherapy. The present research aims to improve the regularity of respiration in RGRT using a video coached respiration guiding system. In the study, 16 patients with lung cancer were evaluated. The respiration signals of the patients were measured by a real-time position management (RPM) Respiratory Gating System (Varian, USA) and the patients were trained using the video coached respiration guiding system. The patients performed free breathing and guided breathing, and the respiratory cycles were acquired for ~5 min. Then, Microsoft Excel 2010 software was used to calculate the mean and standard deviation for each phase. The standard deviation was computed in order to analyze the improvement in the respiratory regularity with respect to the period and displacement. The standard deviation of the guided breathing decreased to 65.14% in the inhale peak and 71.04% in the exhale peak compared with the...

  4. Demonstration of 2.97-Gb/s video signal transmissions in DML-based IM-DDO-OFDM systems

    Science.gov (United States)

    Chen, Ming; He, Jing; Deng, Rui; Chen, Qinghui; Zhang, Jinlong; Chen, Lin

    2016-05-01

    To further investigate the feasibility of digital signal processing (DSP) algorithms (e.g., symbol timing synchronization, channel estimation and equalization, and sampling clock frequency offset (SCFO) estimation and compensation) for real-time optical orthogonal frequency-division multiplexing (OFDM) systems, 2.97-Gb/s real-time high-definition video signal parallel transmission is experimentally demonstrated in OFDM-based short-reach intensity-modulated direct-detection (IM-DD) systems. The experimental results show that, in the presence of ∼12 ppm SCFO between transmitter and receiver, adaptively modulated OFDM signal transmission over 20 km of standard single-mode fiber with a bit error rate of less than 1 × 10^-9 can be achieved using only a DSP-based SCFO estimation and compensation method, without forward error correction. To the best of our knowledge, this is the first demonstration of video signal transmission at a bit rate in excess of 1 Gb/s in a simple IM-DD optical OFDM system based on a real-valued inverse fast Fourier transform and fast Fourier transform and employing a directly modulated laser.

  5. Toward a More Usable Home-Based Video Telemedicine System: A Heuristic Evaluation of the Clinician User Interfaces of Home-Based Video Telemedicine Systems.

    Science.gov (United States)

    Agnisarman, Sruthy; Narasimha, Shraddhaa; Chalil Madathil, Kapil; Welch, Brandon; Brinda, Fnu; Ashok, Aparna; McElligott, James

    2017-04-24

    Telemedicine is the use of technology to provide and support health care when distance separates the clinical service and the patient. Home-based telemedicine systems use such technology to connect the patient, from the comfort of their home, with the clinician for medical support and care. For such systems to be used extensively, it is necessary to understand not only the issues faced by patients in using them but also those faced by clinicians. The aim of this study was to conduct a heuristic evaluation of four telemedicine software platforms (Doxy.me, Polycom, Vidyo, and VSee) to assess possible problems and limitations that could affect the usability of each system from the clinician's perspective. Five experts individually evaluated all four systems using Nielsen's list of heuristics, classifying the issues with a severity rating scale. A total of 46 unique problems were identified by the experts. The heuristics most frequently violated were Visibility of system status and Error prevention, each accounting for 24% (11/46) of the issues. Aesthetic and minimalist design was second, contributing 13% (6/46) of the issues. Heuristic evaluation coupled with a severity rating scale was found to be an effective method for identifying problems with the systems. Prioritization of these problems based on the rating provides a good starting point for resolving the issues affecting these platforms. There is a need for better transparency and a more streamlined approach to how physicians use telemedicine systems. Visibility of system status and speaking the users' language are key to achieving this.
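
    A hedged sketch of the bookkeeping such an evaluation involves: tallying unique issues by violated heuristic and sorting them by severity. The issue list below is invented for illustration and is not the study's data.

        # Tally heuristic-evaluation findings by heuristic and prioritise by severity.
        from collections import Counter

        issues = [  # made-up examples, not the study's actual findings
            {"heuristic": "Visibility of system status", "severity": 3},
            {"heuristic": "Error prevention", "severity": 4},
            {"heuristic": "Aesthetic and minimalist design", "severity": 2},
            {"heuristic": "Visibility of system status", "severity": 2},
        ]

        by_heuristic = Counter(i["heuristic"] for i in issues)
        total = len(issues)
        for heuristic, count in by_heuristic.most_common():
            print(f"{heuristic}: {count}/{total} issues ({count / total:.0%})")

        # Higher severity first, as a starting point for deciding what to fix.
        for issue in sorted(issues, key=lambda i: i["severity"], reverse=True):
            print(issue["severity"], issue["heuristic"])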

  6. Ultrafast video imaging of cell division from zebrafish egg using multimodal microscopic system

    Science.gov (United States)

    Lee, Sung-Ho; Jang, Bumjoon; Kim, Dong Hee; Park, Chang Hyun; Bae, Gyuri; Park, Seung Woo; Park, Seung-Han

    2017-07-01

    Unlike ordinary laser scanning microscopy, nonlinear optical laser scanning microscopy (SHG and THG microscopy) applies ultrafast laser technology, which delivers high peak powers at relatively inexpensive, low average power. Its short-pulse nature reduces ionization damage in organic molecules and enables bright, label-free imaging. In this study, we recorded cell division in zebrafish eggs as ultrafast video images using a multimodal nonlinear optical microscope. The result shows in-vivo, label-free imaging of cell division with sub-cellular resolution.

  7. A comparison of the BURP and conventional and modified jaw thrust manoeuvres for orotracheal intubation using the Clarus Video System.

    Science.gov (United States)

    Lee, A R; Yang, S; Shin, Y H; Kim, J A; Chung, I S; Cho, H S; Lee, J J

    2013-09-01

    We evaluated the effects of three airway manipulation manoeuvres: (a) conventional (single-handed chin lift); (b) backward, upward and right-sided pressure (BURP) manoeuvre; and (c) modified jaw thrust manoeuvre (two-handed aided by an assistant) on laryngeal view and intubation time using the Clarus Video System in 215 patients undergoing general anaesthesia with orotracheal intubation. In the first part of this study, the laryngeal view was recorded as a modified Cormack-Lehane grade with each manoeuvre. In the second part, intubation was performed using the assigned airway manipulation. The primary outcome was the time to intubation, and the secondary outcomes were the modified Cormack-Lehane grade, the number of attempts and the overall success rate. There were significant differences in modified Cormack-Lehane grade between the three airway manipulations (p < 0.0001). Post-hoc analysis indicated that the modified jaw thrust improved the laryngeal view compared with the conventional (p < 0.0001) and the BURP manoeuvres (p < 0.0001). The BURP worsened the laryngeal view compared with the conventional manoeuvre (p = 0.0132). The time to intubation in the modified jaw thrust group was shorter than with the conventional manoeuvre (p = 0.0004) and the BURP group (p < 0.0001). We conclude that the modified jaw thrust is the most effective manoeuvre at improving the laryngeal view and shortening intubation time with the Clarus Video System. © 2013 The Association of Anaesthetists of Great Britain and Ireland.

  8. Give-to-Get : Free-riding-resilient Video-on-Demand in P2P Systems

    NARCIS (Netherlands)

    Mol, J.J.D.; Pouwelse, J.A.; Meulpolder, M.; Epema, D.H.J.; Sips, H.J.

    2008-01-01

    Centralised solutions for Video-on-Demand (VoD) services, which stream pre-recorded video content to multiple clients who start watching at the moments of their own choosing, are not scalable because of the high bandwidth requirements of the central video servers. Peer-to-peer (P2P) techniques which

  9. Prototype of a computer system for managing data and video colonoscopy exams

    Directory of Open Access Journals (Sweden)

    Renato Bobsin Machado

    2012-03-01

    Full Text Available OBJECTIVE: Develop a prototype using computer resources to optimize the management of clinical information and video colonoscopy exams. MATERIALS AND METHODS: Through meetings with medical and computer experts, the following requirements were defined: management of information about medical professionals, patients and exams; video and image capture from video colonoscopes during the exam; and availability of these videos and images on the Web for further analysis. The technologies used were Java, Flex, JBoss, Red5, JBoss SEAM, MySQL and Flamingo. RESULTS AND DISCUSSION: The prototype contributed to the area of colonoscopy by providing resources to maintain the patients' history, exams and images from video colonoscopies. The web-based application allows greater flexibility for physicians and specialists. The resources for remote analysis of data and exams can help doctors and patients in the examination and diagnosis. CONCLUSION: The implemented prototype has contributed to improving colonoscopy-related processes. Future activities include deployment of the prototype in the Service of Coloproctology, real-time monitoring of these exams, and knowledge extraction from the structured database using artificial intelligence.

  10. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through...... YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  11. Robust Watermarking of Video Streams

    Directory of Open Access Journals (Sweden)

    T. Polyák

    2006-01-01

    Full Text Available In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files on their computer. This makes possible the unauthorized use of digital media, and without adequate protection systems the authors and distributors have no means to prevent it. Digital watermarking techniques can help these systems to be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can involve copyright data, access control etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.
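
    As a rough illustration of the embedding idea (one common robust-watermark construction, not necessarily the scheme proposed in this paper), the sketch below adds a small pseudo-random pattern to mid-frequency DCT coefficients of a frame and detects it by correlation. Block size, strength, and coefficient band are assumptions.

        # Spread-spectrum watermarking in the 2-D DCT domain of a video frame.
        import numpy as np
        from scipy.fft import dctn, idctn

        rng = np.random.default_rng(seed=42)        # the seed acts as the secret key

        def embed(frame, strength=2.0, band=slice(8, 40)):
            """frame: 2-D grayscale array. Returns (watermarked frame, mark)."""
            coeffs = dctn(frame, norm="ortho")
            mark = rng.standard_normal(coeffs[band, band].shape)
            coeffs[band, band] += strength * mark   # perturb mid-frequency coefficients
            return idctn(coeffs, norm="ortho"), mark

        def detect(frame, mark, band=slice(8, 40)):
            """Correlation detector: a high value suggests the mark is present."""
            region = dctn(frame, norm="ortho")[band, band]
            return float(np.sum(region * mark) /
                         (np.linalg.norm(region) * np.linalg.norm(mark)))

        frame = rng.uniform(0, 255, size=(480, 640))
        marked, mark = embed(frame)
        print("marked:", detect(marked, mark), "clean:", detect(frame, mark))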

  12. Streaming video-based 3D reconstruction method compatible with existing monoscopic and stereoscopic endoscopy systems

    Science.gov (United States)

    Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul

    2012-06-01

    Compared to open surgery, minimal invasive surgery offers reduced trauma and faster recovery. However, lack of direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D-panoramas from endoscopic video streams providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D-environments from mono- and stereoscopic endoscopy. The resulting 3D-reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.

  13. 61214++++','DOAJ-ART-EN'); return false;" href="+++++https://jual.nipissingu.ca/wp-content/uploads/sites/25/2014/06/v61214.m4v">61214++++">Jailed - Video

    Directory of Open Access Journals (Sweden)

    Cameron CULBERT

    2012-07-01

    Full Text Available As the public education system in Northern Ontario continues to take a downward spiral, a plethora of secondary school students are being placed in an alternative educational environment. Juxtaposing the two educational settings reveals very similar methods and characteristics of educating our youth, as opposed to using a truly alternative approach to education. This video reviews the relationship between public education and alternative education in a remote Northern Ontario setting. It is my belief that the traditional methods of teaching are not appropriate for educating at-risk students in alternative schools. Paper-and-pencil worksheets do not motivate these students to learn and succeed. Alternative education should emphasize experiential learning, a just-in-time curriculum based on every unique individual and the student's true passion for everyday life. Cameron Culbert was born on February 3rd, 1977 in North Bay, Ontario. His teenage years were split between attending public school and his willed curriculum on the ski hill. Culbert spent 10 years (1996-2002 & 2006-2010) competing for Canada as an alpine ski racer. His passion for teaching and coaching began as an athlete and has now transferred into the classroom and the community. As a graduate of Nipissing University (BA, BEd, MEd), Cameron's research interests are alternative education, physical education and technology in the classroom. Currently Cameron is an active educator and coach in Northern Ontario.

  14. Video quality assessment for web content mirroring

    Science.gov (United States)

    He, Ye; Fei, Kevin; Fernandez, Gustavo A.; Delp, Edward J.

    2014-03-01

    Due to increasing user expectations for the watching experience, moving high-quality web video streaming content from the small screens of mobile devices to larger TV screens has become popular. It is crucial to develop video quality metrics to measure the quality change across devices and network conditions. In this paper, we propose an automated scoring system to quantify user satisfaction. We compare the quality of local videos with that of the videos transmitted to a TV. Four video quality metrics, namely Image Quality, Rendering Quality, Freeze Time Ratio and Rate of Freeze Events, are used to measure video quality change during web content mirroring. To measure image quality and rendering quality, we compare the matched frames between the source video and the destination video using barcode tools. Freeze time ratio and rate of freeze events are measured after extracting video timestamps. Several user studies are conducted to evaluate the impact of each objective video quality metric on the subjective user watching experience.
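
    The two temporal metrics mentioned above can be sketched, under the assumption that per-frame presentation timestamps are available, as follows. The freeze threshold and the example timestamps are illustrative, not the paper's definitions.

        # Freeze Time Ratio and Rate of Freeze Events from frame timestamps.
        def freeze_metrics(timestamps, nominal_interval=1 / 30, tolerance=2.0):
            """timestamps: sorted presentation times in seconds.
            A gap longer than tolerance * nominal_interval counts as a freeze."""
            gaps = [t1 - t0 for t0, t1 in zip(timestamps, timestamps[1:])]
            threshold = tolerance * nominal_interval
            freeze_gaps = [g for g in gaps if g > threshold]
            duration = timestamps[-1] - timestamps[0]
            freeze_time_ratio = sum(freeze_gaps) / duration if duration else 0.0
            rate_of_freeze_events = len(freeze_gaps) / duration if duration else 0.0
            return freeze_time_ratio, rate_of_freeze_events

        # Example: a 30 fps stream with one stall of roughly half a second.
        ts = [i / 30 for i in range(60)] + [2.5 + i / 30 for i in range(60)]
        ratio, rate = freeze_metrics(ts)
        print(f"freeze time ratio {ratio:.2%}, freeze events per second {rate:.2f}")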

  15. Development of a video image-based QA system for the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system

    Energy Technology Data Exchange (ETDEWEB)

    Ebe, Kazuyu, E-mail: nrr24490@nifty.com; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji [Joetsu General Hospital, 616 Daido-Fukuda, Joetsu-shi, Niigata 943-8507 (Japan); Sugimoto, Satoru [Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo 113-8421 (Japan); Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi [Graduate School of Medical and Dental Sciences, Niigata University, Niigata 951-8510 (Japan); Court, Laurence [The University of Texas MD Anderson Cancer Center, Houston, Texas 77030-4009 (United States)

    2015-08-15

    Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient’s tumor motion. A substitute target with the patient’s tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors’ QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients’ tumor motions were evaluated in the wave forms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of 16 trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviation) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from 4D modeling function errors
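
    A minimal sketch of the error statistic quoted above, read here as the mean of the absolute per-frame Y differences plus two standard deviations, assuming the centers of the exposed target and exposed field have already been extracted from the video images. The sinusoidal data below is synthetic and only stands in for real trajectories.

        # Positional error in the Y direction: |mean difference| + 2 * standard deviation.
        import numpy as np

        def positional_error(target_y, field_y):
            """target_y, field_y: per-frame centre positions in mm (same length)."""
            diff = np.abs(np.asarray(target_y) - np.asarray(field_y))
            return float(np.mean(diff) + 2.0 * np.std(diff))

        # Illustrative data: ~1150 frames of a tracked sinusoidal motion.
        t = np.linspace(0, 40, 1156)
        target = 10.0 * np.sin(2 * np.pi * t / 4.0)
        field = target + np.random.normal(0.0, 0.3, size=target.shape)  # tracking noise
        print(f"positional error: {positional_error(target, field):.2f} mm")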

  16. Negotiation for Strategic Video Games

    OpenAIRE

    Afiouni, Einar Nour; Øvrelid, Leif Julian

    2013-01-01

    This project aims to examine the possibilities of using game theoretic concepts and multi-agent systems in modern video games with real time demands. We have implemented a multi-issue negotiation system for the strategic video game Civilization IV, evaluating different negotiation techniques with a focus on the use of opponent modeling to improve negotiation results.

  17. Automated analysis and annotation of basketball video

    Science.gov (United States)

    Saur, Drew D.; Tan, Yap-Peng; Kulkarni, Sanjeev R.; Ramadge, Peter J.

    1997-01-01

    Automated analysis and annotation of video sequences are important for digital video libraries, content-based video browsing and data mining projects. A successful video annotation system should provide users with a useful video content summary in a reasonable processing time. Given the wide variety of video genres available today, automatically extracting meaningful video content for annotation remains hard with currently available techniques. However, a wide range of video has inherent structure, so that some prior knowledge about the video content can be exploited to improve our understanding of the high-level video semantic content. In this paper, we develop tools and techniques for analyzing structured video by using the low-level information available directly from MPEG compressed video. Being able to work directly in the video compressed domain can greatly reduce the processing time and enhance storage efficiency. As a testbed, we have developed a basketball annotation system which combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions and possession times. We expect our approach can also be extended to structured video in other domains.

  18. Miniature stereoscopic video system provides real-time 3D registration and image fusion for minimally invasive surgery

    Science.gov (United States)

    Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav

    2007-02-01

    Sophisticated surgeries require the integration of several medical imaging modalities, like MRI and CT, which are three-dimensional. Many efforts are invested in providing the surgeon with this information in an intuitive & easy to use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real-time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using Visionsense camera, as a step toward fully automated multi-modality 3D registration.

  19. A Survey and Proposed Framework on the Soft Biometrics Technique for Human Identification in Intelligent Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Min-Gu Kim

    2012-01-01

    Full Text Available Biometric verification can be efficiently used for intrusion detection and intruder identification in video surveillance systems. Biometric techniques can be broadly divided into traditional and so-called soft biometrics. Whereas traditional biometrics deals with physical characteristics such as face features, eye iris, and fingerprints, soft biometrics is concerned with such information as gender, national origin, and height. Traditional biometrics is versatile and highly accurate, but it is very difficult to capture traditional biometric data from a distance and without personal cooperation. Soft biometrics, although less accurate, can be used much more freely. Recently, much research has been conducted on human identification using soft biometric data collected from a distance. In this paper, we use both traditional and soft biometrics for human identification and propose a framework for solving such problems as lighting, occlusion, and shadowing.

  20. RGBD Video Based Human Hand Trajectory Tracking and Gesture Recognition System

    Directory of Open Access Journals (Sweden)

    Weihua Liu

    2015-01-01

    Full Text Available The task of human hand trajectory tracking and gesture trajectory recognition based on synchronized color and depth video is considered. Toward this end, for hand tracking, a joint observation model with the hand cues of skin saliency, motion and depth is integrated into a particle filter in order to move particles to the local peak of the likelihood. The proposed hand tracking method, namely, the salient skin, motion, and depth based particle filter (SSMD-PF), is capable of improving the tracking accuracy considerably when the signer performs the gesture toward the camera and in front of moving, cluttered backgrounds. For gesture recognition, a shape-order context descriptor based on shape context is introduced, which can describe the gesture in the spatiotemporal domain. The efficient shape-order context descriptor can reveal the shape relationship and embed gesture sequence order information into the descriptor. Moreover, the shape-order context leads to a robust, gesture-invariant score. Our approach is complemented with experimental results on the challenging hand-signed digits datasets and the American Sign Language dataset, which corroborate the performance of the proposed techniques.

  1. Novel approach to automatically classify rat social behavior using a video tracking system.

    Science.gov (United States)

    Peters, Suzanne M; Pinter, Ilona J; Pothuizen, Helen H J; de Heer, Raymond C; van der Harst, Johanneke E; Spruijt, Berry M

    2016-08-01

    In the past, studies in behavioral neuroscience and drug development have relied on simple and quick readout parameters of animal behavior to assess treatment efficacy or to understand underlying brain mechanisms. The predominant use of classical behavioral tests has been repeatedly criticized during the last decades because of their poor reproducibility, poor translational value and the limited explanatory power in functional terms. We present a new method to monitor social behavior of rats using automated video tracking. The velocity of moving and the distance between two rats were plotted in frequency distributions. In addition, behavior was manually annotated and related to the automatically obtained parameters for a validated interpretation. Inter-individual distance in combination with velocity of movement provided specific behavioral classes, such as moving with high velocity when "in contact" or "in proximity". Human observations showed that these classes coincide with following (chasing) behavior. In addition, when animals are "in contact", but at low velocity, behaviors such as allogrooming and social investigation were observed. Also, low dose treatment with morphine and short isolation increased the time animals spent in contact or in proximity at high velocity. Current methods that involve the investigation of social rat behavior are mostly limited to short and relatively simple manual observations. A new and automated method for analyzing social behavior in a social interaction test is presented here and shows to be sensitive to drug treatment and housing conditions known to influence social behavior in rats. Copyright © 2016 Elsevier B.V. All rights reserved.
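
    A hedged sketch of the classification idea described above: combining inter-animal distance with movement velocity to label frames. The distance and velocity cut-offs are placeholders, not the study's calibrated values.

        # Classify per-frame social behaviour from inter-animal distance and velocity.
        import math

        CONTACT_DIST = 5.0      # cm: "in contact" (illustrative threshold)
        PROXIMITY_DIST = 15.0   # cm: "in proximity" (illustrative threshold)
        HIGH_VELOCITY = 20.0    # cm/s (illustrative threshold)

        def classify(pos_a, pos_b, vel_a, vel_b):
            """pos_*: (x, y) in cm; vel_*: speed in cm/s. Returns a behaviour label."""
            distance = math.dist(pos_a, pos_b)
            fast = max(vel_a, vel_b) > HIGH_VELOCITY
            if distance <= CONTACT_DIST:
                return ("contact, high velocity (following/chasing)" if fast
                        else "contact, low velocity (allogrooming/investigation)")
            if distance <= PROXIMITY_DIST:
                return "proximity, high velocity" if fast else "proximity, low velocity"
            return "apart"

        print(classify((10.0, 10.0), (12.0, 11.0), vel_a=25.0, vel_b=8.0))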

  2. Develop Solid State Laser Sources for High Resolution Video Projection Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brickeen, B.K.

    2000-10-24

    Magic Lantern and Honeywell FM and T worked together to develop lower-cost, visible light solid-state laser sources to use in laser projector products. Work included a new family of video displays that use lasers as light sources. The displays would project electronic images up to 15 meters across and provide better resolution and clarity than movie film, up to five times the resolution of the best available computer monitors, up to 20 times the resolution of television, and up to six times the resolution of HDTV displays. The products that could be developed as a result of this CRADA could benefit the economy in many ways, such as: (1) Direct economic impact in the local manufacture and marketing of the units. (2) Direct economic impact in exports and foreign distribution. (3) Influencing the development of other elements of display technology that take advantage of the signals that these elements allow. (4) Increased productivity for engineers, FAA controllers, medical practitioners, and military operatives.

  3. Rate-Adaptive Video Compression (RAVC) Universal Video Stick (UVS)

    Science.gov (United States)

    Hench, David L.

    2009-05-01

    The H.264 video compression standard, aka MPEG-4 Part 10 aka Advanced Video Coding (AVC), allows new flexibility in the use of video in the battlefield. This standard necessitates encoder chips to effectively utilize the increased capabilities. Such chips are designed to cover the full range of the standard, with designers of individual products given the capability of selecting the parameters that differentiate a broadcast system from a video conferencing system. The SmartCapture commercial product and the Universal Video Stick (UVS) military version are about the size of a thumb drive with analog video input and USB (Universal Serial Bus) output, and allow the user to select the imaging parameters on the fly, without stopping video transmission, thereby trading video bandwidth against video quality along four dimensions: 1) spatial, changing from 720 x 480 pixels to 320 x 360 pixels to 160 x 180 pixels; 2) temporal, changing from 30 frames/sec to 5 frames/sec; 3) transform quality, with a 5-to-1 range; and 4) Group of Pictures (GOP) size, which affects noise immunity. The host processor simply wraps the H.264 network abstraction layer packets into the appropriate network packets. We also discuss the recently adopted scalable amendment to H.264, which will allow RAVC to be limited at any point in the communication chain by throwing away preselected packets.

  4. Proposal of a Video-recording System for the Assessment of Bell's Palsy: Methodology and Preliminary Results.

    Science.gov (United States)

    Monini, Simonetta; Marinozzi, Franco; Atturo, Francesca; Bini, Fabiano; Marchelletta, Silvia; Barbara, Maurizio

    2017-09-01

    To propose a new objective video-recording procedure to assess and monitor over time the severity of facial nerve palsy. No objective method for facial palsy (FP) assessment is universally accepted. The faces of subjects presenting with different degrees of facial nerve deficit, as measured by the House-Brackmann (HB) grading system, were videotaped after positioning, at specific points, 10 gray circular markers made of a retroreflective material. Video recording included the resting position and six ordered facial movements. Editing and data elaboration were performed using software instructed to assess marker distances. A score was then extracted from the differences in marker distances between the two sides. The higher the degree of FP, the higher the score registered during each movement. The statistical significance between FP grades differed across the various movements: it was uniform when closing the eyes gently, whereas when wrinkling the nose there was no difference between the HB grade III and IV groups, and when smiling no difference was evidenced between the HB grade IV and V groups. The global range index, which represents the overall degree of FP, was between 6.2 and 7.9 in normal subjects (HB grade I); between 10.6 and 18.91 in HB grade II; between 22.19 and 33.06 in HB grade III; between 38.61 and 49.75 in HB grade IV; and between 50.97 and 66.88 in HB grade V. The proposed objective methodology could provide numerical data that correspond to the different degrees of FP as assessed by the subjective HB grading system. These data can also be used individually to score selected areas of the paralyzed face when recovery occurs with different timing in different facial regions.
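
    A minimal sketch of how a left/right asymmetry score could be derived from paired marker distances. The pairing of markers and the normalised-difference formula are simplified assumptions for illustration, not the authors' exact scoring equations.

        # Simplified left/right asymmetry score from paired marker distances.
        import numpy as np

        def asymmetry_score(left_distances, right_distances):
            """left/right_distances: distances (e.g. in pixels) between corresponding
            marker pairs on each hemiface, measured during one facial movement."""
            left = np.asarray(left_distances, dtype=float)
            right = np.asarray(right_distances, dtype=float)
            # Normalised absolute difference per marker pair, summed over pairs.
            return float(np.sum(np.abs(left - right) / ((left + right) / 2.0)) * 100.0)

        # Example: five marker-pair distances during "closing the eyes gently".
        left = [42.0, 55.3, 61.8, 70.2, 38.9]
        right = [40.5, 57.0, 66.1, 75.4, 45.2]
        print(f"asymmetry score: {asymmetry_score(left, right):.1f}")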

  5. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This chapter focuses on methodological issues that arise when using (digital) video for research communication, not least online. Video has long been used in research for data collection and for research communication. With digitization and the internet, however...... new opportunities and challenges have emerged for communicating and distributing research results to different target groups via video. At the same time, classic methodological problems, such as the researcher's positioning in relation to the subject under study, remain relevant. Both classic and new...... issues are discussed in the chapter, which frames the discussion around different possible positionings: communicator, storyteller, or dialogist. These positions relate to genres within 'academic video'. The chapter concludes with a methodological toolbox offering tools for planning...

  6. User aware video streaming

    Science.gov (United States)

    Kerofsky, Louis; Jagannath, Abhijith; Reznik, Yuriy

    2015-03-01

    We describe the design of a video streaming system that adapts to viewing conditions to reduce the bitrate needed for delivery of video content. A visual model is used to determine the resolution that is sufficient under various viewing conditions. Sensors on a mobile device estimate properties of the viewing conditions, particularly the distance to the viewer. We leverage the framework of existing adaptive bitrate streaming systems such as HLS, Smooth Streaming or MPEG-DASH. The client rate-selection logic is modified to include a sufficient resolution computed using the visual model and the estimated viewing conditions. Our experiments demonstrate significant bitrate savings compared to conventional streaming methods, which do not exploit viewing conditions.
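
    A sketch of the rate-selection idea under a simple visual-acuity assumption (roughly 60 pixels per degree of visual angle is sufficient). The display size, rendition ladder, and acuity constant below are illustrative assumptions, not the paper's visual model.

        # Pick a "sufficient" vertical resolution from the estimated viewing distance,
        # then choose the lowest rendition that meets it.
        import math

        PIXELS_PER_DEGREE = 60.0          # rough foveal acuity limit (assumption)
        RENDITIONS = [(360, 800), (480, 1200), (720, 2500), (1080, 5000)]  # (height, kbps)

        def sufficient_height(viewing_distance_m, display_height_m=0.10):
            """Vertical resolution beyond which extra pixels cannot be resolved."""
            angle_deg = 2 * math.degrees(math.atan(display_height_m /
                                                   (2 * viewing_distance_m)))
            return angle_deg * PIXELS_PER_DEGREE

        def select_rendition(viewing_distance_m):
            needed = sufficient_height(viewing_distance_m)
            for height, kbps in RENDITIONS:
                if height >= needed:
                    return height, kbps
            return RENDITIONS[-1]

        print(select_rendition(0.3))   # device held close -> highest rendition needed
        print(select_rendition(1.0))   # device far away   -> a lower rendition suffices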

  7. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... World Videos. The workshops were run on December 4, 2016, in Cancun in Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers...

  8. A comparison between flexible electrogoniometers, inclinometers and three-dimensional video analysis system for recording neck movement.

    Science.gov (United States)

    Carnaz, Letícia; Moriguchi, Cristiane S; de Oliveira, Ana Beatriz; Santiago, Paulo R P; Caurin, Glauco A P; Hansson, Gert-Åke; Coury, Helenice J C Gil

    2013-11-01

    This study compared neck range-of-movement recording using three different methods: electrogoniometers (EGM), inclinometers (INC) and a three-dimensional video analysis system (IMG), in simultaneous and synchronized data collection. Twelve females performed neck flexion-extension, lateral flexion, rotation and circumduction. The differences between EGM, INC, and IMG were calculated sample by sample. For the flexion-extension movement, IMG underestimated the amplitude by 13%; moreover, EGM showed crosstalk of about 20% for the lateral flexion and rotation axes. In lateral flexion, all systems showed similar amplitude and the inter-system differences were moderate (4-7%). For rotation, EGM showed high crosstalk (13%) for the flexion-extension axis. During circumduction, IMG underestimated the amplitude of flexion-extension movements by about 11%, and the inter-system differences were high (about 17%) except for INC-IMG regarding lateral flexion (7%) and EGM-INC regarding flexion-extension (10%). For application in the workplace, INC presents good results compared to IMG and EGM, though INC cannot record rotation. EGM should be improved in order to reduce its crosstalk errors and allow recording of the full neck range of movement. Due to non-optimal positioning of the cameras for recording flexion-extension, IMG underestimated the amplitude of these movements. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.

  9. The Boy Who Learned To Read Through Sustained Video Game Play: Considering Systemic Resistance To The Use Of New Texts In The Classroom

    Directory of Open Access Journals (Sweden)

    Rochelle SKOGEN

    2012-12-01

    Full Text Available Various studies have discussed the pedagogical potential of video game play in the classroom, but resistance to such texts remains high. The study presented here discusses the case of one young boy who, having failed to learn to read in the public school system, was able to learn in a private Sudbury-model school where video games were not only allowed but considered important learning tools. Findings suggest that the incorporation of such new texts in today's public schools has the potential to motivate and enhance the learning of children.

  10. THE BOY WHO LEARNED TO READ THROUGH SUSTAINED VIDEO GAME PLAY: CONSIDERING SYSTEMIC RESISTANCE TO THE USE OF ‘NEW TEXTS’ IN THE CLASSROOM

    Directory of Open Access Journals (Sweden)

    Rochelle SKOGEN

    2012-05-01

    Full Text Available Various studies have discussed the pedagogical potential of video game play in the classroom, but resistance to such texts remains high. The study presented here discusses the case of one young boy who, having failed to learn to read in the public school system, was able to learn in a private Sudbury-model school where video games were not only allowed but considered important learning tools. Findings suggest that the incorporation of such new texts in today's public schools has the potential to motivate and enhance the learning of children.

  11. Research on key technologies in multiview video and interactive multiview video streaming

    OpenAIRE

    Xiu, Xiaoyu

    2011-01-01

    Emerging video applications are being developed where multiple views of a scene are captured. Two central issues in the deployment of future multiview video (MVV) systems are compression efficiency and interactive video experience, which makes it necessary to develop advanced technologies on multiview video coding (MVC) and interactive multiview video streaming (IMVS). The former aims at efficient compression of all MVV data in a rate-distortion (RD) optimal manner by exploiting both temporal ...

  12. The Use of Video Modeling with the Picture Exchange Communication System to Increase Independent Communicative Initiations in Preschoolers with Autism and Developmental Delays

    Science.gov (United States)

    Cihak, David F.; Smith, Catherine C.; Cornett, Ashlee; Coleman, Mari Beth

    2012-01-01

    The use of video modeling (VM) procedures in conjunction with the picture exchange communication system (PECS) to increase independent communicative initiations in preschool-age students was evaluated in this study. The four participants were 3-year-old children with limited communication skills prior to the intervention. Two of the students had…

  13. [Assessment of learning activities using streaming video for laboratory practice education: aiming for development of E-learning system that promotes self-learning].

    Science.gov (United States)

    Takeda, Naohito; Takeuchi, Isao; Haruna, Mitsumasa

    2007-12-01

    In order to develop an e-learning system that promotes self-learning, lectures and basic operations in the laboratory practice of chemistry were recorded and edited on DVD media, consisting of 8 streaming videos as learning materials. Twenty-six students wanted to watch the DVD and answered the following question after they had watched it: "Do you think the video would serve to encourage you to study independently in the laboratory practice?" Almost all students (95%) approved of its usefulness, and more than 60% of them watched the videos repeatedly in order to acquire deeper knowledge and skill in the experimental operations. More than 60% answered that the demonstration-experiment should be continued in the laboratory practice, in spite of distribution of the DVD media.

  14. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real W...

  15. A New Motion Capture System For Automated Gait Analysis Based On Multi Video Sequence Analysis

    DEFF Research Database (Denmark)

    Jensen, Karsten; Juhl, Jens

    There is an increasing demand for assessing foot malpositions and an interest in monitoring the effect of treatment. In the last decades several different motion capture systems have been used. This abstract describes a new low-cost motion capture system.

  16. The Use of Video-Gaming Devices as a Motivation for Learning Embedded Systems Programming

    Science.gov (United States)

    Gonzalez, J.; Pomares, H.; Damas, M.; Garcia-Sanchez,P.; Rodriguez-Alvarez, M.; Palomares, J. M.

    2013-01-01

    As embedded systems are becoming prevalent in everyday life, many universities are incorporating embedded systems-related courses in their undergraduate curricula. However, it is not easy to motivate students in such courses since they conceive of embedded systems as bizarre computing elements, different from the personal computers with which they…

  17. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high-resolution video up to 4 Mpixels @ 60 fps or high-frame-rate video up to about 1000 fps @ 512 x 512 pixels.

  18. Video Surveillance: Privacy Issues and Legal Compliance

    DEFF Research Database (Denmark)

    Mahmood Rajpoot, Qasim; Jensen, Christian D.

    2015-01-01

    Pervasive usage of video surveillance is rapidly increasing in developed countries. Continuous security threats to public safety demand use of such systems. Contemporary video surveillance systems offer advanced functionalities which threaten the privacy of those recorded in the video....... There is a need to balance the usage of video surveillance against its negative impact on privacy. This chapter aims to highlight the privacy issues in video surveillance and provides a model to help identify the privacy requirements in a video surveillance system. The authors make a step in the direction...... of investigating the existing legal infrastructure for ensuring privacy in video surveillance and suggest guidelines in order to help those who want to deploy video surveillance while least compromising the privacy of people and complying with legal infrastructure....

  19. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition if the object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to the target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexities using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Network-Symbolic Transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground, and perceptual grouping are special kinds of network-symbolic transformations. Such image/video understanding systems will reliably recognize targets.

  20. Target recognition with image/video understanding systems based on active vision principle and network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a larger system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. This mechanism provides reliable recognition if the target is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to the target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexities using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. The logic of visual scenes can be captured in Network-Symbolic models and used for disambiguation of visual information. Network-Symbolic Transformations derive abstract structures, which allow for invariant recognition of an object as an exemplar of a class. Active vision helps build consistent, unambiguous models. Such image/video understanding systems will be able to reliably recognize targets in real-world conditions.

  1. SE3000 Summer AY 14 Systems Engineering Colloquium, Total Ownership Cost Modeling [video

    OpenAIRE

    Madachy, Raymond J.

    2014-01-01

    Naval Postgraduate School Graduate School of Engineering & Applied Sciences, Total Ownership Cost Modeling presented by Raymond J. Madachy, Associate Professor of Systems Engineering at the Naval Postgraduate School. Total Ownership Cost (TOC) is the sum cost of system acquisition, development, and operations including direct and indirect costs. In the DoD, cost modeling is needed to enable tradespace analysis of affordability with other system ilities. Parametric cost models will be overv...

  2. Video monitoring of visible atmospheric emissions: from a manual device to a new fully automatic detection and classification device; Video surveillance des rejets atmospheriques d'un site siderurgique: d'un systeme manuel a la detection automatique

    Energy Technology Data Exchange (ETDEWEB)

    Bardet, I.; Ryckelynck, F.; Desmonts, T. [Sollac, 59 - Dunkerque (France)

    1999-11-01

    Complete text of publication follows: the context of strong local sensitivity to dust emissions from an integrated steel plant justifies monitoring the emission of abnormally coloured smoke from this plant. In a first step, the watch was done 'visually' by screening and counting the puff emissions through a set of seven cameras and video recorders. The development of a new device performing automatic picture analysis made it possible to automate the inspection. The new system detects and counts the incidents and sends an alarm to the process operator. After further testing, this approach to automatic detection can be extended to other uses in the environmental field. (authors)

  3. P2P Video Streaming Strategies based on Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    F.A. López-Fuentes

    2015-02-01

    Full Text Available Video streaming over the Internet has gained significant popularity during the last years, and academia and industry have devoted a great research effort in this direction. In this scenario, scalable video coding (SVC) has emerged as an important video standard that provides more functionality for video transmission and storage applications. This paper proposes and evaluates two strategies based on scalable video coding for P2P video streaming services. In the first strategy, SVC is used to offer differentiated video quality to peers with heterogeneous capacities. The second strategy uses SVC to reach a homogeneous video quality across different videos from different sources. The obtained results show that our proposed strategies improve system performance and introduce benefits such as differentiated video quality for clients with heterogeneous capacities and variable network conditions.
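
    A minimal sketch of the first strategy (differentiated quality per peer): choosing how many SVC layers to forward to a peer given its download capacity. The layer bitrates and headroom factor are placeholders, not values from the paper.

        # Choose how many SVC layers a peer receives, given its download capacity.
        LAYER_BITRATES_KBPS = [400, 300, 300, 500]   # layer 0 is the base layer

        def layers_for_peer(capacity_kbps, headroom=0.8):
            """Return the number of layers whose cumulative bitrate fits within a
            fraction of the peer's capacity; the base layer is always sent."""
            budget = capacity_kbps * headroom
            total, layers = 0.0, 0
            for rate in LAYER_BITRATES_KBPS:
                if total + rate > budget and layers >= 1:
                    break
                total += rate
                layers += 1
            return max(layers, 1)

        for capacity in (350, 900, 2000):
            print(capacity, "kbps ->", layers_for_peer(capacity), "layer(s)")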

  4. A Comparative Study of Da Vinci Robot System with Video-assisted Thoracoscopy in the Surgical Treatment of Mediastinal Lesions

    Directory of Open Access Journals (Sweden)

    Renquan DING

    2014-07-01

    Full Text Available Background and objective In recent years, the application of the Da Vinci robot system in the surgical treatment of intrathoracic mediastinal diseases has matured. The aim of this study is to summarize the clinical data on mediastinal lesions treated at the General Hospital of Shenyang Military Region over the past 4 years, and to analyze the treatment effect and promising applications of the da Vinci robot system in the surgical treatment of mediastinal lesions. Methods 203 cases of mediastinal lesions were collected from the General Hospital of Shenyang Military Region between 2010 and 2013. These patients were divided into two groups, da Vinci and video-assisted thoracoscopic surgery (VATS), according to the treatment selected. Operative time, intraoperative blood loss, postoperative drainage within three days after surgery, duration of drainage, hospital stay and hospitalization cost were compared. Results All patients were operated on successfully, postoperative recovery was good, and there were no perioperative deaths. Operative time was 82 (20-320) min in the robot group versus 89 (35-360) min in the thoracoscopic group (P>0.05). Intraoperative blood loss was 10 (1-100) mL in the robot group versus 50 (3-1,500) mL in the thoracoscopic group. Postoperative drainage within three days after surgery was 215 (0-2,220) mL in the robot group versus 350 (50-1,810) mL in the thoracoscopic group. The duration of drainage after surgery was 3 (0-10) d in the robot group versus 5 (1-18) d in the thoracoscopic group. Hospital stay was 7 (2-15) d in the robot group versus 9 (2-50) d in the thoracoscopic group. Hospitalization cost was (18,983.6±4,461.2) RMB in the robot group versus (9,351.9±2,076.3) RMB in the thoracoscopic group (all P<0.001). Conclusion The da Vinci robot system is safe and efficient in the treatment of mediastinal lesions compared with video

  5. Video Content Foraging

    NARCIS (Netherlands)

    van Houten, Ynze; Schuurman, Jan Gerrit; Verhagen, Pleunes Willem; Enser, Peter; Kompatsiaris, Yiannis; O’Connor, Noel E.; Smeaton, Alan F.; Smeulders, Arnold W.M.

    2004-01-01

    With information systems, the real design problem is not increased access to information, but greater efficiency in finding useful information. In our approach to video content browsing, we try to match the browsing environment with human information processing structures by applying ideas from

  6. Characteristics of Instructional Videos

    Science.gov (United States)

    Beheshti, Mobina; Taspolat, Ata; Kaya, Omer Sami; Sapanca, Hamza Fatih

    2018-01-01

    Nowadays, video plays a significant role in education in terms of its integration into traditional classes, the principal delivery system of information in classes particularly in online courses as well as serving as a foundation of many blended classes. Hence, education is adopting a modern approach of instruction with the target of moving away…

  7. Systemic innovation and the virtues of going virtual : The case of the digital video disc

    NARCIS (Netherlands)

    De Laat, PB

    According to David Teece, only strong and integrated firms can successfully innovate in a systemic fashion. Looser coalitions consisting of joint ventures, alliances, or virtual partners will not be able to create a systemic innovation, let alone to set standards for it, or to control its further

  8. Maryland Public School Standards for Telecommunications Distribution Systems: Infrastructure Design for Voice, Video, and Data Communications.

    Science.gov (United States)

    Maryland State Dept. of Education, Baltimore. School Facilities Branch.

    Telecommunications infrastructure has the dual challenges of maintaining quality while accommodating change, issues that have long been met through a series of implementation standards. This document is designed to ensure that telecommunications systems within the Maryland public school system are also capable of meeting both challenges and…

  9. Novel Airborne Video Sensors. Super-Resolution Multi-Camera Panoramic Imaging System for UAVs

    National Research Council Canada - National Science Library

    Negahdaripour, Shahriar

    2004-01-01

    ... by computer simulations, with/without supplementary gyro and GPS. How various system parameters impact the achievable precision of panoramic system in 3-D terrain feature localization and UAV motion estimation is determined for the A=0.5-2 KM...

  10. Monitoring of Structures and Mechanical Systems Using Virtual Visual Sensors for Video Analysis: Fundamental Concept and Proof of Feasibility

    Directory of Open Access Journals (Sweden)

    Thomas Schumacher

    2013-12-01

    Full Text Available Structural health monitoring (SHM) has become a viable tool to provide owners of structures and mechanical systems with quantitative and objective data for maintenance and repair. Traditionally, discrete contact sensors such as strain gages or accelerometers have been used for SHM. However, distributed remote sensors could be advantageous since they don't require cabling and can cover an area rather than a limited number of discrete points. Along this line we propose a novel monitoring methodology based on video analysis. By employing commercially available digital cameras combined with efficient signal processing methods we can measure and compute the fundamental frequency of vibration of structural systems. The basic concept is that small changes in the intensity value of a monitored pixel with fixed coordinates, caused by the vibration of structures, can be captured by employing techniques such as the Fast Fourier Transform (FFT). In this paper we introduce the basic concept and mathematical theory of the proposed so-called virtual visual sensor (VVS), present a set of initial laboratory experiments to demonstrate the accuracy of this approach, and provide a practical monitoring example of an in-service bridge. Finally, we discuss further work to improve the current methodology.
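
    The VVS concept described above reduces to a short signal-processing routine: sample the intensity of one fixed pixel across frames and take the FFT of that time series. The following is a minimal sketch of that idea, assuming OpenCV and NumPy; the video path and pixel coordinates are hypothetical placeholders, not values from the paper.

```python
# Minimal "virtual visual sensor" sketch: estimate a structure's dominant
# vibration frequency from the intensity history of one fixed pixel.
import cv2
import numpy as np

def dominant_frequency(video_path, px, py):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    trace = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        trace.append(float(gray[py, px]))            # intensity of the monitored pixel
    cap.release()

    signal = np.asarray(trace) - np.mean(trace)      # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]        # skip the zero-frequency bin

# Example call (hypothetical file and pixel coordinates):
# f0 = dominant_frequency("beam_vibration.mp4", px=320, py=240)
```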

  11. A Master-Slave Surveillance System to Acquire Panoramic and Multiscale Videos

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2014-01-01

    Full Text Available This paper describes a master-slave visual surveillance system that uses stationary-dynamic camera assemblies to achieve wide field of view and selective focus of interest. In this system, the fish-eye panoramic camera is capable of monitoring a large area, and the PTZ dome camera has high mobility and zoom ability. In order to achieve the precise interaction, preprocessing spatial calibration between these two cameras is required. This paper introduces a novel calibration approach to automatically calculate a transformation matrix model between two coordinate systems by matching feature points. In addition, a distortion correction method based on Midpoint Circle Algorithm is proposed to handle obvious horizontal distortion in the captured panoramic image. Experimental results using realistic scenes have demonstrated the efficiency and applicability of the system with real-time surveillance.

  12. Video Moving Target Indication in the Analysts’ Detection Support System

    Science.gov (United States)

    2006-04-01

    implementation. The system currently has a bug in that there is no synchronisation between the input frames and the tracked objects reported for each...frame (due to a bug in the third party MPEG decoder). It was therefore necessary to synchronise the reporting with the input frames by hand, and this...algorithms for our VMTI system. References 1. S. Ali and M. Shah. COCOA - tracking in aerial imagery. Proc. Int. Conf. on Computer Vision, Beijing, China

  13. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition...

  14. Video Analytics

    DEFF Research Database (Denmark)

    include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition......This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...

  15. Fast Aerial Video Stitching

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-10-01

    Full Text Available The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system which has a unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as FPGA or CUDA, our system runs on a normal personal computer. To achieve this, after a careful comparison of the existing invariant features, we choose the FAST corner and binary descriptor for efficient feature extraction and representation, and present a spatial and temporal coherent filter to fuse the UAV motion information into the feature matching. The proposed filter can remove the majority of feature correspondence outliers and significantly increase the speed of robust feature matching by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key frame-based stitching framework is used to reduce the accumulation errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate an accurate stitching image for aerial video stitching tasks.
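
    For orientation, the sketch below shows a generic two-frame stitch with binary features and a RANSAC homography, roughly in the spirit of the FAST-corner / binary-descriptor pipeline described above; it does not implement the paper's spatio-temporal coherence filter or key-frame logic, and the file names are placeholders.

```python
# Generic feature-based stitching of two aerial frames with OpenCV.
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    orb = cv2.ORB_create(nfeatures=2000)          # FAST keypoints + binary descriptors
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:500]

    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # RANSAC rejects outlier matches

    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))     # warp frame B into frame A's plane
    canvas[0:h, 0:w] = img_a
    return canvas

# mosaic = stitch_pair(cv2.imread("frame_000.jpg"), cv2.imread("frame_001.jpg"))
```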

  16. Video Streaming in the Wild West

    OpenAIRE

    Helen Gail Prosser

    2006-01-01

    Northern Lakes College in north-central Alberta is the first post-secondary institution in Canada to use the Media on Demand digital video system to stream large video files between dispersed locations (Karlsen). Staff and students at distant locations of Northern Lakes College are now viewing more than 350 videos using video streaming technology. This has been made possible by SuperNet, a high capacity broadband network that connects schools, hospitals, libraries and government offices thr...

  17. A New Learning Control System for Basketball Free Throws Based on Real Time Video Image Processing and Biofeedback

    Directory of Open Access Journals (Sweden)

    R. Sarang

    2018-02-01

    Full Text Available Shooting free throws plays an important role in basketball. The major problem in performing a correct free throw seems to be inappropriate training. Training is performed offline and it is often not that persistent. The aim of this paper is to consciously modify and control the free throw using biofeedback. Elbow and shoulder dynamics are calculated by an image processing technique equipped with a video image acquisition system. The proposed setup, named the learning control system, is able to quantify and provide feedback on the above parameters in real time as audio signals, enabling correct learning and conscious control of shooting. Experimental results showed improvements in the free throw shooting style, including the shot pocket and locked position. The mean values of the elbow and shoulder angles were controlled at approximately 89° and 26° for the shot pocket, and these angles were tuned to approximately 180° and 47°, respectively, for the locked position (close to the desired free-throw pattern based on valid FIBA references). Not only did the mean values improve, but the standard deviations of these angles also decreased meaningfully, which indicates convergence and uniformity of the shooting style. Also, in training conditions the average percentage of successful free throws increased from about 64% to 87% after using this setup, and in competition conditions the average percentage of successful free throws improved by about 20%, although using the learning control system may not be the only reason for these outcomes. The proposed system is easy to use, inexpensive, portable and applicable in real time.
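
    The quantity such a biofeedback loop needs is simply the joint angle at the elbow or shoulder computed from tracked 2D joint positions. A minimal sketch follows, assuming joint coordinates come from an upstream video tracking step; the pixel coordinates shown are illustrative, not data from the study.

```python
# Joint-angle computation for video-based biofeedback.
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at point b formed by the segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Hypothetical pixel coordinates (shoulder, elbow, wrist) from one frame:
elbow_angle = joint_angle((120, 80), (160, 140), (150, 210))
print(f"elbow angle ~ {elbow_angle:.1f} deg")   # compare against the ~180 deg target at release
```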

  18. Reductions in the variations of respiration signals for respiratory-gated radiotherapy when using the video-coaching respiration guiding system

    Science.gov (United States)

    Lee, Hyun Jeong; Yea, Ji Woon; Oh, Se An

    2015-07-01

    Respiratory-gated radiation therapy (RGRT) has been used to minimize the dose to normal tissue in lung-cancer radiotherapy. The present research aims to improve the regularity of respiration in RGRT by using a video-coached respiration guiding system. In the study, 16 patients with lung cancer were evaluated. The respiration signals of the patients were measured by using a real-time position management (RPM) respiratory gating system (Varian, USA), and the patients were trained using the video-coaching respiration guiding system. The patients performed free breathing and guided breathing, and the respiratory cycles were acquired for ~5 min. Then, Microsoft Excel 2010 software was used to calculate the mean and the standard deviation for each phase. The standard deviation was computed in order to analyze the improvement in the respiratory regularity with respect to the period and the displacement. The standard deviation of the guided breathing decreased to 48.8% in the inhale peak and 24.2% in the exhale peak compared with the values for the free breathing of patient 6. The standard deviation of the respiratory cycle was found to be decreased when using the respiratory guiding system. The respiratory regularity was significantly improved when using the video-coaching respiration guiding system. Therefore, the system is useful for improving the accuracy and the efficiency of RGRT.
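
    The regularity measure reported here boils down to locating the inhale and exhale peaks of a respiration trace and comparing the standard deviation of their amplitudes. A small sketch of that computation is shown below, using SciPy's peak finder on a synthetic trace; the sampling rate and signal are placeholders, not the study's data.

```python
# Peak-amplitude variability of a respiration trace (free vs. guided breathing).
import numpy as np
from scipy.signal import find_peaks

def peak_amplitude_sd(signal, fs):
    # require peaks at least ~2 s apart, so roughly one inhale/exhale per cycle
    inhale_idx, _ = find_peaks(signal, distance=int(2 * fs))
    exhale_idx, _ = find_peaks(-signal, distance=int(2 * fs))
    return np.std(signal[inhale_idx]), np.std(signal[exhale_idx])

fs = 25.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)               # ~5 min recording, as in the protocol
trace = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(t.size)   # synthetic breathing trace
sd_inhale, sd_exhale = peak_amplitude_sd(trace, fs)
print(f"inhale-peak SD: {sd_inhale:.3f}, exhale-peak SD: {sd_exhale:.3f}")
```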

  19. An Interdisciplinary Approach to Concept Development for Interactive Video Communication Systems.

    Science.gov (United States)

    Roth, Susan King

    In winter of 1993, a design research project was conducted in the Department of Interior Design at Ohio State University by interdisciplinary teams of graduate students from Industrial Design, Industrial Systems and Engineering, Marketing, and Communication. It was, in effect, a course which aimed to apply knowledge from the students' diverse…

  20. Image/video understanding systems based on network-symbolic models and active vision

    Science.gov (United States)

    Kuvich, Gary

    2004-07-01

    Vision is a part of information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. It is hard to split the entire system apart, and vision mechanisms cannot be completely understood separately from informational processes related to knowledge and intelligence. Brain reduces informational and computational complexities, using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Vision is a component of situation awareness, motion and planning systems. Foveal vision provides semantic analysis, recognizing objects in the scene. Peripheral vision guides fovea to salient objects and provides scene context. Biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding precise artificial computations of 3-D models. Network-Symbolic transformations derive more abstract structures that allows for invariant recognition of an object as exemplar of a class and for a reliable identification even if the object is occluded. Systems with such smart vision will be able to navigate in real environment and understand real-world situations.

  1. Active vision and image/video understanding systems for UGV based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-09-01

    Vision evolved as a sensory system for reaching, grasping and other motion activities. In advanced creatures, it has become a vital component of situation awareness, navigation and planning systems. Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is an interpretation of visual information in terms of such knowledge models. It is hard to split such a system apart. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for natural processing of visual information. It converts visual information into relational Network-Symbolic models, avoiding artificial precise computations of 3-dimensional models. Logic of visual scenes can be captured in such models and used for disambiguation of visual information. Network-Symbolic transformations derive abstract structures, which allows for invariant recognition of an object as exemplar of a class. Active vision helps create unambiguous network-symbolic models. This approach is consistent with NIST RCS. The UGV, equipped with such smart vision, will be able to plan path and navigate in a real environment, perceive and understand complex real-world situations and act accordingly.

  2. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    Full Text Available The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters in both the physical and application layers over Universal Mobile Telecommunication Systems (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on an Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict the video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold. First, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video. Second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
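
    To make the non-linear regression variant concrete, the sketch below fits MOS against one QoS parameter with a logistic-style curve via SciPy. The functional form and the toy data are illustrative assumptions, not the paper's model or measurements.

```python
# Toy non-linear regression of MOS against packet-loss rate.
import numpy as np
from scipy.optimize import curve_fit

def mos_model(loss, a, b, c):
    # MOS decays from roughly a toward 1 as loss increases
    return 1.0 + (a - 1.0) / (1.0 + np.exp(b * (loss - c)))

loss_rate = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])   # % packet loss (toy data)
mos_obs   = np.array([4.4, 4.1, 3.8, 3.0, 2.2, 1.6])

params, _ = curve_fit(mos_model, loss_rate, mos_obs, p0=[4.5, 0.3, 5.0])
print("predicted MOS at 8% loss:", round(mos_model(8.0, *params), 2))
```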

  3. Detection of patient movement during CBCT examination using video observation compared with an accelerometer-gyroscope tracking system.

    Science.gov (United States)

    Spin-Neto, Rubens; Matzen, Louise H; Schropp, Lars; Gotfredsen, Erik; Wenzel, Ann

    2017-02-01

    To compare video observation (VO) with a novel three-dimensional registration method, based on an accelerometer-gyroscope (AG) system, to detect patient movement during CBCT examination. The movements were further analyzed according to complexity and patient age. In 181 patients (118 females/63 males; age average 30 years, range: 9-84 years), 206 CBCT examinations were performed, which were video-recorded during examination. An AG was, at the same time, attached to the patient head to track head position in three dimensions. Three observers scored patient movement (yes/no) by VO. AG provided movement data on the x-, y- and z-axes. Thresholds for AG-based registration were defined at 0.5, 1, 2, 3 and 4 mm (movement distance). Movement detected by VO was compared with that registered by AG, according to movement complexity (uniplanar vs multiplanar, as defined by AG) and patient age (≤15, 16-30 and ≥31 years). According to AG, movement ≥0.5 mm was present in 160 (77.7%) examinations. According to VO, movement was present in 46 (22.3%) examinations. One VO-detected movement was not registered by AG. Overall, VO did not detect 71.9% of the movements registered by AG at the 0.5-mm threshold. At a movement distance ≥4 mm, 20% of the AG-registered movements were not detected by VO. Multiplanar movements such as lateral head rotation (72.1%) and nodding/swallowing (52.6%) were more often detected by VO in comparison with uniplanar movements, such as head lifting (33.6%) and anteroposterior translation (35.6%), at the 0.5-mm threshold. The prevalence of patients who move was highest in patients younger than 16 years (64.3% for VO and 92.3% for AG-based registration at the 0.5-mm threshold). AG-based movement registration resulted in a higher prevalence of patient movement during CBCT examination than VO-based registration. Also, AG-registered multiplanar movements were more frequently detected by VO than uniplanar movements. The prevalence of patients who move

  4. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    Full Text Available This paper presents the transmission of a Digital Video Broadcasting system with streaming video at 640x480 resolution under different IQ rates and modulations. In video transmission, distortion often occurs, so the received video has bad quality. Key-frame selection algorithms are flexible to changes in the video, but with these methods the temporal information of a video sequence is omitted. To minimize distortion between the original video and the received video, we added a methodology using a sequential distortion minimization algorithm. Its aim was to create a new video, better than the original video, without significant loss of content between the original and received video, fixed sequentially. The reliability of video transmission was observed based on a constellation diagram, with the best result at an IQ rate of 2 MHz and 8 QAM modulation. The best video transmission was also investigated using SEDIM (Sequential Distortion Minimization) and without SEDIM. The experimental results showed that the PSNR (Peak Signal to Noise Ratio) average of video transmission using SEDIM increased from 19.855 dB to 48.386 dB and the SSIM (Structural Similarity) average increased by 10.49%. The experimental results and comparison of the proposed method showed good performance. A USRP board was used as the RF front-end at 2.2 GHz.
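
    PSNR, the metric quoted above, can be computed directly from its definition for 8-bit frames; SSIM is available separately in scikit-image as skimage.metrics.structural_similarity. The frames below are random placeholders for illustration only.

```python
# PSNR between an original and a received 8-bit frame.
import numpy as np

def psnr(original, received, peak=255.0):
    mse = np.mean((original.astype(np.float64) - received.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                     # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example with 640x480 frames:
a = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
b = np.clip(a.astype(int) + np.random.randint(-5, 6, a.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(a, b):.2f} dB")
```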

  5. Ventilator Data Extraction with a Video Display Image Capture and Processing System.

    Science.gov (United States)

    Wax, David B; Hill, Bryan; Levin, Matthew A

    2017-06-01

    Medical hardware and software device interoperability standards are not uniform. The result of this lack of standardization is that information available on clinical devices may not be readily or freely available for import into other systems for research, decision support, or other purposes. We developed a novel system to import discrete data from an anesthesia machine ventilator by capturing images of the graphical display screen and using image processing to extract the data with off-the-shelf hardware and open-source software. We were able to successfully capture and verify live ventilator data from anesthesia machines in multiple operating rooms and store the discrete data in a relational database at a substantially lower cost than vendor-sourced solutions.
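
    A rough sketch of the screen-scraping idea follows: crop the regions of the captured display image where numeric fields appear and OCR them. It assumes OpenCV and pytesseract; the region coordinates and field names are hypothetical, and the authors' actual extraction pipeline may differ.

```python
# Extract numeric fields from a captured ventilator display image.
import cv2
import pytesseract

# Hypothetical pixel regions (x, y, w, h) for fields on the display
REGIONS = {"tidal_volume": (40, 60, 120, 40), "resp_rate": (40, 120, 120, 40)}

def extract_fields(image_path):
    frame = cv2.imread(image_path)
    values = {}
    for name, (x, y, w, h) in REGIONS.items():
        roi = frame[y:y + h, x:x + w]
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(binary, config="--psm 7")   # single-line OCR
        values[name] = text.strip()
    return values

# print(extract_fields("ventilator_screen.png"))
```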

  6. A video-based real-time adaptive vehicle-counting system for urban roads.

    Directory of Open Access Journals (Sweden)

    Fei Liu

    Full Text Available In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios.
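
    The counting pattern described above can be illustrated with a self-updating background model plus a virtual detection line: a vehicle is counted when a sufficiently large foreground blob crosses the line. The sketch below uses OpenCV's MOG2 subtractor; thresholds and the line position are illustrative assumptions, not the paper's adaptive scheme.

```python
# Background-subtraction vehicle counter with a virtual detection line.
import cv2

def count_vehicles(video_path, line_y=300, min_area=800):
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
    count, prev_centroids = 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)                              # background updates every frame
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue
            x, y, w, h = cv2.boundingRect(c)
            centroids.append(y + h // 2)
        # count blobs whose centroid crossed the virtual line since the last frame
        for cy in centroids:
            for py in prev_centroids:
                if py < line_y <= cy:
                    count += 1
                    break
        prev_centroids = centroids
    cap.release()
    return count
```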

  7. Validation of Prototype Continuous Real Time Vital Signs Video Analytics Monitoring System CCATT Viewer

    Science.gov (United States)

    2018-01-26

    We have developed an automated physiological data-organizing and information-summary system (Critical Care Air…

  8. Comparative study using video analysis and an ultrasonic measurement system to quantify mandibular movement.

    Science.gov (United States)

    Frisoli, M; Edelhoff, J M; Gersdorff, N; Nicolet, J; Braidot, A; Engelke, W

    2017-01-01

    This study provides a direct comparison between two registration systems used in quantifying mandibular opening movements: two-dimensional videography and electronic axiography, which is used as a reference. A total of 32 volunteers (age: 27.2 ± 6.8 - gender: 17 F - 15 M) participated in the study and repeated a characteristic movement, the frontal Posselt, used in the clinical evaluation of the temporomandibular joint. Frontal Posselt diagrams were reconstructed with the data gathered from both systems, which yielded acceptably similar data. Three commonly assessed parameters were obtained from each diagram and compared. These parameters were: maximum opening, right laterotrusion and left laterotrusion. Both descriptive statistics and the ANOVA test suggested that there was no significant difference between the estimated maximum opening parameter and the reference system (p = 0.217, 95% confidence). Laterotrusion values, on the other hand, appear to be overestimated by videography system and to show greater variability. Two-dimensional videography appears to be a suitable tool with resolution that is adequate for tracing mandibular movements - and opening values, in particular - for screening purposes, long-term observation, and as a quick check for dysfunction as far as frontal plane trajectories are concerned. Reliability and acceptable quality of 2D videography data, acquired in this work, show that it has clear advantages for its wide application in the dental office due to simplicity and low cost for maximum opening measurement given the usefulness of this parameter in the detection of temporomandibular disorders.

  9. A hardware/software simulation for the video tracking system of the Kuiper Airborne Observatory telescope

    Science.gov (United States)

    Boozer, G. A.; Mckibbin, D. D.; Haas, M. R.; Erickson, E. F.

    1984-01-01

    This simulator was created so that C-141 Kuiper Airborne Observatory investigators could test their Airborne Data Acquisition and Management System software on a system which is generally more accessible than the ADAMS on the plane. An investigator can currently test most of his data acquisition program using the data computer simulator in the Cave. (The Cave refers to the ground-based computer facilities for the KAO and the associated support personnel.) The main Cave computer is interfaced to the data computer simulator in order to simulate the data-Exec computer communications. However, until now, there has been no way to test the data computer interface to the tracker. The simulator described here simulates both the KAO Exec and tracker computers with software which runs on the same Hewlett-Packard (HP) computer as the investigator's data acquisition program. A simulator control box is hardwired to the computer to provide monitoring of tracker functions, to provide an operator panel similar to the real tracker, and to simulate the 180 deg phase shifting of the chopper square-wave reference with beam switching. If run in the Cave, one can use their Exec simulator and this tracker simulator.

  10. High-Speed Video System for Micro-Expression Detection and Recognition

    Directory of Open Access Journals (Sweden)

    Diana Borza

    2017-12-01

    Full Text Available Micro-expressions play an essential part in understanding non-verbal communication and deceit detection. They are involuntary, brief facial movements that are shown when a person is trying to conceal something. Automatic analysis of micro-expressions is challenging due to their low amplitude and short duration (they occur as fast as 1/15 to 1/25 of a second). We propose a full micro-expression analysis system consisting of a high-speed image acquisition setup and a software framework which can detect the frames in which the micro-expressions occurred as well as determine the type of the emerged expression. The detection and classification methods use fast and simple motion descriptors based on absolute image differences. The recognition module only involves the computation of several 2D Gaussian probabilities. The software framework was tested on two publicly available high-speed micro-expression databases and the whole system was used to acquire new data. The experiments we performed show that our solution outperforms state-of-the-art works which use more complex and computationally intensive descriptors.
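
    The absolute-difference motion descriptor mentioned above can be sketched as follows: for each frame, the mean absolute intensity change against the previous frame yields a 1-D motion signal whose short, low-amplitude bumps are micro-expression candidates. The video path and peak parameters are illustrative, not the paper's values.

```python
# Frame-differencing motion signal for micro-expression spotting.
import cv2
import numpy as np

def motion_signal(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, ref = cap.read()
    ref = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        signal.append(cv2.absdiff(gray, ref).mean())   # mean absolute difference
        ref = gray
    cap.release()
    return np.asarray(signal)

# Candidate onsets are brief peaks in the motion signal (at high frame rates a
# 1/15-1/25 s event spans only a handful of frames):
# sig = motion_signal("face_highspeed.avi")
# from scipy.signal import find_peaks
# peaks, _ = find_peaks(sig, prominence=sig.std())
```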

  11. A Pedestrian Navigation System Using Cellular Phone Video-Conferencing Functions

    Directory of Open Access Journals (Sweden)

    Akihiko Sugiura

    2012-01-01

    Full Text Available A user's position-specific field has been developed using the Global Positioning System (GPS) technology. To determine the position using cellular phones, a device was developed, in which a pedestrian navigation unit carries the GPS. However, GPS cannot specify a position in a subterranean environment or indoors, which is beyond the reach of transmitted signals. In addition, the position-specification precision of GPS, that is, its resolution, is on the order of several meters, which is deemed insufficient for pedestrians. In this study, we proposed and evaluated a technique for locating a user's 3D position by setting up a marker in the navigation space detected in the image of a cellular phone. By experiment, we verified the effectiveness and accuracy of the proposed method. Additionally, we improved the positional precision because we measured the position distance using numerous markers.

  12. A new approach to obtain metric data from video surveillance: Preliminary evaluation of a low-cost stereo-photogrammetric system.

    Science.gov (United States)

    Russo, Paolo; Gualdi-Russo, Emanuela; Pellegrinelli, Alberto; Balboni, Juri; Furini, Alessio

    2017-02-01

    Using an interdisciplinary approach the authors demonstrate the possibility to obtain reliable anthropometric data of a subject by means of a new video surveillance system. In general the use of current video surveillance systems provides law enforcement with useful data to solve many crimes. Unfortunately the quality of the images and the way in which they are taken often makes it very difficult to judge the compatibility between suspect and perpetrator. In this paper, the authors present the results obtained with a low-cost photogrammetric video surveillance system based on a pair of common surveillance cameras synchronized with each other. The innovative aspect of the system is that it allows estimation with considerable accuracy not only of body height (error 0.1-3.1cm, SD 1.8-4.5cm) but also of other anthropometric characters of the subject, consequently with better determination of the biological profile and greatly increased effectiveness of the judgment of compatibility. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Construction and operation of a high-speed, high-precision eye tracker for tight stimulus synchronization and real-time gaze monitoring in human and animal subjects

    Directory of Open Access Journals (Sweden)

    Reza Farivar

    2016-09-01

    Full Text Available Measurements of the fast and precise movements of the eye—critical to many vision, oculomotor, and animal behaviour studies—can be made non-invasively by video oculography. The protocol here describes the construction and operation of a research-grade video oculography system with 0.1° precision over the full typical viewing range at over 450 Hz, with tight synchronization to stimulus onset. The protocol consists of three stages: (1) system assembly, (2) calibration for both cooperative and minimally cooperative subjects (e.g., animals or infants), and (3) gaze monitoring and recording.
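
    At the heart of any video-oculography pipeline of this kind is pupil localization. A minimal sketch is shown below: threshold the dark pupil in an IR eye image, clean the mask with morphology, and take the centroid of the largest contour. The threshold value is an assumption and would normally be tuned or chosen adaptively.

```python
# Pupil-centre localization from an infrared eye image.
import cv2

def pupil_center(eye_gray, thresh=40):
    _, mask = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY_INV)   # pupil is darkest
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # (x, y) centroid in pixels

# center = pupil_center(cv2.imread("eye_ir.png", cv2.IMREAD_GRAYSCALE))
```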

  14. Are the effects of Unreal violent video games pronounced when playing with a virtual reality system?

    Science.gov (United States)

    Arriaga, Patrícia; Esteves, Francisco; Carneiro, Paula; Monteiro, Maria Benedicta

    2008-01-01

    This study was conducted to analyze the short-term effects of violent electronic games, played with or without a virtual reality (VR) device, on the instigation of aggressive behavior. Physiological arousal (heart rate (HR)), priming of aggressive thoughts, and state hostility were also measured to test their possible mediation on the relationship between playing the violent game (VG) and aggression. The participants--148 undergraduate students--were randomly assigned to four treatment conditions: two groups played a violent computer game (Unreal Tournament), and the other two a non-violent game (Motocross Madness), half with a VR device and the remaining participants on the computer screen. In order to assess the game effects the following instruments were used: a BIOPAC System MP100 to measure HR, an Emotional Stroop task to analyze the priming of aggressive and fear thoughts, a self-report State Hostility Scale to measure hostility, and a competitive reaction-time task to assess aggressive behavior. The main results indicated that the violent computer game had effects on state hostility and aggression. Although no significant mediation effect could be detected, regression analyses showed an indirect effect of state hostility between playing a VG and aggression. Copyright 2008 Wiley-Liss, Inc.

  15. An embedded system for face classification in infrared video using sparse representation

    Science.gov (United States)

    Saavedra M., Antonio; Pezoa, Jorge E.; Zarkesh-Ha, Payman; Figueroa, Miguel

    2017-09-01

    We propose a platform for robust face recognition in Infrared (IR) images using Compressive Sensing (CS). In line with CS theory, the classification problem is solved using a sparse representation framework, where test images are modeled by means of a linear combination of the training set. Because the training set constitutes an over-complete dictionary, we identify new images by finding their sparsest representation based on the training set, using standard l1-minimization algorithms. Unlike conventional face-recognition algorithms, feature extraction is performed using random projections with a precomputed binary matrix, as proposed in the CS literature. This random sampling reduces the effects of noise and occlusions such as facial hair, eyeglasses, and disguises, which are notoriously challenging in IR images. Thus, the performance of our framework is robust to these noise and occlusion factors, achieving an average accuracy of approximately 90% when the UCHThermalFace database is used for training and testing purposes. We implemented our framework on a high-performance embedded digital system, where the computation of the sparse representation of IR images was performed by dedicated hardware using a deeply pipelined architecture on a Field-Programmable Gate Array (FPGA).
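
    A compact sparse-representation-classification sketch is given below: project images with a fixed random binary matrix, code the test sample as a sparse combination of the projected training set (l1-regularized here via scikit-learn's Lasso as a stand-in for a generic l1 solver), and assign the class with the smallest reconstruction residual. Data shapes and the toy dataset are illustrative, not the paper's.

```python
# Sparse-representation classification (SRC) with random binary projections.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def src_classify(train, labels, test, n_proj=128, alpha=0.01):
    d = train.shape[1]
    phi = rng.integers(0, 2, size=(n_proj, d)).astype(float)    # random binary projection
    A = phi @ train.T                                           # projected dictionary (n_proj x N)
    A /= np.linalg.norm(A, axis=0, keepdims=True)
    y = phi @ test
    y /= np.linalg.norm(y)

    x = Lasso(alpha=alpha, max_iter=5000).fit(A, y).coef_       # sparse code over training samples
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)                      # keep coefficients of class c only
        residuals[c] = np.linalg.norm(y - A @ xc)
    return min(residuals, key=residuals.get)                    # class with smallest residual

# Toy usage: 20 training images of 32x32 pixels, 2 classes
train = rng.random((20, 1024)); labels = np.repeat([0, 1], 10)
print(src_classify(train, labels, rng.random(1024)))
```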

  16. Wii, Kinect, and Move. Heart Rate, Oxygen Consumption, Energy Expenditure, and Ventilation due to Different Physically Active Video Game Systems in College Students

    OpenAIRE

    SCHEER, KRISTA S.; SIEBRANT, SARAH M.; Brown, Gregory A.; Brandon S. Shaw; Shaw, Ina

    2014-01-01

    Nintendo Wii, Sony Playstation Move, and Microsoft XBOX Kinect are home video gaming systems that involve player movement to control on-screen game play. Numerous investigations have demonstrated that playing Wii is moderate physical activity at best, but Move and Kinect have not been as thoroughly investigated. The purpose of this study was to compare heart rate, oxygen consumption, and ventilation while playing the games Wii Boxing, Kinect Boxing, and Move Gladiatorial Combat. Heart rate, o...

  17. 4K Video-Laryngoscopy and Video-Stroboscopy: Preliminary Findings.

    Science.gov (United States)

    Woo, Peak

    2016-01-01

    4K video is a new format. At 3840 × 2160 resolution, it has 4 times the resolution of standard 1080 high definition (HD) video. Magnification can be done without loss of resolution. This study uses 4K video for video-stroboscopy. Forty-six patients were examined by conventional video-stroboscopy (digital 3 chip CCD) and compared with 4K video-stroboscopy. The video was recorded on a Blackmagic 4K cinema production camera in CinemaDNG RAW format. The video was played back on a 4K monitor and compared to standard video. Pathological conditions included: polyps, scar, cysts, cancer, sulcus, and nodules. Successful 4K video recordings were achieved in all subjects using a 70° rigid endoscope. The camera system is bulky. The examination is performed similarly to standard video-stroboscopy. Playback requires a 4K monitor. As expected, the images were far clearer in detail than standard video. Stroboscopy video using the 4K camera was consistently able to show more detail. Two patients had diagnosis change after 4K viewing. 4K video is an exciting new technology that can be applied to laryngoscopy. It allows for cinematic 4K quality recordings. Both continuous and stroboscopic light can be used for visualization. Its clinical utility is feasible, but usefulness must be proven. © The Author(s) 2015.

  18. Telemetry and Communication IP Video Player

    Science.gov (United States)

    OFarrell, Zachary L.

    2011-01-01

    Aegis Video Player is the name of the video over IP system for the Telemetry and Communications group of the Launch Services Program. Aegis' purpose is to display video streamed over a network connection to be viewed during launches. To accomplish this task, a VLC ActiveX plug-in was used in C# to provide the basic capabilities of video streaming. The program was then customized to be used during launches. The VLC plug-in can be configured programmatically to display a single stream, but for this project multiple streams needed to be accessed. To accomplish this, an easy to use, informative menu system was added to the program to enable users to quickly switch between videos. Other features were added to make the player more useful, such as watching multiple videos and watching a video in full screen.

  19. Good clean fun? A content analysis of profanity in video games and its prevalence across game systems and ratings.

    Science.gov (United States)

    Ivory, James D; Williams, Dmitri; Martins, Nicole; Consalvo, Mia

    2009-08-01

    Although violent video game content and its effects have been examined extensively by empirical research, verbal aggression in the form of profanity has received less attention. Building on preliminary findings from previous studies, an extensive content analysis of profanity in video games was conducted using a sample of the 150 top-selling video games across all popular game platforms (including home consoles, portable consoles, and personal computers). The frequency of profanity, both in general and across three profanity categories, was measured and compared to games' ratings, sales, and platforms. Generally, profanity was found in about one in five games and appeared primarily in games rated for teenagers or above. Games containing profanity, however, tended to contain it frequently. Profanity was not found to be related to games' sales or platforms.

  20. Video streaming in the Wild West

    Directory of Open Access Journals (Sweden)

    Helen Gail Prosser

    2006-11-01

    Full Text Available Northern Lakes College in north-central Alberta is the first post-secondary institution in Canada to use the Media on Demand digital video system to stream large video files between dispersed locations (Karlsen). Staff and students at distant locations of Northern Lakes College are now viewing more than 350 videos using video streaming technology. This has been made possible by SuperNet, a high capacity broadband network that connects schools, hospitals, libraries and government offices throughout the province of Alberta (Alberta SuperNet). This article describes the technical process of implementing video streaming at Northern Lakes College from March 2005 until March 2006.

  1. Comparative population assessments of Nautilus sp. in the Philippines, Australia, Fiji, and American Samoa using baited remote underwater video systems.

    Directory of Open Access Journals (Sweden)

    Gregory J Barord

    Full Text Available The extant species of Nautilus and Allonautilus (Cephalopoda) inhabit fore-reef slope environments across a large geographic area of the tropical western Pacific and eastern Indian Oceans. While many aspects of their biology and behavior are now well-documented, uncertainties concerning their current populations and ecological role in the deeper, fore-reef slope environments remain. Given the historical to current day presence of nautilus fisheries at various locales across the Pacific and Indian Oceans, a comparative assessment of the current state of nautilus populations is critical to determine whether conservation measures are warranted. We used baited remote underwater video systems (BRUVS) to make quantitative photographic records as a means of estimating population abundance of Nautilus sp. at sites in the Philippine Islands, American Samoa, Fiji, and along an approximately 125 km transect on the fore reef slope of the Great Barrier Reef from east of Cairns to east of Lizard Island, Australia. Each site was selected based on its geography, historical abundance, and the presence (Philippines) or absence (other sites) of Nautilus fisheries. The results from these observations indicate that there are significantly fewer nautiluses observable with this method in the Philippine Islands site. While there may be multiple possibilities for this difference, the most parsimonious is that the Philippine Islands population has been reduced due to fishing. When compared to historical trap records from the same site the data suggest there have been far more nautiluses at this site in the past. The BRUVS proved to be a valuable tool to measure Nautilus abundance in the deep sea (300-400 m) while reducing our overall footprint on the environment.

  2. Semiologic classification of psychogenic non epileptic seizures (PNES) based on video EEG analysis: do we need new classification systems?

    Science.gov (United States)

    Wadwekar, Vaibhav; Nair, Pradeep Pankajakshan; Murgai, Aditya; Thirunavukkarasu, Sibi; Thazhath, Harichandrakumar Kottyen

    2014-03-01

    Different studies have described useful signs to diagnose psychogenic non-epileptic seizure (PNES). A few authors have tried to describe the semiologic groups among PNES patients; each group consisting of combination of features. But there is no uniformity of nomenclature among these studies. Our aim was to find out whether the objective classification system proposed by Hubsch et al. was useful and adequate to classify PNES patient population from South India. We retrospectively analyzed medical records and video EEG monitoring data of patients, recorded during 3 year period from June 2010 to July 2013. We observed the semiologic features of each PNES episode and tried to group them strictly adhering to Hubsch et al. classification. Minor modifications were made to include patients who were left unclassified. A total of 65 patients were diagnosed to have PNES during this period, out of which 11 patients were excluded due to inadequate data. We could classify 42(77.77%) patients without modifying the defining criteria of the Hubsch et al. groups. With minor modification we could classify 94.96% patients. The modified groups with patient distribution are as follows: Class 1--dystonic attacks with primitive gestural activities [3(5.6%)]. Class 2 – paucikinetic attacks with or without preserved responsiveness [5(9.3%)]. Class 3--pseudosyncope with or without hyperventilation [21(38.9%)]. Class 4--hyperkinetic prolonged attacks with hyperventilation, involvement of limbs and/or trunk [14(25.9%)]. Class 5--axial dystonic attacks [8(14.8%)]. Class 6--unclassified type [3(5.6%)]. This study demonstrates that the Hubsch's classification with minor modifications is useful and adequate to classify PNES patients from South India. Copyright © 2013 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.

  3. An integrated multispectral video and environmental monitoring system for the study of coastal processes and the support of beach management operations

    Science.gov (United States)

    Ghionis, George; Trygonis, Vassilis; Karydis, Antonis; Vousdoukas, Michalis; Alexandrakis, George; Drakopoulos, Panos; Amdreadis, Olympos; Psarros, Fotis; Velegrakis, Antonis; Poulos, Serafim

    2016-04-01

    Effective beach management requires environmental assessments that are based on sound science, are cost-effective and are available to beach users and managers in an accessible, timely and transparent manner. The most common problems are: 1) The available field data are scarce and of sub-optimal spatio-temporal resolution and coverage, 2) our understanding of local beach processes needs to be improved in order to accurately model/forecast beach dynamics under a changing climate, and 3) the information provided by coastal scientists/engineers in the form of data, models and scientific interpretation is often too complicated to be of direct use by coastal managers/decision makers. A multispectral video system has been developed, consisting of one or more video cameras operating in the visible part of the spectrum, a passive near-infrared (NIR) camera, an active NIR camera system, a thermal infrared camera and a spherical video camera, coupled with innovative image processing algorithms and a telemetric system for the monitoring of coastal environmental parameters. The complete system has the capability to record, process and communicate (in quasi-real time) high frequency information on shoreline position, wave breaking zones, wave run-up, erosion hot spots along the shoreline, nearshore wave height, turbidity, underwater visibility, wind speed and direction, air and sea temperature, solar radiation, UV radiation, relative humidity, barometric pressure and rainfall. An innovative, remotely-controlled interactive visual monitoring system, based on the spherical video camera (with 360°field of view), combines the video streams from all cameras and can be used by beach managers to monitor (in real time) beach user numbers, flow activities and safety at beaches of high touristic value. The high resolution near infrared cameras permit 24-hour monitoring of beach processes, while the thermal camera provides information on beach sediment temperature and moisture, can

  4. A Prototyping Virtual Socket System-On-Platform Architecture with a Novel ACQPPS Motion Estimator for H.264 Video Encoding Applications

    Directory of Open Access Journals (Sweden)

    Qiu Yifeng

    2009-01-01

    Full Text Available H.264 delivers the streaming video in high quality for various applications. The coding tools involved in H.264, however, make its video codec implementation very complicated, raising the need for algorithm optimization, and hardware acceleration. In this paper, a novel adaptive crossed quarter polar pattern search (ACQPPS algorithm is proposed to realize an enhanced inter prediction for H.264. Moreover, an efficient prototyping system-on-platform architecture is also presented, which can be utilized for a realization of H.264 baseline profile encoder with the support of integrated ACQPPS motion estimator and related video IP accelerators. The implementation results show that ACQPPS motion estimator can achieve very high estimated image quality comparable to that from the full search method, in terms of peak signal-to-noise ratio (PSNR, while keeping the complexity at an extremely low level. With the integrated IP accelerators and optimized techniques, the proposed system-on-platform architecture sufficiently supports the H.264 real-time encoding with the low cost.
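
    For readers unfamiliar with motion estimation, the sketch below shows what such an estimator computes for inter prediction: the displacement of a block that minimizes the sum of absolute differences (SAD) against a reference frame. It is a plain exhaustive search over a small window, not the ACQPPS search pattern proposed in the paper; block and search sizes are illustrative.

```python
# Exhaustive SAD-based block matching for one macroblock.
import numpy as np

def best_motion_vector(ref, cur, bx, by, block=16, search=8):
    """Motion vector (dy, dx) minimizing SAD for the block at (by, bx) in `cur`."""
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()              # sum of absolute differences
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

# ref = previous luma frame, cur = current luma frame (NumPy uint8 arrays)
```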

  5. Video signal processing system uses gated current mode switches to perform high speed multiplication and digital-to-analog conversion

    Science.gov (United States)

    Gilliland, M. G.; Rougelot, R. S.; Schumaker, R. A.

    1966-01-01

    Video signal processor uses special-purpose integrated circuits with nonsaturating current mode switching to accept texture and color information from a digital computer in a visual spaceflight simulator and to combine these, for display on color CRT with analog information concerning fading.

  6. [A comparative study of Da Vinci robot system with video-assisted thoracoscopy in the surgical treatment of mediastinal lesions].

    Science.gov (United States)

    Ding, Renquan; Tong, Xiangdong; Xu, Shiguang; Zhang, Dakun; Gao, Xin; Teng, Hong; Qu, Jiaqi; Wang, Shumin

    2014-07-20

    In recent years, application of the da Vinci robot system in the surgical treatment of intrathoracic mediastinal diseases has matured. The aim of this study is to summarize the clinical data on mediastinal lesions at the General Hospital of Shenyang Military Region over the past 4 years and to analyze the treatment effect and promising applications of the da Vinci robot system in the surgical treatment of mediastinal lesions. 203 cases of mediastinal lesions were collected from the General Hospital of Shenyang Military Region between 2010 and 2013. These patients were divided into two groups, da Vinci and video-assisted thoracoscopic surgery (VATS), according to the selected treatment. Time in surgery, intraoperative blood loss, postoperative drainage within three days after surgery, the period of bearing drainage tubes, hospital stay and hospitalization expense were then compared. All patients were operated on successfully, postoperative recovery was good and there were no perioperative deaths. Time in surgery was 82 (20-320) min in the robot group and 89 (35-360) min in the thoracoscopic group (P>0.05). Intraoperative blood loss was 10 (1-100) mL in the robot group and 50 (3-1,500) mL in the thoracoscopic group. Postoperative drainage within three days after surgery was 215 (0-2,220) mL in the robot group and 350 (50-1,810) mL in the thoracoscopic group. The period of bearing drainage tubes after surgery was 3 (0-10) d in the robot group and 5 (1-18) d in the thoracoscopic group. Hospital stay was 7 (2-15) d in the robot group and 9 (2-50) d in the thoracoscopic group. Hospitalization expense was (18,983.6±4,461.2) RMB in the robot group and (9,351.9±2,076.3) RMB in the thoracoscopic group (all P<0.001). The da Vinci robot system is safe and efficient in the treatment of mediastinal lesions compared with the video-assisted thoracoscopic approach, even though its expense is higher.

  7. NEI You Tube Videos: Amblyopia

    Medline Plus


  8. NEI You Tube Videos: Amblyopia

    Medline Plus


  9. NEI You Tube Videos: Amblyopia

    Science.gov (United States)


  10. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available This series of five videos was designed ...

  11. An Innovative Streaming Video System With a Point-of-View Head Camera Transmission of Surgeries to Smartphones and Tablets: An Educational Utility.

    Science.gov (United States)

    Chaves, Rafael Oliveira; de Oliveira, Pedro Armando Valente; Rocha, Luciano Chaves; David, Joacy Pedro Franco; Ferreira, Sanmari Costa; Santos, Alex de Assis Santos Dos; Melo, Rômulo Müller Dos Santos; Yasojima, Edson Yuzur; Brito, Marcus Vinicius Henriques

    2017-10-01

    In order to engage medical students and residents from public health centers to utilize the telemedicine features of surgery on their own smartphones and tablets as an educational tool, an innovative streaming system was developed with the purpose of streaming live footage from open surgeries to smartphones and tablets, allowing the visualization of the surgical field from the surgeon's perspective. The current study aims to describe the results of an evaluation on level 1 of Kirkpatrick's Model for Evaluation of the streaming system usage during gynecological surgeries, based on the perception of medical students and gynecology residents. The setup consisted of live video streaming (from the surgeon's point of view) of gynecological surgeries to smartphones and tablets, one for each volunteer. The volunteers were able to connect to the local wireless network, created by the streaming system, through an access password and watch the video transmission on a web browser on their smartphones. Then, they answered a Likert-type questionnaire containing 14 items about the educational applicability of the streaming system, as well as comparing it to watching an in loco procedure. This study is formally approved by the local ethics commission (Certificate No. 53175915.7.0000.5171/2016). Twenty-one volunteers participated, totaling 294 answered items, of which 94.2% agreed with the item statements, 4.1% were neutral, and only 1.7% corresponded to negative impressions. Cronbach's α was .82, which represents a good reliability level. Spearman's coefficients were highly significant in 4 comparisons and moderately significant in the other 20 comparisons. This study presents a local streaming video system of live surgeries to smartphones and tablets and shows its educational utility, low cost, and simple usage, which offers convenience and satisfactory image resolution, thus being potentially applicable in surgical teaching.
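
    Cronbach's alpha, the reliability coefficient reported above, follows directly from its standard formula (k items; item-score variances versus the variance of the total score). The sketch below applies it to a toy response matrix, not the study's data.

```python
# Cronbach's alpha for a Likert-type questionnaire.
import numpy as np

def cronbach_alpha(responses):
    """responses: 2-D array, rows = respondents, columns = questionnaire items."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]
    item_var = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

toy = np.random.default_rng(1).integers(3, 6, size=(21, 14))   # 21 respondents, 14 items (toy data)
print(round(cronbach_alpha(toy), 2))
```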

  12. An integrated video-analysis software system designed for movement detection and sleep analysis. Validation of a tool for the behavioural study of sleep.

    Science.gov (United States)

    Scatena, Michele; Dittoni, Serena; Maviglia, Riccardo; Frusciante, Roberto; Testani, Elisa; Vollono, Catello; Losurdo, Anna; Colicchio, Salvatore; Gnoni, Valentina; Labriola, Claudio; Farina, Benedetto; Pennisi, Mariano Alberto; Della Marca, Giacomo

    2012-02-01

    The aim of the present study was to develop and validate a software tool for the detection of movements during sleep, based on the automated analysis of video recordings. This software aims to detect and quantify movements and to evaluate periods of sleep and wake. We applied an open-source software package, previously distributed on the web (Zoneminder, ZM), meant for video surveillance. A validation study was performed: computed movement analysis was compared with two standardised, 'gold standard' methods for the analysis of sleep-wake cycles: actigraphy and laboratory-based video-polysomnography. Sleep variables evaluated by ZM were not different from those measured by traditional sleep-scoring systems. Bland-Altman plots showed an overlap between the scores obtained with ZM, PSG and actigraphy, with a slight tendency of ZM to overestimate nocturnal awakenings. ZM showed a good degree of accuracy both with respect to PSG (79.9%) and actigraphy (83.1%); and had very high sensitivity (ZM vs. PSG: 90.4%; ZM vs. actigraphy: 89.5%) and relatively lower specificity (ZM vs. PSG: 42.3%; ZM vs. actigraphy: 65.4%). The computer-assisted motion analysis is reliable and reproducible, and it can allow a reliable estimate of some sleep and wake parameters. The motion-based sleep analysis shows a trend to overestimate wakefulness. The possibility to measure sleep from video recordings may be useful in those clinical and experimental conditions in which traditional PSG studies may not be performed. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  13. Video Design Games

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer

    We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop...

  14. Characterization of social video

    Science.gov (United States)

    Ostrowski, Jeffrey R.; Sarhan, Nabil J.

    2009-01-01

    The popularity of social media has grown dramatically over the World Wide Web. In this paper, we analyze the video popularity distribution of well-known social video websites (YouTube, Google Video, and the AOL Truveo Video Search engine) and characterize their workload. We identify trends in the categories, lengths, and formats of those videos, as well as characterize the evolution of those videos over time. We further provide an extensive analysis and comparison of video content amongst the main regions of the world.

  15. Video scan rate conversion method and apparatus for achieving same

    Science.gov (United States)

    Mills, George T.

    1992-12-01

    In a video system, a video signal operates at a vertical scan rate to generate a first video frame characterized by a first number of lines per frame. A method and apparatus are provided to convert the first video frame into a second video frame characterized by a second number of lines per frame. The first video frame is stored at the vertical scan rate as digital samples. A portion of the stored digital samples from each line of the first video frame are retrieved at the vertical scan rate. The number of digital samples in the retrieved portion from each line of the first video frame is governed by a ratio equal to the second number divided by the first number, such that the retrieved portion from the first video frame is the second video frame.
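
    The line-ratio rule in the claim can be illustrated with a short sketch: retain a fraction (second number of lines divided by first) of the lines from each stored frame to form the output frame. Frame sizes here are illustrative, and a practical converter would typically interpolate rather than simply drop lines.

```python
# Line-count conversion of a stored video frame by the ratio second/first.
import numpy as np

def convert_frame(frame, lines_out):
    lines_in = frame.shape[0]
    ratio = lines_out / lines_in                       # e.g. 480 / 525
    # indices of the input lines whose samples are retained
    keep = np.floor(np.arange(lines_out) / ratio).astype(int)
    keep = np.clip(keep, 0, lines_in - 1)
    return frame[keep, :]

src = np.random.randint(0, 256, (525, 720), dtype=np.uint8)   # 525-line stored frame (toy data)
dst = convert_frame(src, 480)                                  # 480-line output frame
print(src.shape, "->", dst.shape)
```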

  16. Video time encoding machines.

    Science.gov (United States)

    Lazar, Aurel A; Pnevmatikakis, Eftychios A

    2011-03-01

    We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value.
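
    The encoding half of such an architecture can be illustrated with a toy integrate-and-fire encoder for a one-dimensional signal (the bias, threshold and reset rule are arbitrary choices here; the paper's filter bank and recovery algorithm are not reproduced):

    ```python
    # Toy integrate-and-fire time encoder: integrate (signal + bias) and emit a
    # spike, resetting by the threshold, whenever the integral crosses it.
    import numpy as np

    def integrate_and_fire(signal, dt, threshold=1.0, bias=2.0):
        v, t, spikes = 0.0, 0.0, []
        for x in signal:
            v += (x + bias) * dt
            t += dt
            if v >= threshold:
                spikes.append(t)
                v -= threshold            # reset with feedback
        return np.array(spikes)

    dt = 1e-3
    t = np.arange(0, 1, dt)
    spike_times = integrate_and_fire(np.sin(2 * np.pi * 5 * t), dt)
    ```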

  17. Video visual analytics

    OpenAIRE

    Höferlin, Markus Johannes

    2013-01-01

    The amount of video data recorded world-wide is tremendously growing and has already reached hardly manageable dimensions. It originates from a wide range of application areas, such as surveillance, sports analysis, scientific video analysis, surgery documentation, and entertainment, and its analysis represents one of the challenges in computer science. The vast amount of video data renders manual analysis by watching the video data impractical. However, automatic evaluation of video material...

  18. Upgrades to NDSF Vehicle Camera Systems and Development of a Prototype System for Migrating and Archiving Video Data in the National Deep Submergence Facility Archives at WHOI

    Science.gov (United States)

    Fornari, D.; Howland, J.; Lerner, S.; Gegg, S.; Walden, B.; Bowen, A.; Lamont, M.; Kelley, D.

    2003-12-01

    We report on an archiving effort to transfer video footage currently on Hi-8 and VHS tape to digital media (DVD). At the same time as this is being done, frame grab imagery at reasonable resolution (640x480) at 30 sec. intervals will be compiled and the images will be integrated, as much as possible, with vehicle attitude/navigation data and provided to the user community in a web-browser format, such as has already been done for the recent Jason and Alvin frame grabbed imagery. The frame-grabbed images will be tagged with time, thereby permitting integration of vehicle attitude and navigation data once that is available. In order to prototype this system, we plan to utilize data from the East Pacific Rise and Juan de Fuca Ridge which are field areas selected by the community as Ridge2000 Integrated Study Sites. There are over 500 Alvin dives in both these areas and having frame-grabbed, synoptic views of the terrains covered during those dives will be invaluable for scientific and outreach use as part of Ridge2000. We plan to coordinate this activity with the Ridge2000 Data Management Office at LDEO.
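
    The frame-grabbing step lends itself to a short sketch. The snippet below (illustrative only; the file naming, resolution and tooling are assumptions rather than the archive's actual workflow) saves a 640x480 still every 30 seconds of video, with the elapsed time encoded in the filename so it can later be matched to vehicle attitude and navigation data:

    ```python
    # Grab a down-sized still frame every N seconds and tag it with elapsed time.
    import cv2

    def grab_frames(video_path, every_s=30, out_pattern="frame_{:06d}s.jpg"):
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        step = int(round(fps * every_s))
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % step == 0:
                seconds = int(index / fps)
                small = cv2.resize(frame, (640, 480))
                cv2.imwrite(out_pattern.format(seconds), small)  # time tag in filename
            index += 1
        cap.release()
    ```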

  19. User-centric video delay measurements

    NARCIS (Netherlands)

    A.J. Jansen (Jack); D.C.A. Bulterman (Dick)

    2013-01-01

    The complexities and physical constraints associated with video transmission make the introduction of video playout delays unavoidable. Tuning systems to reduce delay requires an ability to effectively and easily gather delay metrics on a potentially wide range of systems. In order to

  20. Investigating helmet promotion for cyclists: results from a randomised study with observation of behaviour, using a semi-automatic video system.

    Directory of Open Access Journals (Sweden)

    Aymery Constant

    Full Text Available INTRODUCTION: Half of fatal injuries among bicyclists are head injuries. While helmet use is likely to provide protection, their use often remains rare. We assessed the influence of strategies for promotion of helmet use with direct observation of behaviour by a semi-automatic video system. METHODS: We performed a single-centre randomised controlled study, with 4 balanced randomisation groups. Participants were non-helmet users, aged 18-75 years, recruited at a loan facility in the city of Bordeaux, France. After completing a questionnaire investigating their attitudes towards road safety and helmet use, participants were randomly assigned to three groups with the provision of "helmet only", "helmet and information" or "information only", and to a fourth control group. Bikes were labelled with a colour code designed to enable observation of helmet use by participants while cycling, using a 7-spot semi-automatic video system located in the city. A total of 1557 participants were included in the study. RESULTS: Between October 15th 2009 and September 28th 2010, 2621 cyclists' movements, made by 587 participants, were captured by the video system. Participants seen at least once with a helmet amounted to 6.6% of all observed participants, with higher rates in the two groups that received a helmet at baseline. The likelihood of observed helmet use was significantly increased among participants of the "helmet only" group (OR = 7.73 [2.09-28.5]) and this impact faded within six months following the intervention. No effect of information delivery was found. CONCLUSION: Providing a helmet may be of value, but will not be sufficient to achieve high rates of helmet wearing among adult cyclists. Integrated and repeated prevention programmes will be needed, including free provision of helmets, but also information on the protective effect of helmets and strategies to increase peer and parental pressure.

  1. Investigating helmet promotion for cyclists: results from a randomised study with observation of behaviour, using a semi-automatic video system.

    Science.gov (United States)

    Constant, Aymery; Messiah, Antoine; Felonneau, Marie-Line; Lagarde, Emmanuel

    2012-01-01

    Half of fatal injuries among bicyclists are head injuries. While helmet use is likely to provide protection, their use often remains rare. We assessed the influence of strategies for promotion of helmet use with direct observation of behaviour by a semi-automatic video system. We performed a single-centre randomised controlled study, with 4 balanced randomisation groups. Participants were non-helmet users, aged 18-75 years, recruited at a loan facility in the city of Bordeaux, France. After completing a questionnaire investigating their attitudes towards road safety and helmet use, participants were randomly assigned to three groups with the provision of "helmet only", "helmet and information" or "information only", and to a fourth control group. Bikes were labelled with a colour code designed to enable observation of helmet use by participants while cycling, using a 7-spot semi-automatic video system located in the city. A total of 1557 participants were included in the study. Between October 15th 2009 and September 28th 2010, 2621 cyclists' movements, made by 587 participants, were captured by the video system. Participants seen at least once with a helmet amounted to 6.6% of all observed participants, with higher rates in the two groups that received a helmet at baseline. The likelihood of observed helmet use was significantly increased among participants of the "helmet only" group (OR = 7.73 [2.09-28.5]) and this impact faded within six months following the intervention. No effect of information delivery was found. Providing a helmet may be of value, but will not be sufficient to achieve high rates of helmet wearing among adult cyclists. Integrated and repeated prevention programmes will be needed, including free provision of helmets, but also information on the protective effect of helmets and strategies to increase peer and parental pressure.

  2. Evaluation of the DTBird video-system at the Smoela wind-power plant. Detection capabilities for capturing near-turbine avian behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Roel, May; Hamre, Oeyvind; Vang, Roald; Nygaard, Torgeir

    2012-07-01

    Collisions between birds and wind turbines can be a problem at wind-power plants both onshore and offshore, and the presence of endangered bird species or proximity to key functional bird areas can have a major impact on the choice of site or location of wind turbines. There is international consensus that one of the main challenges in the development of measures to reduce bird collisions is the lack of good methods for assessing the efficacy of interventions. In order to be better able to assess the efficacy of mortality-reducing measures, Statkraft wishes to find a system that can be operated under Norwegian conditions and that renders objective and quantitative information on collisions and near-flying birds. DTBird, developed by Liquen Consultoria Ambiental S.L., is such a system, based on video-recording bird flights near turbines during the daylight period (light levels >200 lux). DTBird is a self-contained system developed to detect flying birds and to take programmed actions (i.e. warning, dissuasion, collision registration, and turbine stop control) linked to real-time bird detection. This report evaluates how well the DTBird system is able to detect birds in the vicinity of a wind turbine, and assesses to which extent it can be utilized to study near-turbine bird flight behaviour and possible deterrence. The evaluation was based on the video sequences recorded with the DTBird systems installed at turbine 21 and turbine 42 at the Smoela wind-power plant between March 2 2012 and September 30 2012, together with GPS telemetry data on white-tailed eagles and avian radar data. The average number of falsely triggered video sequences (false positive rate) was 1.2 per day, and during daytime the DTBird system recorded between 76% and 96% of all bird flights in the vicinity of the turbines. Visually estimated distances of recorded bird flights in the video sequences were in general assessed to be farther from the turbines compared to the distance settings used within

  3. Making Sense of Video Analytics: Lessons Learned from Clickstream Interactions, Attitudes, and Learning Outcome in a Video-Assisted Course

    Directory of Open Access Journals (Sweden)

    Michail N. Giannakos

    2015-02-01

    Full Text Available Online video lectures have been considered an instructional medium for various pedagogic approaches, such as the flipped classroom and open online courses. In comparison to other instructional media, online video affords the opportunity for recording student clickstream patterns within a video lecture. Video analytics within lecture videos may provide insights into student learning performance and inform the improvement of video-assisted teaching tactics. Nevertheless, video analytics are not accessible to learning stakeholders, such as researchers and educators, mainly because online video platforms do not broadly share the interactions of the users with their systems. For this purpose, we have designed an open-access video analytics system for use in a video-assisted course. In this paper, we present a longitudinal study, which provides valuable insights through the lens of the collected video analytics. In particular, we found that there is a relationship between video navigation (repeated views) and the level of cognition/thinking required for a specific video segment. Our results indicated that learning performance progress was slightly improved and stabilized after the third week of the video-assisted course. We also found that attitudes regarding easiness, usability, usefulness, and acceptance of this type of course remained at the same levels throughout the course. Finally, we triangulate analytics from diverse sources, discuss them, and provide the lessons learned for further development and refinement of video-assisted courses and practices.

  4. Online scene change detection of multicast (MBone) video

    Science.gov (United States)

    Zhou, Wensheng; Shen, Ye; Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

    Many multimedia applications, such as multimedia data management systems and communication systems, require efficient representation of multimedia content. Thus semantic interpretation of video content has been a popular research area. Currently, most content-based video representation involves the segmentation of video based on key frames, which are generated using scene change detection techniques as well as camera/object motion. Video features can then be extracted from key frames. However, most such research performs off-line video processing, in which the whole video scope is known a priori, which allows multiple scans of the stored video files during processing. In comparison, relatively little research has been done in the area of on-line video processing, which is crucial in video communication applications such as on-line collaboration, news broadcasts and so on. Our research investigates on-line real-time scene change detection of multicast video over the Internet. Our on-line processing system is designed to meet the requirements of real-time video multicasting over the Internet and to utilize the successful video parsing techniques available today. The proposed algorithms extract key frames from video bitstreams sent through the MBone network, and the extracted key frames are multicast as annotations or metadata over a separate channel to assist in content filtering such as that anticipated to be in use by on-line filtering proxies in the Internet. The performance of the proposed algorithms is demonstrated and discussed in this paper.
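
    A common baseline for on-line shot-cut detection, offered here only as a sketch of the general technique and not the authors' MBone-specific algorithm (the histogram size and threshold are assumptions), compares colour histograms of consecutive frames and declares a key frame when the distance jumps:

    ```python
    # Online key-frame extraction by colour-histogram distance between frames.
    import cv2

    def key_frames(video_path, cut_threshold=0.5):
        cap = cv2.VideoCapture(video_path)
        prev_hist, index = None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                                [0, 256, 0, 256, 0, 256])
            hist = cv2.normalize(hist, hist).flatten()
            if prev_hist is None or \
               cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > cut_threshold:
                yield index, frame            # treat as key frame of a new shot
            prev_hist, index = hist, index + 1
        cap.release()
    ```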

  5. Port Video and Logo

    OpenAIRE

    Whitehead, Stuart; Rush, Joshua

    2013-01-01

    Logo PDF files should be accessible by any PDF reader such as Adobe Reader. SVG files of the logo are vector graphics accessible by programs such as Inkscape or Adobe Illustrator. PNG files are image files of the logo that should be able to be opened by any operating system's default image viewer. The final report is submitted in both .doc (Microsoft Word) and .pdf formats. The video is submitted in .avi format and can be viewed with Windows Media Player or VLC. Audio .wav files are also ...

  6. Video Playback Modifications for a DSpace Repository

    Directory of Open Access Journals (Sweden)

    Keith Gilbertson

    2016-01-01

    Full Text Available This paper focuses on modifications to an institutional repository system using the open source DSpace software to support playback of digital videos embedded within item pages. The changes were made in response to the formation and quick startup of an event capture group within the library that was charged with creating and editing video recordings of library events and speakers. This paper specifically discusses the selection of video formats, changes to the visual theme of the repository to allow embedded playback and captioning support, and modifications and bug fixes to the file downloading subsystem to enable skip-ahead playback of videos via byte-range requests. This paper also describes workflows for transcoding videos in the required formats, creating captions, and depositing videos into the repository.
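
    Skip-ahead playback of the embedded videos relies on standard HTTP range requests. The snippet below (the URL is a placeholder, not an actual repository path) shows the behaviour the modified download subsystem has to support: a request for a byte slice should come back as 206 Partial Content with a matching Content-Range header:

    ```python
    # Client-side check of byte-range support needed for seeking in the player.
    import requests

    url = "https://repository.example.edu/bitstream/handle/1234/video.mp4"  # hypothetical
    resp = requests.get(url, headers={"Range": "bytes=1000000-1999999"}, stream=True)

    assert resp.status_code == 206, "server must honour Range requests for skip-ahead playback"
    print(resp.headers.get("Content-Range"))   # e.g. bytes 1000000-1999999/73400320
    chunk = resp.content                       # only the requested slice is downloaded
    ```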

  7. Implementation of video surveillance in crime control

    Directory of Open Access Journals (Sweden)

    Kovačević-Lepojević Marina

    2012-01-01

    Full Text Available Modern trends in crime control include a variety of technological innovations, including video surveillance systems. The aim of this paper is to review the implementation of video surveillance in contemporary context, considering fundamental theoretical aspects, the legislation and the effectiveness in controlling crime. While considering the theoretical source of ideas on the implementation of video surveillance, priority was given to the concept of situational prevention that focuses on the contextual factors of crime. Capacities for the implementation of video surveillance in Serbia are discussed based on the analysis of the relevant international and domestic legislation, the shortcomings in regulation of this area and possible solutions. Special attention was paid to the effectiveness of video surveillance in public places, in schools and prisons. Starting from the results of studies of video surveillance effectiveness, strengths and weaknesses of these measures and recommendations for improving practice were discussed.

  8. Web Audio/Video Streaming Tool

    Science.gov (United States)

    Guruvadoo, Eranna K.

    2003-01-01

    In order to promote the NASA-wide educational outreach program to educate and inform the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets reside in a separate repository. The prototype tool is designed using ColdFusion 5.0.

  9. A new method for wireless video monitoring of bird nests

    Science.gov (United States)

    David I. King; Richard M. DeGraaf; Paul J. Champlin; Tracey B. Champlin

    2001-01-01

    Video monitoring of active bird nests is gaining popularity among researchers because it eliminates many of the biases associated with reliance on incidental observations of predation events or use of artificial nests, but the expense of video systems may be prohibitive. Also, the range and efficiency of current video monitoring systems may be limited by the need to...

  10. Roadside video data analysis deep learning

    CERN Document Server

    Verma, Brijesh; Stockwell, David

    2017-01-01

    This book highlights the methods and applications for roadside video data analysis, with a particular focus on the use of deep learning to solve roadside video data segmentation and classification problems. It describes system architectures and methodologies that are specifically built upon learning concepts for roadside video data processing, and offers a detailed analysis of the segmentation, feature extraction and classification processes. Lastly, it demonstrates the applications of roadside video data analysis including scene labelling, roadside vegetation classification and vegetation biomass estimation in fire risk assessment.

  11. Video Head Impulse Tests with a Remote Camera System: Normative Values of Semicircular Canal Vestibulo-Ocular Reflex Gain in Infants and Children

    Directory of Open Access Journals (Sweden)

    Sylvette R. Wiener-Vacher

    2017-09-01

    Full Text Available The video head impulse test (VHIT) is widely used to identify semicircular canal function impairments in adults. But classical VHIT testing systems attach goggles tightly to the head, which is not tolerated by infants. Remote video detection of head and eye movements resolves this issue and, here, we report VHIT protocols and normative values for children. Vestibulo-ocular reflex (VOR) gain was measured for all canals of 303 healthy subjects, including 274 children (aged 2.6 months–15 years) and 26 adults (aged 16–67). We used the Synapsys® (Marseilles, France) VHIT Ulmer system, whose remote camera measures head and eye movements. HITs were performed at high velocities. Testing typically lasts 5–10 min. In infants as young as 3 months old, VHIT yielded good inter-measure replicability. VOR gain increases rapidly until about the age of 6 years (with variation among canals), then progresses more slowly to reach adult values by the age of 16. Values are more variable among very young children and for the vertical canals, but showed no difference for right versus left head rotations. Normative values of VOR gain are presented to help detect vestibular impairment in patients. VHIT testing prior to cochlear implants could help prevent total vestibular loss and the resulting grave impairments of motor and cognitive development in patients with residual unilateral vestibular function.
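
    VOR gain is commonly reported as the ratio of the eye's compensatory velocity to head velocity during the impulse. The toy calculation below illustrates that general definition on synthetic traces; it is not the Synapsys system's algorithm, and the impulse window and area-ratio formula are assumptions:

    ```python
    # Illustrative VOR gain: compensatory eye velocity over head velocity
    # during the impulse window (synthetic data, illustrative definition).
    import numpy as np

    def vor_gain(head_velocity, eye_velocity, impulse_mask):
        head = head_velocity[impulse_mask]
        eye = -eye_velocity[impulse_mask]        # VOR drives the eye opposite to the head
        return eye.sum() / head.sum()            # ratio of areas under the velocity curves

    t = np.linspace(0, 0.15, 150)                           # 150 ms impulse, 1 kHz sampling
    head = 200 * np.exp(-((t - 0.075) / 0.03) ** 2)         # ~200 deg/s head impulse
    eye = -0.95 * head                                      # a healthy response, gain ~0.95
    print(round(vor_gain(head, eye, head > 20), 2))         # -> 0.95
    ```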

  12. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    Science.gov (United States)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and capabilities were explored in the real environment. Data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. These latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.
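
    The kind of on-camera ROI evaluation described (minimum/maximum/mean compared with preset levels, used to alter the readout or raise an output signal) can be sketched as follows; the sensor size, thresholds and trigger action are illustrative assumptions:

    ```python
    # Sketch of a simple ROI mean test that could trigger a readout change.
    import numpy as np

    def evaluate_roi(image, roi, low=None, high=None):
        """roi = (row0, row1, col0, col1); True if the ROI mean leaves [low, high]."""
        r0, r1, c0, c1 = roi
        m = image[r0:r1, c0:c1].mean()
        return (low is not None and m < low) or (high is not None and m > high)

    frame = np.random.randint(0, 4096, (1280, 1024), dtype=np.uint16)  # ~1.3 Mpixel sensor
    if evaluate_roi(frame, roi=(100, 164, 200, 264), high=3000):
        # e.g. switch to fast readout of this ROI or raise a trigger line
        print("event detected in ROI")
    ```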

  13. Significantly improved precision of cell migration analysis in time-lapse video microscopy through use of a fully automated tracking system

    Directory of Open Access Journals (Sweden)

    Seufferlein Thomas

    2010-04-01

    Full Text Available Abstract Background Cell motility is a critical parameter in many physiological as well as pathophysiological processes. In time-lapse video microscopy, manual cell tracking remains the most common method of analyzing the migratory behavior of cell populations. In addition to being labor-intensive, this method is susceptible to user-dependent errors regarding the selection of "representative" subsets of cells and the manual determination of precise cell positions. Results We have quantitatively analyzed these error sources, demonstrating that manual cell tracking of pancreatic cancer cells led to miscalculations of migration rates of up to 410%. In order to provide objective measurements of cell migration rates, we have employed multi-target tracking technologies commonly used in radar applications to develop a fully automated cell identification and tracking system suitable for high-throughput screening of video sequences of unstained living cells. Conclusion We demonstrate that our automatic multi-target tracking system identifies cell objects, follows individual cells and computes migration rates with high precision, clearly outperforming manual procedures.
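
    As a much simplified stand-in for the radar-style multi-target tracker described above (the distance gate and greedy matching are assumptions, not the published association logic), frame-to-frame linking of detected cell centroids can be sketched as nearest-neighbour matching:

    ```python
    # Greedy nearest-neighbour association of cell centroids between frames.
    import numpy as np

    def link_detections(prev_positions, new_positions, max_dist=20.0):
        """Return (prev_index, new_index) pairs whose distance is below max_dist pixels."""
        if len(prev_positions) == 0 or len(new_positions) == 0:
            return []
        new = np.asarray(new_positions, dtype=float)
        matches, used = [], set()
        for i, p in enumerate(prev_positions):
            d = np.linalg.norm(new - np.asarray(p, dtype=float), axis=1)
            for j in np.argsort(d):
                if d[j] <= max_dist and int(j) not in used:
                    matches.append((i, int(j)))
                    used.add(int(j))
                    break
        return matches

    # Per-cell migration rate then follows as summed displacement / elapsed time.
    ```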

  14. Privacy-protecting video surveillance

    Science.gov (United States)

    Wickramasuriya, Jehan; Alhazzazi, Mohanned; Datt, Mahesh; Mehrotra, Sharad; Venkatasubramanian, Nalini

    2005-02-01

    Forms of surveillance are very quickly becoming an integral part of crime control policy, crisis management, social control theory and community consciousness. In turn, it has been used as a simple and effective solution to many of these problems. However, privacy-related concerns have been expressed over the development and deployment of this technology. Used properly, video cameras help expose wrongdoing but typically come at the cost of privacy to those not involved in any maleficent activity. This work describes the design and implementation of a real-time, privacy-protecting video surveillance infrastructure that fuses additional sensor information (e.g. Radio-frequency Identification) with video streams and an access control framework in order to make decisions about how and when to display the individuals under surveillance. This video surveillance system is a particular instance of a more general paradigm of privacy-protecting data collection. In this paper we describe in detail the video processing techniques used in order to achieve real-time tracking of users in pervasive spaces while utilizing the additional sensor data provided by various instrumented sensors. In particular, we discuss background modeling techniques, object tracking and implementation techniques that pertain to the overall development of this system.
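
    The background-modelling and tracking stage of such a pipeline can be sketched with a generic OpenCV baseline (not the authors' implementation; the shadow threshold and minimum blob area are assumptions). The resulting boxes are the regions a privacy layer would blur or mask unless the access-control framework, e.g. a matching RFID badge, permits display:

    ```python
    # Background subtraction plus bounding boxes for moving objects.
    import cv2

    def track_foreground(video_path, min_area=500):
        cap = cv2.VideoCapture(video_path)
        bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = bg.apply(frame)
            mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow pixels
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            boxes = [cv2.boundingRect(c) for c in contours
                     if cv2.contourArea(c) >= min_area]
            # A privacy layer could blur or mask these boxes before display.
            yield frame, boxes
        cap.release()
    ```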

  15. Acoustic Neuroma Educational Video

    Medline Plus


  16. Video Games and Citizenship

    National Research Council Canada - National Science Library

    Bourgonjon, Jeroen; Soetaert, Ronald

    2013-01-01

    ... by exploring a particular aspect of digitization that affects young people, namely video games. They explore the new social spaces which emerge in video game culture and how these spaces relate to community building and citizenship...

  17. Videos, Podcasts and Livechats

    Medline Plus


  18. Videos, Podcasts and Livechats

    Medline Plus


  19. Digital Video in Research

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2012-01-01

    Is video becoming "the new black" in academia, and if so, what are the challenges? The integration of video in research methodology (for collection, analysis) is well-known, but the use of "academic video" for dissemination is relatively new (Eriksson and Sørensen). The focus of this paper is academic video, or short video essays produced for the explicit purpose of communicating research processes, topics, and research-based knowledge (see the journal of academic videos: www.audiovisualthinking.org). Video is increasingly used in popular showcases for video online, such as YouTube and Vimeo, as well... The implications of academic video extend to wider issues of how researchers harness opportunities to author different types of texts, raising questions of our media literacy pertaining to authoring multimodal texts (visual, verbal, audial, etc.) in research practice and the status of multimodal texts in academia.

  20. Acoustic Neuroma Educational Video

    Medline Plus


  1. Acoustic Neuroma Educational Video

    Medline Plus


  2. Videos, Podcasts and Livechats

    Medline Plus


  3. Videos, Podcasts and Livechats

    Science.gov (United States)


  4. Videos, Podcasts and Livechats

    Medline Plus


  5. Acoustic Neuroma Educational Video

    Medline Plus


  6. Video Screen Capture Basics

    Science.gov (United States)

    Dunbar, Laura

    2014-01-01

    This article is an introduction to video screen capture. Basic information of two software programs, QuickTime for Mac and BlueBerry Flashback Express for PC, are also discussed. Practical applications for video screen capture are given.

  7. Videos, Podcasts and Livechats

    Medline Plus


  8. Video Coding Technique using MPEG Compression Standards

    African Journals Online (AJOL)

    Akorede

    The two dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression, which is used ... Park, 1989). MPEG-1 systems and MPEG-2 video have been developed collaboratively with the International Telecommunications Union (ITU-T). The DVB selected MPEG-2 and added specifications ...

  9. Genre-Specific Semantic Video Indexing

    NARCIS (Netherlands)

    Wu, J.; Worring, M.

    2010-01-01

    In many applications, we find large video collections from different genres where the user is often only interested in one or two specific video genres. So, when users query the system with a specific semantic concept, they are likely aiming at a genre-specific instantiation of this concept.

  10. Video Conferencing for a Virtual Seminar Room

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Fosgerau, A.; Hansen, Peter Søren K.

    2002-01-01

    A PC-based video conferencing system for a virtual seminar room is presented. The platform is enhanced with DSPs for audio and video coding and processing. A microphone array is used to facilitate audio based speaker tracking, which is used for adaptive beam-forming and automatic camera-control...

  11. Content-based TV sports video retrieval using multimodal analysis

    Science.gov (United States)

    Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru

    2003-09-01

    In this paper, we propose content-based video retrieval, which is retrieval based on semantic content. Because video data is composed of multimodal information streams such as visual, auditory and textual streams, we describe a strategy of using multimodal analysis for automatically parsing sports video. The paper first defines the basic structure of a sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. The experimental results for TV sports video of football games indicate that multimodal analysis is effective for video retrieval by quickly browsing tree-like video clips or inputting keywords within a predefined domain.

  12. Managed Video as a Service for a Video Surveillance Model

    Directory of Open Access Journals (Sweden)

    Dan Benta

    2009-01-01

    Full Text Available The increasing demand for security systems has resulted in the rapid development of video surveillance, and video surveillance has turned into a major area of interest and management challenge. Personal experience in specialized companies helped me to adapt the demands of users of video security systems to system performance. It is known that people wish to obtain maximum profit with minimum effort, but security is not neglected. Surveillance systems and video monitoring should provide only necessary information and record only when there is activity. IP video surveillance services provide more safety in this sector, being able to record information on servers located in other locations than the IP cameras. Also, these systems allow real-time monitoring of goods or activities that take place in supervised perimeters. Live viewing and recording can be done via the Internet from any computer, using a web browser. Access to the surveillance system is granted after user and password authentication.

  13. Making good physics videos

    Science.gov (United States)

    Lincoln, James

    2017-05-01

    Online videos are an increasingly important way technology is contributing to the improvement of physics teaching. Students and teachers have begun to rely on online videos to provide them with content knowledge and instructional strategies. Online audiences are expecting greater production value, and departments are sometimes requesting educators to post video pre-labs or to flip our classrooms. In this article, I share my advice on creating engaging physics videos.

  14. Desktop video conferencing

    OpenAIRE

    Potter, Ray; Roberts, Deborah

    2007-01-01

    This guide aims to provide an introduction to Desktop Video Conferencing. You may be familiar with video conferencing, where participants typically book a designated conference room and communicate with another group in a similar room on another site via a large screen display. Desktop video conferencing (DVC), as the name suggests, allows users to video conference from the comfort of their own office, workplace or home via a desktop/laptop Personal Computer. DVC provides live audio and visua...

  15. 47 CFR 79.3 - Video description of video programming.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Video description of video programming. 79.3... CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING § 79.3 Video description of video programming. (a) Definitions. For purposes of this section the following definitions shall apply: (1...

  16. Fragile watermarking scheme for H.264 video authentication

    Science.gov (United States)

    Wang, Chuen-Ching; Hsu, Yu-Chang

    2010-02-01

    A novel H.264 advanced video coding fragile watermarking method is proposed that enables the authenticity and integrity of video streams to be verified. The effectiveness of the proposed scheme is demonstrated by way of experimental simulations. The results show that by embedding the watermark information in the last nonzero quantized coefficient in each discrete cosine transform block, the proposed scheme induces no more than a minor distortion of the video content. In addition, we show that the proposed scheme is able to detect unauthorized changes in the watermarked video content without the original video. The experimental results demonstrate the feasibility of the proposed video authentication system.
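
    The stated embedding rule, carrying one watermark bit in the last nonzero quantized coefficient of each transform block, can be illustrated with a toy parity scheme; this is only a sketch of the idea, not the authors' full H.264-aware implementation:

    ```python
    # Toy parity embedding in the last nonzero coefficient of a scan-ordered block.
    import numpy as np

    def embed_bit(zigzag_coeffs, bit):
        """zigzag_coeffs: 1-D array of quantized coefficients in scan order."""
        c = zigzag_coeffs.copy()
        nz = np.nonzero(c)[0]
        if nz.size == 0:
            return c                              # nothing to embed in an all-zero block
        i = nz[-1]                                # last nonzero coefficient
        if abs(int(c[i])) % 2 != bit:
            c[i] += 1 if c[i] > 0 else -1         # nudge magnitude to match the parity
        return c

    def extract_bit(zigzag_coeffs):
        nz = np.nonzero(zigzag_coeffs)[0]
        return None if nz.size == 0 else abs(int(zigzag_coeffs[nz[-1]])) % 2
    ```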

  17. Video Self-Modeling

    Science.gov (United States)

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  18. Tracing Sequential Video Production

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Khalid, Md. Saifuddin

    2015-01-01

    With an interest in learning that is set in collaborative situations, the data session presents excerpts from video data produced by two of fifteen students from a 5th-semester techno-anthropology course. Students used video cameras to capture the time they spent working with a scientist... video, nature of the interactional space, and material and spatial semiotics.

  19. Developing a Promotional Video

    Science.gov (United States)

    Epley, Hannah K.

    2014-01-01

    There is a need for Extension professionals to show clientele the benefits of their program. This article shares how promotional videos are one way of reaching audiences online. An example is given on how a promotional video has been used and developed using iMovie software. Tips are offered for how professionals can create a promotional video and…

  20. System identification to characterize human use of ethanol based on generative point-process models of video games with ethanol rewards.

    Science.gov (United States)

    Ozil, Ipek; Plawecki, Martin H; Doerschuk, Peter C; O'Connor, Sean J

    2011-01-01

    The influence of family history and genetics on the risk for the development of abuse or dependence is a major theme in alcoholism research. Recent research has used endophenotypes and behavioral paradigms to help detect further genetic contributions to this disease. Electronic tasks, essentially video games, which provide alcohol as a reward in controlled environments and with specified exposures, have been developed to explore some of the behavioral and subjective characteristics of individuals with or at risk for alcohol use disorders. A generative model (containing parameters with unknown values) of a simple game involving a progressive work paradigm is described, along with the associated point-process signal processing that allows system identification of the model. The system is demonstrated on human subject data. The same human subject completing the task under different circumstances, e.g., with larger and smaller alcohol reward values, is assigned different parameter values. Potential meanings of the different parameter values are described.

  1. Objective video presentation QoE predictor for smart adaptive video streaming

    Science.gov (United States)

    Wang, Zhou; Zeng, Kai; Rehman, Abdul; Yeganeh, Hojatollah; Wang, Shiqi

    2015-09-01

    How to deliver videos to consumers over the network for optimal quality-of-experience (QoE) has been the central goal of modern video delivery services. Surprisingly, despite the large volume of video being delivered every day through various systems attempting to improve visual QoE, the actual QoE of end consumers is not properly assessed, let alone used as the key factor in making critical decisions at the video hosting, network and receiving sites. Real-world video streaming systems typically use bitrate as the main video presentation quality indicator, but using the same bitrate to encode different video content can result in drastically different visual QoE, which is further affected by the display device and viewing condition of each individual consumer who receives the video. To correct this, we have to put QoE back in the driver's seat and redesign video delivery systems. To achieve this goal, a major challenge is to find an objective video presentation QoE predictor that is accurate, fast, easy to use, display-device adaptive, and provides meaningful QoE predictions across resolutions and content. We propose to use the newly developed SSIMplus index (https://ece.uwaterloo.ca/~z70wang/research/ssimplus/) for this role. We demonstrate that, based on SSIMplus, one can develop a smart adaptive video streaming strategy that leads to much smoother visual QoE than is possible with existing adaptive bitrate video streaming approaches. Furthermore, SSIMplus finds many more applications, in live and file-based quality monitoring, in benchmarking video encoders and transcoders, and in guiding network resource allocations.
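
    SSIMplus itself is a more elaborate, device-adaptive metric and is not reproduced here, but the general idea of steering bitrate decisions by a perceptual score rather than by bitrate alone can be sketched with plain SSIM from scikit-image (paths and the quality target are placeholders):

    ```python
    # Compare an encoded frame against its pristine source with SSIM, as a rough
    # stand-in for a perceptual QoE score such as SSIMplus.
    import cv2
    from skimage.metrics import structural_similarity as ssim

    def frame_quality(reference_png, encoded_png):
        ref = cv2.imread(reference_png, cv2.IMREAD_GRAYSCALE)
        enc = cv2.imread(encoded_png, cv2.IMREAD_GRAYSCALE)
        return ssim(ref, enc)

    # A QoE-driven streamer would pick, per segment, the lowest bitrate whose
    # predicted score stays above a target (e.g. 0.95) instead of a fixed bitrate ladder.
    ```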

  2. Comparison of Cormack Lehane Grading System and Intubation Difficulty Score in Patients Intubated by D-Blade Video and Direct Macintosh Laryngoscope: A Randomized Controlled Study

    Science.gov (United States)

    Pažur, Iva; Maldini, Branka; Hostić, Vedran; Ožegić, Ognjen; Obraz, Melanija

    2016-12-01

    D-blade is a relatively new device in the field of videolaryngoscopy, designed for airway management by enabling indirectoscopic glottic view. In our study, we investigated efficiency of D-blade in comparison with direct Macintosh laryngoscope (gold standard). Fifty-two adult patients with normal airway scheduled for elective surgery in general anesthesia were randomly assigned in D-blade video or direct Macintosh group. In the first video group, patients were laryngo-scoped and intubated by D-blade, and in the second group laryngoscopy and intubation were performed by Macintosh laryngoscope. Glottic view was evaluated according to Cormack Lehane grading system (C-L), while duration of intubation and easiness of intubation were evaluated according to the intubation difficulty score (IDS). Additionally, hemodynamic parameters were recorded before and after induction. There were no statistically significant between-group differences in time to intubation, easiness of endotracheal tube insertion, C-L, and IDS. In comparison with direct Macintosh laryngoscope, D-blade showed similar but still favorable characteristics. In our opinion, D-blade is a useful device in airway management and should be used in daily anesthesiologist work.

  3. A Depth Video-based Human Detection and Activity Recognition using Multi-features and Embedded Hidden Markov Models for Health Care Monitoring Systems

    Directory of Open Access Journals (Sweden)

    Ahmad Jalal

    2017-08-01

    Full Text Available The increasing number of elderly people living independently calls for special care in the form of healthcare monitoring systems. Recent advancements in depth video technologies have made human activity recognition (HAR) realizable for elderly healthcare applications. In this paper, a depth video-based novel method for HAR is presented, using robust multi-features and embedded Hidden Markov Models (HMMs) to recognize daily life activities of elderly people living alone in indoor environments such as smart homes. In the proposed HAR framework, depth maps are first analyzed by a temporal motion identification method to segment human silhouettes from the noisy background and to compute the depth silhouette area for each activity, in order to track human movements in a scene. Several representative features, including invariant, multi-view differentiation and spatiotemporal body-joint features, were fused together to capture gradient orientation change, intensity differentiation, temporal variation and local motion of specific body parts. These features are then processed by the dynamics of their respective class and learned, modeled, trained and recognized with a specific embedded HMM having active feature values. Furthermore, we construct a new online human activity dataset with a depth sensor to evaluate the proposed features. Our experiments on three depth datasets demonstrated that the proposed multi-features are efficient and robust compared with state-of-the-art features for human action and activity recognition.

  4. Economical Video Monitoring of Traffic

    Science.gov (United States)

    Houser, B. C.; Paine, G.; Rubenstein, L. D.; Parham, O. Bruce, Jr.; Graves, W.; Bradley, C.

    1986-01-01

    Data compression allows video signals to be transmitted economically on telephone circuits. Telephone lines transmit television signals to a remote traffic-control center; lines also carry command signals from the center to the TV camera and compressor at the highway site. A video system with television cameras positioned at critical points on highways allows traffic controllers to determine visually, almost immediately, the exact cause of a traffic-flow disruption, e.g., accidents, breakdowns, or spills. Controllers can then dispatch appropriate emergency services and alert motorists to minimize traffic backups.

  5. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... are available, what is happening in the immune system and what other conditions are associated with RA. Learning more about your condition will allow you to take a more active role in your care. The information in these videos should not take the place ...

  6. Scan converting video tape recorder

    Science.gov (United States)

    Holt, N. I. (Inventor)

    1971-01-01

    A video tape recorder is disclosed of sufficient bandwidth to record monochrome television signals or standard NTSC field sequential color at current European and American standards. The system includes scan conversion means for instantaneous playback at scanning standards different from those at which the recording is being made.

  7. Detectors for scanning video imagers

    Science.gov (United States)

    Webb, Robert H.; Hughes, George W.

    1993-11-01

    In scanning video imagers, a single detector sees each pixel for only 100 ns, so the bandwidth of the detector needs to be about 10 MHz. How this fact influences the choice of detectors for scanning systems is described here. Some important parametric quantities obtained from manufacturer specifications are related and it is shown how to compare detectors when specified quantities differ.
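
    The stated figure follows from simple arithmetic: a pixel dwell time of about 100 ns implies a signal bandwidth on the order of its reciprocal, roughly 10 MHz. A one-line check (illustrative only):

    ```python
    # Back-of-envelope: detector bandwidth ~ 1 / pixel dwell time.
    pixel_dwell_s = 100e-9                                        # 100 ns per pixel
    bandwidth_hz = 1.0 / pixel_dwell_s
    print(f"required bandwidth is about {bandwidth_hz / 1e6:.0f} MHz")   # ~10 MHz
    ```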

  8. Video streaming into the mainstream.

    Science.gov (United States)

    Garrison, W

    2001-12-01

    Changes in Internet technology are making possible the delivery of a richer mixture of media through data streaming. High-quality, dynamic content, such as video and audio, can be incorporated into Websites simply, flexibly and interactively. Technologies such as G3 mobile communication, ADSL, cable and satellites enable new ways of delivering medical services, information and learning. Systems such as Quicktime, Windows Media and Real Video provide reliable data streams as video-on-demand and users can tailor the experience to their own interests. The Learning Development Centre at the University of Portsmouth have used streaming technologies together with e-learning tools such as dynamic HTML, Flash, 3D objects and online assessment successfully to deliver on-line course content in economics and earth science. The Lifesign project--to develop, catalogue and stream health sciences media for teaching--is described and future medical applications are discussed.

  9. VBR video traffic models

    CERN Document Server

    Tanwir, Savera

    2014-01-01

    There has been a phenomenal growth in video applications over the past few years. An accurate traffic model of Variable Bit Rate (VBR) video is necessary for performance evaluation of a network design and for generating synthetic traffic that can be used for benchmarking a network. A large number of models for VBR video traffic have been proposed in the literature for different types of video in the past 20 years. Here, the authors have classified and surveyed these models and have also evaluated the models for H.264 AVC and MVC encoded video and discussed their findings.

  10. Architectural Considerations for Highly Scalable Computing to Support On-demand Video Analytics

    Science.gov (United States)

    2017-04-19

    Keywords: on-demand video intelligence; intelligent video system; video analytics platform. Video analytics systems have been of tremendous interest in various fields due to their utility [1]. A survey of Intelligent Video Systems and Analytics [2] highlights the importance of such systems in a ... Running each analytic pass in a separate process is an option; as can be surmised from the values in Table I, this option has slight performance ...

  11. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edi

  12. Ecological Assessment of Autonomy in Instrumental Activities of Daily Living in Dementia Patients by the means of an Automatic Video Monitoring System

    Directory of Open Access Journals (Sweden)

    Alexandra eKönig

    2015-06-01

    Full Text Available Currently, the assessment of autonomy and functional ability involves clinical rating scales. However, scales are often limited in their ability to provide objective and sensitive information. In contrast, information and communication technologies may overcome these limitations by capturing more fully the functional, as well as cognitive, disturbances associated with Alzheimer's disease (AD). We investigated the quantitative assessment of the autonomy of dementia patients based not only on gait analysis but also on the participants' performance on Instrumental Activities of Daily Living (IADL), automatically recognized by a video event monitoring system (EMS). Three groups of participants (healthy controls, Mild Cognitive Impairment and AD patients) had to carry out a standardized scenario consisting of physical tasks (single and dual task) and several IADLs, such as preparing a pillbox or making a phone call, while being recorded. Afterwards, the video sensor data was processed by the event monitoring system, which automatically extracts kinematic parameters of the participants' gait and recognizes their carried-out activities. These parameters were then used for the assessment of the participants' performance levels, here referred to as autonomy. Autonomy assessment was approached as a classification task using artificial intelligence methods that take as input the parameters extracted by the event monitoring system, here referred to as behavioral data. Activities were recognized by the EMS with high precision; the most accurately recognized activities were 'prepare medication' with 93% and 'using phone' with 89% precision. The diagnostic group classifier obtained a precision of 73.46% when combining the analyses of physical tasks with IADLs. In a further analysis, an autonomy group classifier was created, which obtained a precision of 83.67% when combining physical tasks and IADLs. Results suggest that it is possible to quantitatively assess IADL

  13. Fair Play? Violence, Gender and Race in Video Games.

    Science.gov (United States)

    Glaubke, Christina R.; Miller, Patti; Parker, McCrae A.; Espejo, Eileen

    Based on the view that the level of market penetration of video games combined with the high levels of realism portrayed in these games make it important to investigate the messages video games send children, this report details a study of the 10 top-selling video games for each of 6 game systems available in the United States and for personal…

  14. The Optimiser: monitoring and improving switching delays in video conferencing

    NARCIS (Netherlands)

    S. Gunkel (Simon); A.J. Jansen (Jack); I. Kegel; D.C.A. Bulterman (Dick); P.S. Cesar Garcia (Pablo Santiago)

    2014-01-01

    With the growing popularity of video communication systems, more people are using group video chat, rather than only one-to-one video calls. In such multi-party sessions, remote participants compete for the available screen space and bandwidth. A common solution is showing the current

  15. Stream On: Video Servers in the Real World.

    Science.gov (United States)

    Tristram, Claire

    1995-01-01

    Despite plans for corporate training networks, digital ad-insertion systems, hotel video-on-demand, and interactive television, only small scale video networks presently work. Four case studies examine the design and implementation decisions for different markets: corporate; advertising; hotel; and commercial video via cable, satellite or…

  16. Video surveillance using JPEG 2000

    Science.gov (United States)

    Dufaux, Frederic; Ebrahimi, Touradj

    2004-11-01

    This paper describes a video surveillance system which is composed of three key components, smart cameras, a server, and clients, connected through IP-networks in wired or wireless configurations. The system has been designed so as to protect the privacy of people under surveillance. Smart cameras are based on JPEG 2000 compression where an analysis module allows for events detection and regions of interest identification. The resulting regions of interest can then be encoded with better quality and scrambled. Compressed video streams are scrambled and signed for the purpose of privacy and data integrity verification using JPSEC compliant methods. The same bitstream may also be protected for robustness to transmission errors based on JPWL compliant methods. The server receives, stores, manages and transmits the video sequences on wired and wireless channels to a variety of clients and users with different device capabilities, channel characteristics and preferences. Use of seamless scalable coding of video sequences prevents any need for transcoding operations at any point in the system.

  17. Ontology based approach for video transmission over the network

    OpenAIRE

    Rachit Mohan Garg; Yamini Sood; Neha Tyagi

    2011-01-01

    With the increase in bandwidth and transmission speed over the internet, the transmission of multimedia objects such as video, audio and images has become easier. In this paper we provide an approach that can be useful for the transmission of video objects over the internet without much fuss. The approach provides an ontology-based framework that is used to establish automatic deployment of a video transmission system. Further, the video is compressed using the structural flow mechanism tha...

  18. Manual versus Automated Rodent Behavioral Assessment: Comparing Efficacy and Ease of Bederson and Garcia Neurological Deficit Scores to an Open Field Video-Tracking System

    Directory of Open Access Journals (Sweden)

    Fiona A. Desland

    2014-01-01

    Full Text Available Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments that score animals based on parameters ranked on a narrow scale of severity. Automated open field analysis using a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open field analysis showed significant differences in several parameters. Furthermore, large-cohort analysis also demonstrated increased sensitivity with automated open field analysis versus the Bederson and Garcia scales. These early data indicate that use of automated open field analysis software may provide a more sensitive assessment when compared to the traditional Bederson and Garcia scales.

  19. Video Texture Synthesis Based on Flow-Like Stylization Painting

    Directory of Open Access Journals (Sweden)

    Qian Wenhua

    2014-01-01

    Full Text Available The paper presents an NP-video rendering system based on natural phenomena. It provides a simple non-photorealistic video synthesis system in which the user can obtain a flow-like stylization painting and an infinite video scene. Firstly, based on anisotropic Kuwahara filtering in conjunction with line integral convolution, the natural-phenomena video scene can be rendered as a flow-like stylization painting. Secondly, the methods of frame division and patch synthesis are used to synthesize infinitely playing video. Using selected examples of different natural video textures, our system can generate stylized, flow-like and infinite video scenes. The visual discontinuities between neighboring frames are decreased, and the features and details of frames are preserved. This rendering system is easy and simple to implement.

  20. Influence of complementing a robotic upper limb rehabilitation system with video games on the engagement of the participants: a study focusing on muscle activities.

    Science.gov (United States)

    Li, Chong; Rusák, Zoltán; Horváth, Imre; Ji, Linhong

    2014-12-01

    Efficacious stroke rehabilitation depends not only on patients' medical treatment but also on their motivation and engagement during rehabilitation exercises. Although traditional rehabilitation exercises are often mundane, technology-assisted upper-limb robotic training can provide engaging and task-oriented training in a natural environment. The factors that influence engagement, however, are not fully understood. This paper therefore studies the relationship between engagement and muscle activities, as well as the factors influencing engagement. To this end, an experiment was conducted using a robotic upper limb rehabilitation system with healthy individuals in three training exercises: (a) a traditional exercise, which is typically used for training the grasping function, (b) a tracking exercise, currently used in robot-assisted stroke patient rehabilitation for fine motor movement, and (c) a video game exercise, which is a proliferating approach in robot-assisted rehabilitation enabling a high level of active engagement of stroke patients. These exercises differ not only in the characteristics of the motion that they use but also in their method of triggering engagement. To measure the level of engagement, we used facial expressions, motion analysis of the arm movements, and electromyography. The results show that (a) the video game exercise could engage the participants for a longer period than the other two exercises, (b) the engagement level decreased when the participants became too familiar with the exercises, and (c) analysis of the normalized root mean square of the electromyographic data indicated that muscle activities were more intense when the participants were engaged. This study shows that several sub-factors of engagement, such as versatility of feedback, cognitive tasks, and competitiveness, may influence engagement more than others. To maintain a high level of engagement, the rehabilitation system needs to be adaptive, providing different exercises to
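    As a hedged illustration of the normalized root-mean-square measure mentioned above, the sketch computes windowed RMS of an EMG trace and normalizes it by a reference value (e.g. from a maximum voluntary contraction); the sampling rate, window length, reference value and synthetic signal are assumptions.

```python
import numpy as np

def normalized_rms(emg, fs=1000, win_s=0.25, ref_rms=1.0):
    """RMS envelope of `emg` over non-overlapping windows, divided by a reference RMS."""
    win = int(fs * win_s)
    n = len(emg) // win * win                 # trim to a whole number of windows
    segments = np.asarray(emg[:n]).reshape(-1, win)
    rms = np.sqrt(np.mean(segments ** 2, axis=1))
    return rms / ref_rms

# synthetic 10 s EMG-like signal whose amplitude slowly waxes and wanes
fs = 1000
t = np.arange(0, 10, 1 / fs)
emg = np.random.default_rng(1).normal(0.0, 0.2 + 0.1 * np.sin(2 * np.pi * 0.2 * t))
print(normalized_rms(emg, fs=fs, ref_rms=0.3).round(2))
```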

  1. A near infra-red video system as a protective diagnostic for electron cyclotron resonance heating operation in the Wendelstein 7-X stellarator.

    Science.gov (United States)

    Preynas, M; Laqua, H P; Marsen, S; Reintrog, A; Corre, Y; Moncada, V; Travere, J-M

    2015-11-01

    The Wendelstein 7-X stellarator is a large nuclear fusion device based at the Max-Planck-Institut für Plasmaphysik in Greifswald, Germany. The main plasma heating system for steady-state operation in W7-X is electron cyclotron resonance heating (ECRH). During operation, some of the plasma-facing components will be directly heated by the non-absorbed power of the 1 MW rf beams of the ECRH. In order to avoid damage to these graphite-tile components during the first operational phase, a near infra-red video system has been developed as a protective diagnostic for safe and secure ECRH operation. Both the mechanical design housing the camera and the optical system are very flexible and meet the requirements of steady-state operation. The full system, including data acquisition and control, has been successfully tested in the vacuum vessel, including on-line visualization and data storage for the four cameras equipping the ECRH equatorial launchers of W7-X.

  2. A near infra-red video system as a protective diagnostic for electron cyclotron resonance heating operation in the Wendelstein 7-X stellarator

    Energy Technology Data Exchange (ETDEWEB)

    Preynas, M.; Laqua, H. P.; Marsen, S.; Reintrog, A. [Max-Planck-Institut für Plasmaphysik (IPP), D-17491 Greifswald (Germany); Corre, Y.; Moncada, V.; Travere, J.-M. [IRFM, CEA-Cadarache, 13108 Saint Paul lez Durance Cedex (France)

    2015-11-15

    The Wendelstein 7-X stellarator is a large nuclear fusion device based at the Max-Planck-Institut für Plasmaphysik in Greifswald, Germany. The main plasma heating system for steady-state operation in W7-X is electron cyclotron resonance heating (ECRH). During operation, some of the plasma-facing components will be directly heated by the non-absorbed power of the 1 MW rf beams of the ECRH. In order to avoid damage to these graphite-tile components during the first operational phase, a near infra-red video system has been developed as a protective diagnostic for safe and secure ECRH operation. Both the mechanical design housing the camera and the optical system are very flexible and meet the requirements of steady-state operation. The full system, including data acquisition and control, has been successfully tested in the vacuum vessel, including on-line visualization and data storage for the four cameras equipping the ECRH equatorial launchers of W7-X.

  3. Characterizing popularity dynamics of online videos

    Science.gov (United States)

    Ren, Zhuo-Ming; Shi, Yu-Qiang; Liao, Hao

    2016-07-01

    Online popularity has a major impact on videos, music, news and other content in online systems. Characterizing online popularity dynamics is a natural way to explain the observed properties in terms of the popularity already acquired by each individual item. In this paper, we provide a quantitative, large-scale, temporal analysis of the popularity dynamics in two online video-providing websites, namely MovieLens and Netflix. The two collected data sets contain over 100 million records and span a decade. We show that the popularity dynamics of online videos evolve over time, and find that they can be characterized by burst behaviors, typically occurring in the early life span of a video, later giving way to the classic preferential popularity-increase mechanism.
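    A toy simulation of the classic preferential popularity-increase ("rich-get-richer") mechanism that the abstract contrasts with early bursts; the numbers of videos and view events are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
n_videos, n_events = 500, 20_000
views = np.ones(n_videos)                     # every video starts with one view

for _ in range(n_events):
    probs = views / views.sum()               # pick proportionally to current popularity
    views[rng.choice(n_videos, p=probs)] += 1

views.sort()
print("share of views held by the top 1% of videos:",
      round(views[-n_videos // 100:].sum() / views.sum(), 3))
```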

  4. Orbiter CCTV video signal noise analysis

    Science.gov (United States)

    Lawton, R. M.; Blanke, L. R.; Pannett, R. F.

    1977-01-01

    The amount of steady-state and transient noise which will couple to orbiter CCTV video signal wiring is predicted. The primary emphasis is on the interim system; however, some predictions are made concerning the operational system wiring in the cabin area. Noise sources considered are RF fields from on-board transmitters, precipitation static, induced lightning currents, and induced noise from adjacent wiring. The most significant source is noise coupled to video circuits from associated circuits in common connectors. Video signal crosstalk is the primary cause of steady-state interference, and mechanically switched control functions cause the largest induced transients.

  5. An overview of new video techniques

    CERN Document Server

    Parker, R

    1999-01-01

    Current video transmission and distribution systems at CERN use a variety of analogue techniques which are several decades old. It will soon be necessary to replace this obsolete equipment, and the opportunity therefore exists to rationalize the diverse systems now in place. New standards for digital transmission and distribution are now emerging. This paper gives an overview of these new standards and of the underlying technology common to many of them. The paper reviews Digital Video Broadcasting (DVB), the Motion Picture Experts Group specifications (MPEG1, MPEG2, MPEG4, and MPEG7), videoconferencing standards (H.261 etc.), and packet video systems, together with predictions of the penetration of these standards into the consumer market. The digital transport mechanisms now available (IP, SDH, ATM) are also reviewed, and the implication of widespread adoption of these systems on video transmission and distribution is analysed.

  6. Authentication for Propulsion Test Streaming Video Project

    Data.gov (United States)

    National Aeronautics and Space Administration — An application was developed that could enforce two-factor authentication for NASA access to the Propulsion Test Streaming Video System.  To gain access to the...

  7. Color spaces in digital video

    Energy Technology Data Exchange (ETDEWEB)

    Gaunt, R.

    1997-05-01

    Whether it's photography, computer graphics, publishing, or video, each medium has a defined color space, or gamut, which defines the extent to which a given set of RGB colors can be mixed. When converting from one medium to another, an image must go through some form of conversion which maps colors into the destination color space. The conversion process isn't always straightforward, easy, or reversible. In video, two common analog composite color spaces are Y'UV (used in PAL) and Y'IQ (used in NTSC). These two color spaces have been around since the beginning of color television, and are primarily used in video transmission. Another analog scheme used in broadcast studios is Y', R'-Y', B'-Y' (used in Betacam and MII), which is a component format. Y', R'-Y', B'-Y' maintains the color information of RGB but in less space. From this, the digital component video specification, ITU-R Rec. 601-4 (formerly CCIR Rec. 601), was derived. The color space for Rec. 601 is symbolized as Y'CbCr. Digital video formats such as DV, D1, Digital-S, etc., use Rec. 601 to define their color gamut. Digital composite video (for D2 tape) is digitized analog Y'UV and is seeing decreased use. Because so much information is contained in video, segments of any significant length usually require some form of data compression. All of the above-mentioned analog video formats are a means of reducing the bandwidth of RGB video. Video bulk storage devices, such as digital disk recorders, usually store frames in Y'CbCr format, even if no other compression method is used. Computer graphics and computer animations originate in RGB format because RGB must be used to calculate lighting and shadows. But storage of long animations in RGB format is usually cost prohibitive, and a 30 frame-per-second data rate of uncompressed RGB is beyond most computers. By taking advantage of certain aspects of the human visual system, true color 24-bit RGB video images can be compressed with minimal loss of visual information.
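    A small sketch of the standard 8-bit Rec. 601 R'G'B'-to-Y'CbCr conversion referred to above, assuming gamma-corrected R'G'B' inputs normalized to [0, 1] and studio-range output levels.

```python
import numpy as np

def rgb_to_ycbcr_601(rgb):
    """rgb: float array in [0, 1], shape (..., 3); returns 8-bit Y'CbCr levels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  16 +  65.481 * r + 128.553 * g +  24.966 * b   # luma, 16..235
    cb = 128 -  37.797 * r -  74.203 * g + 112.000 * b   # blue-difference chroma
    cr = 128 + 112.000 * r -  93.786 * g -  18.214 * b   # red-difference chroma
    return np.stack([y, cb, cr], axis=-1)

print(rgb_to_ycbcr_601(np.array([1.0, 1.0, 1.0])))   # white -> [235, 128, 128]
print(rgb_to_ycbcr_601(np.array([0.0, 0.0, 0.0])))   # black -> [ 16, 128, 128]
```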

  8. Understanding Video Games

    DEFF Research Database (Denmark)

    Heide Smith, Jonas; Tosca, Susana Pajares; Egenfeldt-Nielsen, Simon

    From Pong to PlayStation 3 and beyond, Understanding Video Games is the first general introduction to the exciting new field of video game studies. This textbook traces the history of video games, introduces the major theories used to analyze games such as ludology and narratology, reviews...... the economics of the game industry, examines the aesthetics of game design, surveys the broad range of game genres, explores player culture, and addresses the major debates surrounding the medium, from educational benefits to the effects of violence. Throughout the book, the authors ask readers to consider...... larger questions about the medium: * What defines a video game? * Who plays games? * Why do we play games? * How do games affect the player? Extensively illustrated, Understanding Video Games is an indispensable and comprehensive resource for those interested in the ways video games are reshaping...

  9. Collaborative Video Sketching

    DEFF Research Database (Denmark)

    Henningsen, Birgitte; Gundersen, Peter Bukovica; Hautopp, Heidi

    2017-01-01

    This paper introduces to what we define as a collaborative video sketching process. This process links various sketching techniques with digital storytelling approaches and creative reflection processes in video productions. Traditionally, sketching has been used by designers across various...... forms and through empirical examples, we present and discuss the video recording of sketching sessions, as well as development of video sketches by rethinking, redoing and editing the recorded sessions. The empirical data is based on workshop sessions with researchers and students from universities...... and university colleges and primary and secondary school teachers. As researchers, we have had different roles in these action research case studies where various video sketching techniques were applied.The analysis illustrates that video sketching can take many forms, and two common features are important...

  10. Reflections on academic video

    Directory of Open Access Journals (Sweden)

    Thommy Eriksson

    2012-11-01

    As academics we study, research and teach audiovisual media, yet rarely disseminate and mediate through it. Today, developments in production technologies have enabled academic researchers to create videos and mediate audiovisually. In academia it is taken for granted that everyone can write a text. Is it now time to assume that everyone can make a video essay? Using the online journal of academic videos Audiovisual Thinking and the videos published in it as a case study, this article seeks to reflect on the emergence and legacy of academic audiovisual dissemination. Anchoring academic video and audiovisual dissemination of knowledge in two critical traditions, documentary theory and semiotics, we will argue that academic video is in fact already present in a variety of academic disciplines, and that academic audiovisual essays are bringing trends and developments that have long been part of academic discourse to their logical conclusion.

  11. Refilming with depth-inferred videos.

    Science.gov (United States)

    Zhang, Guofeng; Dong, Zilong; Jia, Jiaya; Wan, Liang; Wong, Tien-Tsin; Bao, Hujun

    2009-01-01

    Compared to still image editing, content-based video editing faces the additional challenges of maintaining the spatiotemporal consistency with respect to geometry. This brings up difficulties of seamlessly modifying video content, for instance, inserting or removing an object. In this paper, we present a new video editing system for creating spatiotemporally consistent and visually appealing refilming effects. Unlike the typical filming practice, our system requires no labor-intensive construction of 3D models/surfaces mimicking the real scene. Instead, it is based on an unsupervised inference of view-dependent depth maps for all video frames. We provide interactive tools requiring only a small amount of user input to perform elementary video content editing, such as separating video layers, completing background scene, and extracting moving objects. These tools can be utilized to produce a variety of visual effects in our system, including but not limited to video composition, "predator" effect, bullet-time, depth-of-field, and fog synthesis. Some of the effects can be achieved in real time.

  12. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional-quality sound on a limited budget! Harness all-new, Hollywood-style audio techniques to bring your independent film and video productions to the next level. In Sound for Digital Video, Second Edition, industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chock-full of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  13. Green Power Partnership Videos

    Science.gov (United States)

    The Green Power Partnership develops videos on a regular basis that explore a variety of topics, including the Green Power Partnership, green power purchasing, and renewable energy certificates, among others.

  14. Evaluation of a virtual pet visit system with live video streaming of patient images over the Internet in a companion animal intensive care unit in the Netherlands.

    Science.gov (United States)

    Robben, Joris H; Melsen, Diede N; Almalik, Osama; Roomer, Wendy; Endenburg, Nienke

    2016-05-01

    To evaluate the impact of a virtual pet visit system ("TelePet" System, TPS) on owners and staff of a companion animal ICU. Longitudinal interventional study (2010-2013). Companion animal ICU at a university veterinary medical teaching hospital. Pet owners, ICU technicians. The introduction of the TPS, with live video streaming of patient images over the Internet, in a companion animal ICU. Pet owners experienced TPS as a valuable extra service. Most TPS users (72.4%) experienced less anxiety and felt less need (40.4% of TPS users) to visit their hospitalized pet in person. Most users (83.5%) shared TPS access with their family. The introduction of the TPS did not improve overall owner satisfaction, except for the score on "quality of medical treatment." Seven of 26 indicators of owner satisfaction were awarded higher scores by TPS users than by TPS nonusers in the survey after the introduction of the system. However, the lack of randomization of owners might have influenced findings. The enthusiasm of the ICU technicians for the system was tempered by the negative feedback from a small number of owners. Nevertheless they recognized the value of the system for owners. The system was user friendly and ICU staff and TPS users experienced few technical problems. As veterinary healthcare is moving toward a more client-centered approach, a virtual pet visit system, such as TPS, is a relatively simple application that may improve the well-being of most owners during the hospitalization of their pet. © Veterinary Emergency and Critical Care Society 2016.

  15. Automated Identification and Reconstruction of YouTube Video Access

    Directory of Open Access Journals (Sweden)

    Jonathan Patterson

    2012-06-01

    YouTube is one of the most popular video-sharing websites on the Internet, allowing users to upload, view and share videos with other users all over the world. YouTube contains many different types of videos, from homemade sketches to instructional and educational tutorials, and therefore attracts a wide variety of users with different interests. The majority of YouTube visits are perfectly innocent, but there may be circumstances where YouTube video access is related to a digital investigation, e.g. viewing instructional videos on how to perform potentially unlawful actions or how to make unlawful articles. When a user accesses a YouTube video through their browser, certain digital artefacts relating to that video access may be left on their system in a number of different locations. However, there has been very little research published in the area of YouTube video artefacts. The paper discusses the identification of some of the artefacts that are left by the Internet Explorer web browser on a Windows system after accessing a YouTube video. The information that can be recovered from these artefacts can include the video ID, the video name and possibly a cached copy of the video itself. In addition to identifying the artefacts that are left, the paper also investigates how these artefacts can be brought together and analysed to infer specifics about the user's interaction with the YouTube website, for example whether the video was searched for or visited as a result of a suggestion after viewing a previous video. The result of this research is a Python-based prototype that will analyse a mounted disk image, automatically extract the artefacts related to YouTube visits and produce a report summarising the YouTube video accesses on a system.
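    A minimal sketch of the kind of artefact carving the abstract describes: scanning a raw history or cache file for YouTube watch URLs and counting the 11-character video IDs. The file name is a placeholder, and the real prototype parses Internet Explorer artefacts from a mounted disk image rather than using a bare regular expression.

```python
import re
from collections import Counter

# matches "youtube.com/watch?...v=<11-char ID>" in raw binary data
WATCH_RE = re.compile(rb"youtube\.com/watch\?[^\s\"']{0,200}?v=([A-Za-z0-9_-]{11})")

def youtube_ids_in_file(path):
    """Return a Counter of YouTube video IDs found anywhere in a raw binary file."""
    with open(path, "rb") as f:
        data = f.read()
    return Counter(m.group(1).decode() for m in WATCH_RE.finditer(data))

hits = youtube_ids_in_file("index.dat")        # e.g. an IE history/cache artefact
for video_id, count in hits.most_common():
    print(f"{video_id}: {count} occurrence(s)")
```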

  16. Color Helmet Mounted Display System with Real Time Computer Generated and Video Imagery for In-Flight Simulation

    Science.gov (United States)

    Sawyer, Kevin; Jacobsen, Robert; Aiken, Edwin W. (Technical Monitor)

    1995-01-01

    NASA Ames Research Center and the US Army are developing the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) using a Sikorsky UH-60 helicopter for the purpose of flight systems research. A primary use of the RASCAL is in-flight simulation, for which the visual scene will use computer-generated imagery and synthetic vision. This research is made possible in part by a full-color, wide-field-of-view Helmet Mounted Display (HMD) system that provides high-performance color imagery suitable for daytime operations in a flight-rated package. This paper describes the design and performance characteristics of the HMD system. Emphasis is placed on the design specifications, testing, and integration into the aircraft of Kaiser Electronics' RASCAL HMD system, which was designed and built under contract for NASA. The optical performance and design of the helmet-mounted display unit will be discussed, as well as the unique capabilities provided by the system's Programmable Display Generator (PDG).

  17. Perceptual compressive sensing scalability in mobile video

    Science.gov (United States)

    Bivolarski, Lazar

    2011-09-01

    Scalability features embedded within video sequences allow for streaming over heterogeneous networks to a variety of end devices. Compressive sensing techniques that lower the complexity and increase the robustness of video scalability are reviewed. Human visual system models are often used to establish perceptual metrics that evaluate the quality of video. The combination of the perceptual and compressive sensing approaches is outlined based on recent investigations. The performance and the complexity of different scalability techniques are evaluated. The application of perceptual models to evaluating the quality of compressive sensing scalability is considered in the near perceptually lossless case, and its application to appropriate coding schemes is reviewed.

  18. A System for Class Reflection Using iPads for Real-Time Bookmarking of Feedbacks into Simultaneously Recorded Videos

    Science.gov (United States)

    Nakajima, Taira

    2017-01-01

    The author demonstrates a new system useful for class reflection, inspired by "lesson study," a process in which teachers jointly plan, observe, analyze, and refine actual classroom lessons called "research lessons". Our new system offers an environment in which one can use an iPad to bookmark observers' feedback into…

  19. Performance evaluation software moving object detection and tracking in videos

    CERN Document Server

    Karasulu, Bahadir

    2013-01-01

    Performance Evaluation Software: Moving Object Detection and Tracking in Videos introduces a software approach for the real-time evaluation and performance comparison of the methods specializing in moving object detection and/or tracking (D&T) in video processing. Digital video content analysis is an important item for multimedia content-based indexing (MCBI), content-based video retrieval (CBVR) and visual surveillance systems. There are some frequently-used generic algorithms for video object D&T in the literature, such as Background Subtraction (BS), Continuously Adaptive Mean-shift (CMS),
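    As an illustration of one of the generic algorithms named above, the sketch below runs OpenCV's MOG2 background subtraction and collects per-frame bounding boxes that could be fed to a tracker or an evaluation routine; the input file name and thresholds are assumptions, and this is not the evaluated software itself.

```python
import cv2

cap = cv2.VideoCapture("test_sequence.avi")    # hypothetical test sequence
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

frame_idx, detections = 0, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    # clean up the foreground mask before extracting blobs
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 300]
    detections.append((frame_idx, boxes))       # per-frame boxes for a tracker/evaluator
    frame_idx += 1
cap.release()
```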

  20. TNO at TRECVID 2008, Combining Audio and Video Fingerprinting for Robust Copy Detection

    NARCIS (Netherlands)

    Doets, P.J.; Eendebak, P.T.; Ranguelova, E.; Kraaij, W.

    2009-01-01

    TNO has evaluated a baseline audio and a video fingerprinting system based on robust hashing for the TRECVID 2008 copy detection task. We participated in the audio, the video and the combined audio-video copy detection task. The audio fingerprinting implementation clearly outperformed the video
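    The sketch below computes a simple per-frame difference hash, a member of the same robust/perceptual hashing family as the fingerprints evaluated in the paper (but not TNO's actual method); the clip name and hash size are illustrative.

```python
import cv2
import numpy as np

def dhash_frame(frame, hash_size=8):
    """Difference hash: sign pattern of horizontal gradients on a downscaled frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (hash_size + 1, hash_size), interpolation=cv2.INTER_AREA)
    diff = small[:, 1:] > small[:, :-1]
    return np.packbits(diff.flatten())

def hamming(h1, h2):
    """Small distance between two frame hashes suggests a (near-)copy."""
    return int(np.unpackbits(h1 ^ h2).sum())

cap = cv2.VideoCapture("query_clip.mp4")       # hypothetical query clip
fingerprint = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fingerprint.append(dhash_frame(frame))
cap.release()
```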

  1. Streaming Videos in Peer Assessment to Support Training Pre-Service Teachers

    Science.gov (United States)

    Wu, Cheng-Chih; Kao, Hue-Ching

    2008-01-01

    A web-based peer assessment system using video streaming technology was implemented to support the training of pre-service teachers. The peer assessment process was synchronized with viewing of peer teaching videos so that comments could be linked to the relevant position on the video. When one viewed a comment, the associated video segment could…

  2. Acoustic Neuroma Educational Video

    Medline Plus


  3. Reviews in instructional video

    NARCIS (Netherlands)

    van der Meij, Hans

    2017-01-01

    This study investigates the effectiveness of a video tutorial for software training whose construction was based on a combination of insights from multimedia learning and Demonstration-Based Training. In the videos, a model of task performance was enhanced with instructional features that were

  4. AudioMove Video

    DEFF Research Database (Denmark)

    2012-01-01

    Live drawing video experimenting with low tech techniques in the field of sketching and visual sense making. In collaboration with Rune Wehner and Teater Katapult.

  5. Making Good Physics Videos

    Science.gov (United States)

    Lincoln, James

    2017-01-01

    Online videos are an increasingly important way technology is contributing to the improvement of physics teaching. Students and teachers have begun to rely on online videos to provide them with content knowledge and instructional strategies. Online audiences are expecting greater production value, and departments are sometimes requesting educators…

  6. SECRETS OF SONG VIDEO

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2014-04-01

    The article focuses on the origins of the song video as a TV and Internet genre. In addition, it considers problems of screen-image creation, which depend on the musical form and the lyrics of a song, in connection with relevant principles of accent and phraseological video editing and filming techniques, as well as additional frame and sound elements.

  7. Acoustic Neuroma Educational Video

    Medline Plus


  8. Personal Digital Video Stories

    DEFF Research Database (Denmark)

    Ørngreen, Rikke; Henningsen, Birgitte Sølbeck; Louw, Arnt Vestergaard

    2016-01-01

    agenda focusing on video productions in combination with digital storytelling, followed by a presentation of the digital storytelling features. The paper concludes with a suggestion to initiate research in what is identified as Personal Digital Video (PDV) Stories within longitudinal settings, while...

  9. The Video Generation.

    Science.gov (United States)

    Provenzo, Eugene F., Jr.

    1992-01-01

    Video games are neither neutral nor harmless but represent very specific social and symbolic constructs. Research on the social content of today's video games reveals that sex bias and gender stereotyping are widely evident throughout the Nintendo games. Violence and aggression also pervade the great majority of the games. (MLF)

  10. Display device-adapted video quality-of-experience assessment

    Science.gov (United States)

    Rehman, Abdul; Zeng, Kai; Wang, Zhou

    2015-03-01

    Today's viewers consume video content from a variety of connected devices, including smart phones, tablets, notebooks, TVs, and PCs. This imposes significant challenges for managing video traffic efficiently to ensure an acceptable quality-of-experience (QoE) for the end users, as the perceptual quality of video content strongly depends on the properties of the display device and the viewing conditions. State-of-the-art full-reference objective video quality assessment algorithms do not take into account the combined impact of display device properties, viewing conditions, and video resolution while performing video quality assessment. We performed a subjective study in order to understand the impact of the aforementioned factors on perceptual video QoE. We also propose a full-reference video QoE measure, named SSIMplus, that provides real-time prediction of the perceptual quality of a video based on human visual system behaviors, video content characteristics (such as spatial and temporal complexity, and video resolution), display device properties (such as screen size, resolution, and brightness), and viewing conditions (such as viewing distance and angle). Experimental results have shown that the proposed algorithm outperforms state-of-the-art video quality measures in terms of accuracy and speed.
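    A rough sketch of frame-level full-reference scoring in the spirit of the approach discussed above, using plain SSIM from scikit-image; SSIMplus itself additionally models display properties and viewing conditions and is not reproduced here. The file names are placeholders, and the two videos are assumed to be frame-aligned and equally sized.

```python
import cv2
import numpy as np
from skimage.metrics import structural_similarity

def mean_ssim(reference_path, distorted_path):
    """Average per-frame SSIM between a reference video and a distorted version."""
    ref, dst = cv2.VideoCapture(reference_path), cv2.VideoCapture(distorted_path)
    scores = []
    while True:
        ok1, f1 = ref.read()
        ok2, f2 = dst.read()
        if not (ok1 and ok2):
            break
        g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY)
        scores.append(structural_similarity(g1, g2, data_range=255))
    ref.release(); dst.release()
    return float(np.mean(scores)) if scores else float("nan")

print(mean_ssim("reference.mp4", "compressed.mp4"))   # hypothetical file names
```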

  11. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    This series of five videos ... member of our patient care team.

  12. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    This series of five videos was designed to help you learn more about Rheumatoid Arthritis (RA). You will learn how the diagnosis of ...

  13. Rheumatoid Arthritis Educational Video Series

    Science.gov (United States)

    This series of five videos was designed to help you learn more about Rheumatoid Arthritis (RA). You will learn how the diagnosis of ...

  14. NEI You Tube Videos: Amblyopia

    Medline Plus


  15. Social video content delivery

    CERN Document Server

    Wang, Zhi; Zhu, Wenwu

    2016-01-01

    This brief presents new architecture and strategies for distribution of social video content. A primary framework for socially-aware video delivery and a thorough overview of the possible approaches is provided. The book identifies the unique characteristics of socially-aware video access and social content propagation, revealing the design and integration of individual modules that are aimed at enhancing user experience in the social network context. The change in video content generation, propagation, and consumption for online social networks, has significantly challenged the traditional video delivery paradigm. Given the massive amount of user-generated content shared in online social networks, users are now engaged as active participants in the social ecosystem rather than as passive receivers of media content. This revolution is being driven further by the deep penetration of 3G/4G wireless networks and smart mobile devices that are seamlessly integrated with online social networking and media-sharing s...

  16. Video-assisted thoracoscopic surgery under O-arm navigation system guidance for the treatment of thoracic disk herniations: surgical techniques and early clinical results.

    Science.gov (United States)

    Hur, Jung-Woo; Kim, Jin-Sung; Cho, Dong-Young; Shin, Jong-Mok; Lee, Jun-Ho; Lee, Sang-Ho

    2014-11-01

    This study describes the surgical technique and clinical results of video-assisted thoracoscopic surgery (VATS) assisted by an O-arm-based navigation system, used for the treatment of thoracic disk herniation (TDH). The trend toward the use of minimally invasive procedures with endoscopic visualization of the thoracic cavity in thoracic spine surgery has evolved. It is difficult to develop a new set of visuomotor skills unique to endoscopic procedures and understand the three-dimensional (3D) anatomy while performing a two-dimensional (2D) imaging procedure. Adding image guidance would have a positive impact on these procedures, making them safer and more precise. We report the results of 10 patients who underwent diskectomy for TDH using VATS assisted by an O-arm-based navigation system and describe the surgical technique. The average duration of the symptoms was 2.8 years; average operation time, 326.9 minutes; and average additional time required for the image guidance surgery using the O-arm-based navigation, ∼ 29.4 minutes. No complications occurred during the surgical procedure or the immediate postoperative period. The advantages of using navigational assistance during the surgical procedure include better visualization of the operative field, more accurate surgical planning, and optimization of the surgical approach involving the establishment of the correct drilling trajectory and safe decompression of the spinal cord, as well as the possibility of intraoperative control of bone resection. Georg Thieme Verlag KG Stuttgart · New York.

  17. Tv & video engineer's reference book

    CERN Document Server

    Jackson, K G

    1991-01-01

    TV & Video Engineer's Reference Book presents an extensive examination of the basic television standards and the broadcasting spectrum. It discusses the fundamental concepts in analogue and digital circuit theory. It addresses engineering mathematics, formulas, and calculations. Some of the topics covered in the book are conductors and insulators; passive components; alternating current circuits; broadcast transmission; radio frequency propagation; electron optics in the cathode ray tube; color encoding and decoding systems; television transmitters; and remote supervision of unatten

  18. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  19. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

    Big data makes cloud computing more and more popular in various fields. Video resources are very useful and important for education, security monitoring, and so on. However, their huge volumes, complex data types, inefficient processing performance, weak security, and long loading times pose challenges for video resource management. The Hadoop Distributed File System (HDFS) is an open-source framework which can provide cloud-based platforms and presents an opportunity for solving these problems. This paper presents a video resource management architecture based on HDFS to provide a uniform framework and a five-layer model for standardizing the current various algorithms and applications. The architecture, basic model, and key algorithms are designed for turning video resource management into a cloud computing environment. The design was tested by establishing a simulation system prototype.
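    One concrete, hedged way to realise the storage layer of such an HDFS-based architecture is simply to push files with the standard `hdfs dfs` command-line tool, as sketched below; the local and HDFS paths are hypothetical.

```python
import subprocess
from pathlib import Path

def put_videos(local_dir, hdfs_dir="/video_repository/raw"):
    """Copy every .mp4 file in local_dir into an HDFS directory via the hdfs CLI."""
    subprocess.run(["hdfs", "dfs", "-mkdir", "-p", hdfs_dir], check=True)
    for video in Path(local_dir).glob("*.mp4"):
        subprocess.run(["hdfs", "dfs", "-put", "-f", str(video), hdfs_dir], check=True)
        print(f"stored {video.name} in {hdfs_dir}")

put_videos("/data/incoming_videos")            # hypothetical local staging directory
```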

  20. Implementation of multistandard video signals integrator

    Science.gov (United States)

    Zabołotny, Wojciech M.; Pastuszak, Grzegorz; Sokół, Grzegorz; Borowik, Grzegorz; Gąska, Michał; Kasprowicz, Grzegorz H.; Poźniak, Krzysztof T.; Abramowski, Andrzej; Buchowicz, Andrzej; Trochimiuk, Maciej; Frasunek, Przemysław; Jurkiewicz, Rafał; Nalbach-Moszynska, Małgorzata; Wawrzusiak, Radosław; Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Paweł; Jewartowski, Błażej; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata

    2017-08-01

    The paper describes the prototype implementation of the Video Signals Integrator (VSI). The function of the system is to integrate video signals from many sources. The VSI is a complex hybrid system consisting of hardware, firmware and software components. Its creation requires the joint effort of experts from different areas. The VSI capture device is a portable hardware device responsible for capturing video signals from different sources and in various formats, and for transmitting them to the server. The NVR server aggregates video and control streams coming from different sources and multiplexes them into logical channels, with each channel representing a single source. From there each channel can be distributed further to the end clients (consoles) for live display via a number of RTSP servers. The end client can, at the same time, inject control messages into a given channel to control movement of a CCTV camera.
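    As a small sketch of the client side described above, an end-console could consume one logical channel from an RTSP server with OpenCV, as below; the channel URL is a placeholder, and the VSI's own console software is not reproduced here.

```python
import cv2

channel_url = "rtsp://nvr.example.local:554/channel/7"   # hypothetical channel endpoint
cap = cv2.VideoCapture(channel_url)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break                      # stream ended or network error
    cv2.imshow("VSI channel 7", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```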