WorldWideScience

Sample records for control video cameras

  1. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
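    As a concrete illustration of the control mapping described above, the sketch below converts between PTZ pointing angles and pixel coordinates in an equirectangular spherical panorama. It assumes idealized optics and fixed angle ranges; the function names and parameters are illustrative, not taken from the paper.

        def ptz_to_panorama(pan_deg, tilt_deg, pano_w, pano_h,
                            pan_range=(-180.0, 180.0), tilt_range=(-90.0, 90.0)):
            # Map a PTZ pointing direction to pixel coordinates in an
            # equirectangular spherical panorama (idealized optics assumed).
            u = (pan_deg - pan_range[0]) / (pan_range[1] - pan_range[0]) * pano_w
            v = (tilt_range[1] - tilt_deg) / (tilt_range[1] - tilt_range[0]) * pano_h
            return u, v

        def panorama_to_ptz(u, v, pano_w, pano_h,
                            pan_range=(-180.0, 180.0), tilt_range=(-90.0, 90.0)):
            # Inverse mapping: the pan/tilt command that centers the view on (u, v),
            # which is what geo-referenced exploitation of the cameras requires.
            pan = pan_range[0] + u / pano_w * (pan_range[1] - pan_range[0])
            tilt = tilt_range[1] - v / pano_h * (tilt_range[1] - tilt_range[0])
            return pan, tilt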

  2. Remote control video cameras on a suborbital rocket

    International Nuclear Information System (INIS)

    Wessling, Francis C.

    1997-01-01

    Three video cameras on a suborbital rocket were controlled in real time from the ground during a fifteen-minute flight from White Sands Missile Range in New Mexico. Telemetry communications with the rocket allowed control of the cameras. Separate microprocessors controlled the pan, tilt, zoom, focus, and iris of two of the camera lenses; the power and record functions of the three cameras; and the selection of the analog video signal sent to the ground. A microprocessor was also used to record data from three miniature accelerometers, temperature sensors, and a differential pressure sensor. In addition to the selected video signal sent to the ground and recorded there, the video signals from the three cameras were also recorded on board the rocket. These recorders were mounted inside the pressurized segment of the rocket payload. The lenses, lens control mechanisms, and the three small television cameras were located in a portion of the rocket payload that was exposed to the vacuum of space, as were the accelerometers

  3. Video Sharing System Based on Wi-Fi Camera

    OpenAIRE

    Qidi Lin; Hewei Yu; Jinbin Huang; Weile Liang

    2015-01-01

    This paper introduces a video sharing platform based on Wi-Fi, which consists of a camera, a mobile phone, and a PC server. The platform can receive the wireless signal from the camera and show the live video captured by the camera on the mobile phone. In addition, it is able to send commands to the camera and control the rotation of the camera's holder. The platform can be applied to interactive teaching, monitoring of dangerous areas, and so on. Testing results show that the platform can share ...

  4. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l2,1-norm minimization. The objective function is two-fold. The first part captures the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and in extracting a diverse set of representatives. The second part uses a capped l2,1-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, so that the embedding not only characterizes the structure but also indicates the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
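    The selection step can be sketched as a self-expressive model with a capped l2,1 row penalty, solved by simple iterative reweighting. This is a minimal illustration of the idea, not the authors' solver; the data layout (columns of X as data points), the cap theta, and the ranking by row energy are assumptions.

        import numpy as np

        def capped_l21_representatives(X, lam=1.0, theta=1.0, iters=30, eps=1e-8):
            # Toy self-expressive selection: minimize ||X - X Z||_F^2
            # + lam * sum_i min(||Z[i, :]||_2, theta) by iterative reweighting.
            # Columns of X are data points; rows of Z score each point as a
            # potential representative.
            n = X.shape[1]
            G = X.T @ X
            Z = np.linalg.solve(G + lam * np.eye(n), G)   # ridge warm start
            for _ in range(iters):
                r = np.linalg.norm(Z, axis=1) + eps
                # rows past the cap theta get zero weight, so outliers stop
                # influencing the selection (the "capped" part of the norm)
                w = np.where(r < theta, 1.0 / (2.0 * r), 0.0)
                Z = np.linalg.solve(G + lam * np.diag(w), G)
            return np.argsort(np.linalg.norm(Z, axis=1))[::-1]  # ranked representatives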

  5. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations, and to determine the equipment used and the benefits realized. Basic closed-circuit television (CCTV) camera systems are described and video camera operation principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposable cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use, mainly reduced radiation exposure and increased productivity, are discussed and quantified. 15 refs., 6 figs

  6. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    This paper outlines the R and D activities on high-speed video cameras that have been conducted at Kinki University for more than ten years and currently proceed as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been carried out, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and interviews, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Design of a prototype ISIS is currently under way and, hopefully, it will be fabricated in the near future. Epoch-making cameras developed by others in the history of high-speed video cameras are also briefly reviewed.

  7. Initial laboratory evaluation of color video cameras: Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  8. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved by the current demand for high-quality digital images; a digital still camera, for example, has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses the same CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera that captures high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
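    A toy version of the fusion idea, assuming the two streams are already calibrated and aligned: each high-rate low-resolution frame is upsampled, and the fine detail of the nearest-in-time high-resolution keyframe is added back. The paper's actual enhancement algorithm is not specified here.

        import numpy as np
        from scipy.ndimage import gaussian_filter, zoom

        def fuse_streams(hr_frames, hr_fps, lr_frames, lr_fps):
            # Pair each high-rate low-resolution frame with the nearest-in-time
            # high-resolution keyframe, then add back the keyframe's fine detail.
            fused = []
            for i, lr in enumerate(lr_frames):
                k = min(int(round(i / lr_fps * hr_fps)), len(hr_frames) - 1)
                hr = hr_frames[k].astype(float)
                scale = (hr.shape[0] / lr.shape[0], hr.shape[1] / lr.shape[1])
                up = zoom(lr.astype(float), scale, order=1)     # bilinear upsampling
                detail = hr - gaussian_filter(hr, sigma=3)      # high-frequency content
                fused.append(up + detail)
            return fused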

  9. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  10. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the distinctions between camera types have become somewhat blurred, with a large presence of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein means one in which the computer (and software) has low-level control of the CCD clocking. Such cameras can often satisfy some of the more demanding machine vision tasks, in some cases at a higher rate of measurements than video cameras. Several of these specific applications are described here, including some that use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations in the choice of camera type for any given application are effects such as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For a computer camera these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.
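    The mismatch argument can be made concrete with a small worked example; the pixel counts below are illustrative, not from the paper.

        ccd_pixels_per_line = 768    # photosites on one CCD line (illustrative)
        adc_samples_per_line = 640   # frame-grabber samples per line (illustrative)
        ratio = ccd_pixels_per_line / adc_samples_per_line   # 1.2 photosites per sample
        # With a non-integer ratio, each digitized sample straddles photosite
        # boundaries at a phase that drifts along the line, producing the pixel
        # jitter and aliasing effects mentioned above. A computer camera derives
        # the A/D clock from the CCD clock, so the ratio is exactly 1 and these
        # effects disappear.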

  11. Acceptance/operational test procedure 241-AN-107 Video Camera System

    International Nuclear Information System (INIS)

    Pedersen, L.T.

    1994-01-01

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights

  12. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual frames of video separated from particular cameras, but there are ways to overcome it, and applying immersive cameras in photogrammetry provides new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device is discussed. The number of panoramas is much higher than photogrammetric purposes require, as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to survey outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second application is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  13. Optimization of video capturing and tone mapping in video camera systems

    NARCIS (Netherlands)

    Cvetkovic, S.D.

    2011-01-01

    Image enhancement techniques are widely employed in many areas of professional and consumer imaging, machine vision and computational imaging. Image enhancement techniques used in surveillance video cameras are complex systems involving controllable lenses, sensors and advanced signal processing. In

  14. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to ov...

  15. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  16. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    Science.gov (United States)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.
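    Of the techniques listed, the automatic gain control is the easiest to sketch. The loop below nudges a gain so the running video level tracks a set point; the update rule and constants are illustrative, not the paper's circuit.

        def automatic_gain_control(samples, target=0.5, rate=0.05, gain=1.0):
            # Feedback AGC loop: nudge the gain so the running video level
            # tracks a set point (too dark -> raise gain, too bright -> lower it).
            out, level = [], target
            for s in samples:
                v = gain * s
                out.append(v)
                level = 0.99 * level + 0.01 * abs(v)    # running average of level
                gain *= 1.0 + rate * (target - level)   # proportional correction
            return out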

  17. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  18. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which can leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in its direction of motion. Fast cameras increase accuracy but limit the allowable speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously; this is applicable, for example, in wireless sensor networks for surveillance or navigation.

  19. CameraCast: flexible access to remote video sensors

    Science.gov (United States)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.
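    The logical device idea can be sketched as a single interface with local and remote backends, so application code is identical for both. The class and method names below are illustrative stand-ins, not CameraCast's real API.

        from abc import ABC, abstractmethod
        import socket

        class VideoDevice(ABC):
            # One logical device interface for local and remote sensors.
            @abstractmethod
            def read_frame(self) -> bytes: ...

        class LocalCamera(VideoDevice):
            def __init__(self, path="/dev/video0"):     # stand-in, not a real driver
                self.dev = open(path, "rb")
            def read_frame(self) -> bytes:
                return self.dev.read(640 * 480 * 2)     # one raw frame (size assumed)

        class RemoteCamera(VideoDevice):
            def __init__(self, host, port):
                self.sock = socket.create_connection((host, port))
            def read_frame(self) -> bytes:
                return self.sock.recv(640 * 480 * 2)

        def grab(dev: VideoDevice) -> bytes:
            return dev.read_frame()   # identical application code either way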

  20. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structures play a substantial role in present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are essential components of such systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and the cameras themselves. This algorithm will subsequently be implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that takes account of their mutual positioning and the compatibility of tasks. The project objective is to develop the principal elements of the algorithm for recognition of a moving object to be detected by several cameras. The images obtained by different cameras will be processed, and parameters of motion identified to develop a table of possible route options. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists in the assessment of the degree of complexity of a camera placement algorithm designed to identify cases of inaccurate algorithm implementation, as well as in the formulation of supplementary requirements and input data by means of intersecting sectors covered by neighbouring cameras. The project also contemplates the identification of potential problems in the course of development of a physical security and monitoring system at the stage of project design, development, and testing. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises with irregular dimensions.
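    One simple baseline for the placement problem this record describes is greedy set cover over candidate camera positions, sketched below; the visibility predicate and the overlap requirements of the actual algorithm are abstracted away.

        def greedy_camera_placement(targets, candidates, covers, k):
            # Pick k camera positions that greedily maximize covered targets.
            # covers(pos, t) -> bool encodes visibility (range, geometry, occlusion).
            chosen, uncovered = [], set(targets)
            for _ in range(k):
                best = max(candidates, key=lambda p: sum(covers(p, t) for t in uncovered))
                chosen.append(best)
                uncovered -= {t for t in uncovered if covers(best, t)}
                if not uncovered:
                    break
            return chosen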

  1. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
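    The synthetic-magnitude step amounts to integrating a reference star's spectrum against the camera's spectral response and comparing with a zero-magnitude reference spectrum. The sketch below shows that computation; variable names and the zero-point convention are assumptions, not the MEO's exact recipe.

        import numpy as np

        def synthetic_magnitude(wl, flux, response, wl0, flux0):
            # Integrate the star's spectrum against the sensor response and
            # compare with a zero-magnitude reference spectrum on the same grid.
            num = np.trapz(flux * response, wl)
            ref = np.trapz(np.interp(wl, wl0, flux0) * response, wl)
            return -2.5 * np.log10(num / ref)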

  2. Video segmentation and camera motion characterization using compressed data

    Science.gov (United States)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detect scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. To guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
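    The second task reduces to a small linear least-squares problem. The sketch below fits a minimal pan/tilt/zoom model to macroblock motion vectors with coordinates taken relative to the image center; the paper's actual motion model may differ.

        import numpy as np

        def fit_pan_tilt_zoom(points, vectors):
            # Least-squares fit of mv(x, y) ~ (tx + z*x, ty + z*y) to macroblock
            # motion vectors; (x, y) are taken relative to the image center.
            x, y = points[:, 0], points[:, 1]
            u, v = vectors[:, 0], vectors[:, 1]
            one, zero = np.ones_like(x)[:, None], np.zeros_like(x)[:, None]
            A = np.block([[one, zero, x[:, None]],
                          [zero, one, y[:, None]]])
            b = np.concatenate([u, v])
            (tx, ty, z), *_ = np.linalg.lstsq(A, b, rcond=None)
            return tx, ty, z   # z > 0 suggests zoom-in; tx, ty reflect pan/tilt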

  3. Development and setting of a time-lapse video camera system for the Antarctic lake observation

    Directory of Open Access Journals (Sweden)

    Sakae Kudoh

    2010-11-01

    A submersible video camera system, which aimed to record the growth of aquatic vegetation in Antarctic lakes for one year, was manufactured. The system consisted of a video camera, a programmable controller unit, a lens-cleaning wiper with a submersible motor, LED lights, and a lithium-ion battery unit. A change of video camera (to a High Vision System) and modification of the lens-cleaning wiper allowed higher sensitivity and clearer recorded images than the previous submersible video system, without increasing power consumption. The system was set on the lake floor of Lake Naga Ike (a tentative name) in Skarvsnes, Soya Coast, during the summer activity of the 51st Japanese Antarctic Research Expedition. Interval recording of underwater visual images for one year has been started by our diving operation.

  4. Video astronomy on the go using video cameras with small telescopes

    CERN Document Server

    Ashley, Joseph

    2017-01-01

    Author Joseph Ashley explains video astronomy's many benefits in this comprehensive reference guide for amateurs. Video astronomy offers a wonderful way to see objects in far greater detail than is possible through an eyepiece, and the ability to use the modern, entry-level video camera to image deep space objects is a wonderful development for urban astronomers in particular, as it helps sidestep the issue of light pollution. The author addresses both the positive attributes of these cameras for deep space imaging as well as the limitations, such as amp glow. The equipment needed for imaging as well as how it is configured is identified with hook-up diagrams and photographs. Imaging techniques are discussed together with image processing (stacking and image enhancement). Video astronomy has evolved to offer great results and great ease of use, and both novices and more experienced amateurs can use this book to find the set-up that works best for them. Flexible and portable, they open up a whole new way...

  5. Video Surveillance using a Multi-Camera Tracking and Fusion System

    OpenAIRE

    Zhang, Zhong; Scanlon, Andrew; Yin, Weihong; Yu, Li; Venetianer, Péter L.

    2008-01-01

    Usage of intelligent video surveillance (IVS) systems is spreading rapidly. These systems are being utilized in a wide range of applications. In most cases, even in multi-camera installations, the video is processed independently in each feed. This paper describes a system that fuses tracking information from multiple cameras, thus vastly expanding its capabilities. The fusion relies on all cameras being calibrated to a site map, while the individual sensors remain lar...

  6. Endoscopic Camera Control by Head Movements for Thoracic Surgery

    NARCIS (Netherlands)

    Reilink, Rob; de Bruin, Gart; Franken, M.C.J.; Mariani, Massimo A.; Misra, Sarthak; Stramigioli, Stefano

    2010-01-01

    In current video-assisted thoracic surgery, the endoscopic camera is operated by an assistant of the surgeon, which has several disadvantages. This paper describes a system which enables the surgeon to control the endoscopic camera without the help of an assistant. The system is controlled using

  7. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    Science.gov (United States)

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department, in an attempt to record the real point of view of the surgeon's magnified vision and so make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with a GoPro® 4 Session action cam (commercially available) and ten with our new prototype of a head-mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro® and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functions is about 5 min for the GoPro® and 7 to 8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon, and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  8. Video content analysis on body-worn cameras for retrospective investigation

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; ter Haar, Frank B.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Burghouts, Gertjan J.; Wijn, Remco; van den Broek, Sebastiaan P.; van Rest, Jeroen H. C.

    2015-10-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals' memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.

  9. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1995-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of applied tanks. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser

  10. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    Directory of Open Access Journals (Sweden)

    Semi Jeon

    2017-02-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems.
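    The path-smoothing step can be illustrated with a scalar version of the objective: stay close to the estimated (shaky) path while minimizing temporal total variation. The subgradient-descent solver below is a sketch, not the paper's l1 optimization.

        import numpy as np

        def smooth_camera_path(shaky, lam=10.0, iters=200, step=0.05):
            # Minimize ||p - shaky||^2 + lam * sum_t |p[t+1] - p[t]| by
            # subgradient descent; shaky is a 1-D array of per-frame positions.
            p = shaky.astype(float)
            for _ in range(iters):
                grad = 2.0 * (p - shaky)
                d = np.sign(np.diff(p))      # subgradient of the TV term
                grad[:-1] -= lam * d
                grad[1:] += lam * d
                p = p - step * grad
            return p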

  11. Interaction Control Protocols for Distributed Multi-user Multi-camera Environments

    Directory of Open Access Journals (Sweden)

    Gareth W Daniel

    2003-10-01

    Video-centred communication (e.g., video conferencing, multimedia online learning, traffic monitoring, and surveillance) is becoming a customary activity in our lives. The management of interactions in such an environment is a complicated HCI issue. In this paper, we present our study of a collection of interaction control protocols for distributed multi-user multi-camera environments. These protocols facilitate different approaches to managing a user's entitlement to control a particular camera. We describe a web-based system that allows multiple users to manipulate multiple cameras in varying remote locations. The system was developed using the Java framework, and all protocols discussed have been incorporated into the system. Experiments were designed and conducted to evaluate the effectiveness of these protocols and to enable the identification of various human factors in a distributed multi-user, multi-camera environment. This work provides insight into the complexity associated with interaction management in video-centred communication. It can also serve as a conceptual and experimental framework for further research in this area.
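    As an example of the kind of protocol studied, the sketch below arbitrates camera control first-come-first-served with a fixed lease; the class, the lease length, and the queueing policy are illustrative assumptions, since the paper compares several protocols.

        import collections, time

        class CameraControlArbiter:
            # First-come-first-served control with a lease: a user holds
            # exclusive control of the camera for at most `lease` seconds.
            def __init__(self, lease=30.0):
                self.lease = lease
                self.queue = collections.deque()
                self.holder, self.granted_at = None, 0.0

            def request(self, user):
                # Returns True once `user` currently holds the camera.
                if user != self.holder and user not in self.queue:
                    self.queue.append(user)
                expired = (self.holder is not None
                           and time.time() - self.granted_at > self.lease)
                if (self.holder is None or expired) and self.queue:
                    self.holder, self.granted_at = self.queue.popleft(), time.time()
                return self.holder == user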

  12. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
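    Two of the listed DSP stages are easy to sketch in a few lines: a gray-world white balance and a simple auto gain control. These are generic textbook versions, assumed here for illustration; the paper's low-complexity algorithms are not reproduced.

        import numpy as np

        def gray_world_white_balance(rgb):
            # Scale each channel so its mean moves to the global gray mean.
            means = rgb.reshape(-1, 3).mean(axis=0)
            gains = means.mean() / means
            return np.clip(rgb * gains, 0, 255).astype(np.uint8)

        def auto_gain(gray, target_mean=110.0):
            # Scale the frame toward a target brightness level.
            g = target_mean / max(float(gray.mean()), 1e-6)
            return np.clip(gray * g, 0, 255).astype(np.uint8)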

  13. A model for measurement of noise in CCD digital-video cameras

    International Nuclear Information System (INIS)

    Irie, K; Woodhead, I M; McKinnon, A E; Unsworth, K

    2008-01-01

    This study presents a comprehensive measurement of CCD digital-video camera noise. Knowledge of noise detail within images or video streams allows for the development of more sophisticated algorithms for separating true image content from the noise generated in an image sensor. The robustness and performance of an image-processing algorithm is fundamentally limited by sensor noise. The individual noise sources present in CCD sensors are well understood, but there has been little literature on the development of a complete noise model for CCD digital-video cameras, incorporating the effects of quantization and demosaicing
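    A minimal version of such a noise model combines signal-dependent shot noise with constant read and dark noise and ADC quantization noise, as sketched below; all numbers are assumed for illustration, and the paper's model additionally covers demosaicing.

        def ccd_pixel_variance(signal_e, read_noise_e=10.0, dark_e=2.0,
                               adc_bits=8, full_well_e=20000.0):
            # Variance (in electrons^2) of one pixel: Poisson shot noise grows
            # with the signal, read and dark noise are constant, and the ADC
            # contributes uniform quantization noise of step^2 / 12.
            step_e = full_well_e / (2 ** adc_bits)     # electrons per ADU
            return signal_e + read_noise_e ** 2 + dark_e + step_e ** 2 / 12.0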

  14. Real-time pedestrian detection with the videos of car camera

    Directory of Open Access Journals (Sweden)

    Yunling Zhang

    2015-12-01

    Pedestrians in the vehicle path are in danger of being hit, with severe injury to pedestrians and vehicle occupants as a possible result. Therefore, real-time pedestrian detection with the video of a vehicle-mounted camera is of great significance for vehicle-pedestrian collision warning and the traffic safety of self-driving cars. In this article, a real-time scheme is proposed based on integral channel features and a graphics processing unit (GPU). The proposed method does not need to resize the input image. Moreover, the computationally expensive convolution of the detectors with the input image is converted into the dot product of two larger matrices, which can be computed efficiently on a GPU. The experiments showed that the proposed method can detect pedestrians in the video of a car camera at more than 20 frames per second with acceptable error rates. Thus, it can be applied to real-time detection tasks with the videos of car cameras.
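    The conversion of the convolution into a large dot product is the classic im2col trick, sketched below in NumPy for clarity (the paper runs the resulting matrix product on a GPU); shapes and names are illustrative.

        import numpy as np

        def im2col_convolve(image, kernels):
            # Gather every kh-by-kw patch into a row of one big matrix, then
            # evaluate all detectors at all positions with a single dot product.
            kh, kw = kernels.shape[1], kernels.shape[2]
            oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
            cols = np.empty((oh * ow, kh * kw))
            for i in range(oh):
                for j in range(ow):
                    cols[i * ow + j] = image[i:i + kh, j:j + kw].ravel()
            K = kernels.reshape(kernels.shape[0], -1)   # one flattened kernel per row
            return (cols @ K.T).reshape(oh, ow, -1)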

  15. Localization of cask and plug remote handling system in ITER using multiple video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, João, E-mail: jftferreira@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2013-10-15

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide feedback for the control system and for human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a 1:25 scale mock-up of the divertor level of the Tokamak building.

  16. Localization of cask and plug remote handling system in ITER using multiple video cameras

    International Nuclear Information System (INIS)

    Ferreira, João; Vale, Alberto; Ribeiro, Isabel

    2013-01-01

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide feedback for the control system and for human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a 1:25 scale mock-up of the divertor level of the Tokamak building

  17. Advances in top-down and bottom-up approaches to video-based camera tracking

    OpenAIRE

    Marimón Sanjuán, David

    2007-01-01

    Video-based camera tracking consists in trailing the three dimensional pose followed by a mobile camera using video as sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three dimensional references are needed. Examples of such references are landmarks with known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by a camera with what is geometrically known from reality, it is possible to recover the po...

  18. Advances in top-down and bottom-up approaches to video-based camera tracking

    OpenAIRE

    Marimón Sanjuán, David; Ebrahimi, Touradj

    2008-01-01

    Video-based camera tracking consists in trailing the three dimensional pose followed by a mobile camera using video as sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three dimensional references are needed. Examples of such references are landmarks with known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by a camera with what is geometrically known from reality, it is possible to recover the po...

  19. Surgical video recording with a modified GoPro Hero 4 camera.

    Science.gov (United States)

    Lin, Lily Koo

    2016-01-01

    Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. The stock lens mount and lens were removed from a GoPro Hero 4 camera, and the camera was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination.

  20. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    Science.gov (United States)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, semantic information about lanes is very important. This paper proposes a method for the automatic detection of lanes and the extraction of semantic information from onboard camera videos. The proposed method first detects lane edges using the grayscale gradient direction and fits them with an improved probabilistic Hough transform; it then uses the vanishing-point principle to calculate the geometric position of each lane, and uses lane characteristics to extract lane semantic information through decision-tree classification. In the experiment, 216 road video images captured by a camera mounted onboard a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
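    A baseline version of the first two steps, using standard OpenCV building blocks rather than the paper's improved variants; the thresholds and the slope filter are ordinary starting values, not the authors' settings.

        import cv2
        import numpy as np

        def detect_lane_segments(frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)            # gradient-based edge map
            lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                                    minLineLength=30, maxLineGap=10)
            if lines is None:
                return []
            # keep segments whose slope is plausible for a lane seen from onboard
            return [(x1, y1, x2, y2) for x1, y1, x2, y2 in lines[:, 0]
                    if x1 != x2 and 0.3 < abs((y2 - y1) / (x2 - x1)) < 3.0]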

  1. In-camera video-stream processing for bandwidth reduction in web inspection

    Science.gov (United States)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks, where video-stream data is taken in by the camera and then sent to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth-reduction algorithms; the output of the camera contains only information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a XILINX FPGA. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and to allow off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes the prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.

  2. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    Science.gov (United States)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

    This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) capturing imagery of the human object's face for biometric purposes, (2) optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on expected capture conditions such as the camera-subject distance, the pan and tilt angles of capture, face visibility, and others. This objective function serves to effectively balance the number of captures per subject and the quality of the captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
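    A toy version of such an objective function, scoring a camera-subject pair by distance, facing angle, and how often the subject has already been captured; the field names and weights are assumptions, not the paper's exact terms.

        import math

        def capture_score(cam, subj, w_dist=1.0, w_angle=1.0, w_count=2.0):
            # Higher is better: close subject, roughly frontal face, few captures.
            dist = math.hypot(cam["x"] - subj["x"], cam["y"] - subj["y"])
            facing = math.cos(subj["heading"] - cam["pan"])   # radians assumed
            return (w_angle * max(facing, 0.0)
                    - w_dist * dist / cam["max_range"]
                    - w_count * subj["captures"])

        def assign_cameras(cameras, subjects):
            # Each PTZ camera greedily takes its best-scoring subject.
            return {c["id"]: max(subjects, key=lambda s: capture_score(c, s))["id"]
                    for c in cameras}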

  3. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras

    Science.gov (United States)

    2017-10-01

    ARL-TR-8185 (US Army Research Laboratory, October 2017): Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras, by Caitlin P Conn and Geoffrey H Goldman; reporting period June 2016 to October 2017.

  4. Surgical video recording with a modified GoPro Hero 4 camera

    Directory of Open Access Journals (Sweden)

    Lin LK

    2016-01-01

    Lily Koo Lin, Department of Ophthalmology and Vision Science, University of California, Davis Eye Center, Sacramento, CA, USA. Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method: The stock lens mount and lens were removed from a GoPro Hero 4 camera, and the camera was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results: Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion: The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. Keywords: teaching, oculoplastic, strabismus

  5. An evaluation of video cameras for collecting observational data on sanctuary-housed chimpanzees (Pan troglodytes).

    Science.gov (United States)

    Hansen, Bethany K; Fultz, Amy L; Hopper, Lydia M; Ross, Stephen R

    2018-05-01

    Video cameras are increasingly being used to monitor captive animals in zoo, laboratory, and agricultural settings. This technology may also be useful in sanctuaries with large and/or complex enclosures. However, the cost of camera equipment and a lack of formal evaluations regarding the use of cameras in sanctuary settings make it challenging for facilities to decide whether and how to implement this technology. To address this, we evaluated the feasibility of using a video camera system to monitor chimpanzees at Chimp Haven. We viewed a group of resident chimpanzees in a large forested enclosure and compared observations collected in person and with remote video cameras. We found that, via camera, the observer viewed fewer chimpanzees in some outdoor locations than in person (GLMM post hoc test: est. = 1.4503, SE = 0.1457, Z = 9.951, p < 0.001). Nonetheless, our findings suggest that video cameras can be used in sanctuaries to facilitate animal care and observational research. © 2018 Wiley Periodicals, Inc.

  6. Candid camera : video surveillance system can help protect assets

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2009-11-15

    By combining closed-circuit cameras with sophisticated video analytics to create video sensors for use in remote areas, Calgary-based IntelliView Technologies Inc.'s explosion-proof video surveillance system can help the oil and gas sector monitor its assets. This article discussed the benefits, features, and applications of IntelliView's technology. The benefits include a reduced need for on-site security and operating personnel and the patented analytics product known as the SmrtDVR, where the camera's images are stored. The technology can be used in temperatures as cold as minus 50 degrees Celsius and as high as 50 degrees Celsius. The product was commercialized in 2006, when it was used by Nexen Inc. It was concluded that false alarms set off by natural occurrences such as rain, snow, glare, and shadows were a major problem with analytics in the past, but that problem has been solved by IntelliView with its own re-programmed source code. 1 fig.

  7. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in one camera has a different shape in another camera, which is a critical issue for wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  8. Design and Optimization of the VideoWeb Wireless Camera Network

    Directory of Open Access Journals (Sweden)

    Nguyen HoangThanh

    2010-01-01

    Full Text Available Sensor networks have been a very active area of research in recent years. However, most of the sensors used in the development of these networks have been local and nonimaging sensors such as acoustic, seismic, vibration, temperature, and humidity sensors. The emerging development of video sensor networks poses its own set of unique challenges, including high-bandwidth and low-latency requirements for real-time processing and control. This paper presents a systematic approach by detailing the design, implementation, and evaluation of a large-scale wireless camera network, suitable for a variety of practical real-time applications. We take into consideration issues related to hardware, software, control, architecture, network connectivity, performance evaluation, and data-processing strategies for the network. We also perform multiobjective optimization on settings such as video resolution and compression quality to provide insight into the performance trade-offs when configuring such a network and present lessons learned in the building and daily usage of the network.

  9. Acceptance/operational test procedure 101-AW tank camera purge system and 101-AW video camera system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1994-01-01

    This procedure will document the satisfactory operation of the 101-AW Tank Camera Purge System (CPS) and the 101-AW Video Camera System. The safety interlock, which shuts down all the electronics inside the 101-AW vapor space during loss of purge pressure, will be in place and tested to ensure reliable performance. This procedure is separated into four sections. Section 6.1 is performed in the 306 building prior to delivery to the 200 East Tank Farms and involves leak-checking all fittings on the 101-AW Purge Panel with a Snoop solution and resolving any leakage. Section 7.1 verifies that PR-1, the regulator which maintains a positive pressure within the volume (cameras and pneumatic lines), is properly set. In addition, the green light (PRESSURIZED) (located on the Purge Control Panel) is verified to turn on above 10 in. w.g. and after the time delay relay (TDR) has timed out. Section 7.2 verifies that the purge cycle functions properly, the red light (PURGE ON) comes on, and that the correct flowrate is obtained to meet the requirements of the National Fire Protection Association. Section 7.3 verifies that the pan and tilt, camera, and associated controls and components operate correctly. This section also verifies that the safety interlock system operates correctly during loss of purge pressure. During the loss-of-purge operation, the illumination of the amber light (PURGE FAILED) will be verified.

  10. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    Non-destructive testing with cold, thermal, epithermal or fast neutrons is nowadays more and more useful because the worldwide level of industrial development requires considerably higher standards of quality for manufactured products and reliability of technological processes, especially where any deviation from standards could result in large-scale catastrophic consequences or human losses. Because they are easily obtained and provide very good discrimination between the materials they penetrate, thermal neutrons are the most widely used probe. The methods involved in this technique have advanced from neutron radiography based on converter screens and radiological films to neutron radioscopy based on video cameras, that is, from static images to dynamic images. Many neutron radioscopy systems have been used in the past with various levels of success. The quality of an image depends on the quality of the neutron beam and the type of the neutron imaging system. For real-time investigations, tube-type cameras, CCD cameras and, recently, CID cameras are used to capture the image from an appropriate scintillator via a mirror. The analog signal of the camera is then converted into a digital signal by the signal-processing technology included in the camera. The image acquisition card or frame grabber of a PC converts the digital signal into an image. The image is formatted and processed by image analysis software. The scanning position of the object is controlled by the computer, which commands the electrical motors that move the object table horizontally and vertically and rotate it. Based on this system, static image acquisitions, real-time non-destructive investigations of dynamic processes and, finally, tomographic investigations of small objects can be done in a short time. A system based on a CID camera is presented. Fundamental differences between CCD and CID cameras lie in their pixel readout structure and technique. CIDs

  11. video115_0403 -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  12. GoPro Hero Cameras for Creation of a Three-Dimensional, Educational, Neurointerventional Video.

    Science.gov (United States)

    Park, Min S; Brock, Andrea; Mortimer, Vance; Taussky, Philipp; Couldwell, William T; Quigley, Edward

    2017-10-01

    Neurointerventional education relies on an apprenticeship model, with the trainee observing and participating in procedures with the guidance of a mentor. While educational videos are becoming prevalent in surgical cases, there is a dearth of comparable educational material for trainees in neurointerventional programs. We sought to create a high-quality, three-dimensional video of a routine diagnostic cerebral angiogram for use as an educational tool. A diagnostic cerebral angiogram was recorded using two GoPro HERO 3+ cameras with the Dual HERO System to capture the proceduralist's hands during the case. This video was edited with recordings from the video monitors to create a real-time three-dimensional video of both the actions of the neurointerventionalist and the resulting wire/catheter movements. The final edited video, in either two or three dimensions, can serve as another instructional tool for the training of residents and/or fellows. Additional videos can be created in a similar fashion of more complicated neurointerventional cases. The GoPro HERO 3+ camera and Dual HERO System can be used to create educational videos of neurointerventional procedures.

  13. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    International Nuclear Information System (INIS)

    Werry, S.M.

    1995-01-01

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  14. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, the control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  15. A Reaction-Diffusion-Based Coding Rate Control Mechanism for Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Naoki Wakamiya

    2010-08-01

    Full Text Available A wireless camera sensor network is useful for surveillance and monitoring for its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., the reaction-diffusion model, inspired by the similarity of biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.

  16. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    Science.gov (United States)

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring for its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., the reaction-diffusion model, inspired by the similarity of biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.

  17. Non-mydriatic, wide field, fundus video camera

    Science.gov (United States)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide-field color fundus videos and images of the human eye at pupil sizes of 2 mm. This means that it can be used with a non-dilated pupil even with bright ambient light. We realized a mobile demonstrator to prove the method and could successfully acquire color fundus videos of subjects. We designed the demonstrator as a low-cost device consisting of mass-market components to show that there is no major additional technical outlay to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry in the optical design that is given in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2 mm from a circular field 20° in diameter to a square field 68° by 18° in size. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes at vessels in the region of the papilla and a change in the paleness of the papilla.

  18. Teacher training for using digital video camera in primary education

    Directory of Open Access Journals (Sweden)

    Pablo García Sempere

    2011-12-01

    Full Text Available This paper shows the partial results of a research study carried out in primary schools, which evaluates the ability of teachers in the use of the digital video camera. The study took place in the province of Granada, Spain. Our purpose was to know the level of knowledge, interest, difficulties and training needs so as to improve teaching practice. The work takes a descriptive and eclectic approach. Quantitative (questionnaire) and qualitative (focus group) techniques have been used in this research. The information obtained shows that most of the teachers lack knowledge in the use of the video camera and digital editing. On the other hand, the majority agrees to include initial and permanent training on this subject. Finally, the most important conclusions are presented.

  19. video114_0402c -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  20. video114_0402b -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  1. Engineering task plan for Tanks 241-AN-103, 104, 105 color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1994-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and installation of the video camera systems into the vapor space within tanks 241-AN-103, 104, and 105. The one-camera, remotely operated color video systems will be used to observe and record the activities within the vapor space. Activities may include, but are not limited to, core sampling, auger activities, crust layer examination, and monitoring of equipment installation/removal. The objective of this task is to provide a single camera system in each of the tanks for the Flammable Gas Tank Safety Program.

  2. Euratom experience with video surveillance - Single camera and other non-multiplexed

    International Nuclear Information System (INIS)

    Otto, P.; Cozier, T.; Jargeac, B.; Castets, J.P.; Wagner, H.G.; Chare, P.; Roewer, V.

    1991-01-01

    The Euratom Safeguards Directorate (ESD) has been using a number of single camera video systems (Ministar, MIVS, DCS) and non-multiplexed multi-camera systems (Digiquad) for routine safeguards surveillance applications during the last four years. This paper describes aspects of system design and considerations relevant for installation. It reports on system reliability and performance and presents suggestions on future improvements

  3. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    Science.gov (United States)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.
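
    To make the camera-selection problem concrete, the following minimal sketch scores each camera by how centrally it currently views the tracked object and hands the object off to the best one; the scoring criterion and all names are illustrative assumptions, not the algorithms compared in the chapter.

        def edge_score(x, y, width, height):
            """Score in [0, 1]: how far the object is from the nearest image border.
            Objects near an edge are about to leave the field of view."""
            margin = min(x, y, width - x, height - y)
            return max(0.0, 2.0 * margin / min(width, height))

        def select_camera(observations):
            """observations: dict camera_id -> (x, y, width, height), or None if unseen.
            Returns the camera that currently views the object most centrally."""
            visible = {cam: edge_score(*obs) for cam, obs in observations.items() if obs}
            return max(visible, key=visible.get) if visible else None

        # Hand-off example: the object sits near the right edge of camera A,
        # so camera B, which sees it centrally, takes over.
        print(select_camera({"A": (630, 240, 640, 480), "B": (320, 260, 640, 480)}))  # -> B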

  4. Advances in pediatric gastroenterology: introducing video camera capsule endoscopy.

    Science.gov (United States)

    Siaw, Emmanuel O

    2006-04-01

    The video camera capsule endoscope is a gastrointestinal endoscope approved by the U.S. Food and Drug Administration in 2001 for use in diagnosing gastrointestinal disorders in adults. In 2003, the agency approved the device for use in children ages 10 and older, and the endoscope is currently in use at Arkansas Children's Hospital. A capsule camera, lens, battery, transmitter and antenna together record images of the small intestine as the endoscope makes its way through the bowel. The instrument is used with minimal risk to the patient while offering a high degree of accuracy in diagnosing small intestine disorders.

  5. Identifying sports videos using replay, text, and camera motion features

    Science.gov (United States)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speed. Full decoding of selective frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
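
    As an illustration of the final step, a decision tree over such feature vectors can be built with scikit-learn; the feature layout (replay count, scene-text ratio, motion statistics) and the toy training data are assumptions for illustration, not the paper's data.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        # Assumed per-clip features: [replay_count, scene_text_ratio,
        # mean_camera_motion, motion_variance].
        X = np.array([
            [4, 0.08, 7.2, 3.1],   # sports
            [3, 0.05, 6.8, 2.7],   # sports
            [0, 0.30, 1.1, 0.4],   # news
            [0, 0.02, 2.5, 1.9],   # movie
        ])
        y = ["sports", "sports", "news", "movie"]

        clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        print(clf.predict([[5, 0.06, 7.0, 2.9]]))  # -> ['sports']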

  6. Declarative camera control for automatic cinematography

    Energy Technology Data Exchange (ETDEWEB)

    Christianson, D.B.; Anderson, S.E.; Li-wei He [Univ. of Washington, Seattle, WA (United States)] [and others]

    1996-12-31

    Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically-placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.

  7. A passive terahertz video camera based on lumped element kinetic inductance detectors

    Energy Technology Data Exchange (ETDEWEB)

    Rowe, Sam, E-mail: sam.rowe@astro.cf.ac.uk; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian [Astronomy Instrumentation Group, School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Wood, Ken [QMC Instruments Ltd., School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Grainger, William [Rutherford Appleton Laboratory, STFC, Swindon SN2 1SZ (United Kingdom); Mauskopf, Philip [Astronomy Instrumentation Group, School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); School of Earth Science and Space Exploration, Arizona State University, Tempe, Arizona 85281 (United States); Spencer, Locke [Department of Physics and Astronomy, University of Lethbridge, Lethbridge, Alberta T1K 3M4 (Canada)

    2016-03-15

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  8. A passive terahertz video camera based on lumped element kinetic inductance detectors

    International Nuclear Information System (INIS)

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian; Wood, Ken; Grainger, William; Mauskopf, Philip; Spencer, Locke

    2016-01-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  9. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.

  10. Real-time construction and visualisation of drift-free video mosaics from unconstrained camera motion

    Directory of Open Access Journals (Sweden)

    Mateusz Brzeszcz

    2015-08-01

    Full Text Available This work proposes a novel approach for real-time video mosaicking facilitating drift-free mosaic construction and visualisation, with integrated frame blending and redundancy management, that is shown to be flexible to a range of varying mosaic scenarios. The approach supports unconstrained camera motion with in-sequence loop closing, variation in camera focal distance (zoom) and recovery from video sequence breaks. Real-time performance, over extended duration sequences, is realised via novel aspects of frame management within the mosaic representation, thus avoiding the high data redundancy associated with temporally dense, spatially overlapping video frame inputs. This managed set of image frames is visualised in real time using a dynamic mosaic representation of overlapping textured graphics primitives in place of the traditional globally constructed, and hence frequently reconstructed, mosaic image. Within this formulation, subsequent optimisation occurring during online construction can thus efficiently adjust relative frame positions via simple primitive position transforms. Effective visualisation is similarly facilitated by online inter-frame blending to overcome the illumination and colour variance associated with modern camera hardware. The evaluation illustrates overall robustness in video mosaic construction under a diverse range of conditions including indoor and outdoor environments, varying illumination and presence of in-scene motion, on varying computational platforms.

  11. Camera Networks: The Acquisition and Analysis of Videos over Wide Areas

    CERN Document Server

    Roy-Chowdhury, Amit K

    2012-01-01

    As networks of video cameras are installed in many applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process. Some of them are listed below: - Traditional computer vision challenges in tracking and recognition, robustness to pose, illumination, occlusion, clutter, recognition of objects, and activities; - Aggregating local information for wide

  12. Virtual georeferencing: how an 11-eyed video camera helps the pipeline industry

    Energy Technology Data Exchange (ETDEWEB)

    Ball, C.

    2006-08-15

    Designed by Immersive Media Corporation (IMC), the new Telemmersion System is a lightweight camera system capable of generating synchronized high-resolution video streams that represent a full-motion spherical world. With 11 cameras configured in a sphere, the system is portable and can be easily mounted on ground and airborne vehicles for use in surveillance; integration; commanded control of interactive intelligent databases; scenario modelling; and specialized training. The system was recently used to georeference immersive video of existing and proposed pipeline routes. Footage from the system was used to supplement traditional data collection methods for environmental impact assessment; route selection and engineering; regulatory compliance; and public consultation. Traditionally, the medium used to visualize pipeline routes is a map overlaid with aerial photography. Pipeline routes are typically flown throughout the planning stages in order to give environmentalists, engineers and other personnel the opportunity to visually inspect the terrain, identify issues and make decisions. The technology has significantly reduced costs during the planning stages of pipeline routes, as the remote footage can be stored on DVD, allowing various stakeholders and contractors to view the terrain and zoom in on particular land features. During one recent 3-day trip, 500 km of proposed pipeline was recorded using the technology. Typically, for various regulatory and environmental requirements, half a dozen trips would have been required. 2 figs.

  13. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system
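
    A minimal sketch of the comparison step described above, assuming both ends derive the same pseudo-random sample positions from a shared seed: the recorder accepts a frame only if enough sampled gray-scale values agree within a tolerance. The sample count, tolerance, and agreement threshold are illustrative choices, not the system's actual parameters.

        import numpy as np

        def sample_points(shape, n, seed):
            """Camera and recorder derive identical sample positions from a shared seed."""
            rng = np.random.default_rng(seed)
            return rng.integers(0, shape[0], n), rng.integers(0, shape[1], n)

        def authenticate(frame_cam, frame_rec, seed, n=64, tol=8, min_agree=0.9):
            """Accept the received frame only if enough sampled gray values agree within tol."""
            rows, cols = sample_points(frame_cam.shape, n, seed)
            agree = np.abs(frame_cam[rows, cols].astype(int) -
                           frame_rec[rows, cols].astype(int)) <= tol
            return agree.mean() >= min_agree

        frame = np.random.default_rng(1).integers(0, 256, (480, 640), dtype=np.uint8)
        print(authenticate(frame, frame, seed=42))                 # True: image untampered
        print(authenticate(frame, np.zeros_like(frame), seed=42))  # False: substituted image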

  14. Video Comparator

    International Nuclear Information System (INIS)

    Rose, R.P.

    1978-01-01

    The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system is manufactured by ITP of Chatsworth, CA and is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckleler micrometer equipped with a digital readout, and a second feature aligned with the reference line and the distance moved obtained from the digital display

  15. Optimization of radiation sensors for a passive terahertz video camera for security applications

    NARCIS (Netherlands)

    Zieger, G.J.M.

    2014-01-01

    A passive terahertz video camera allows for fast security screenings from distances of several meters. It avoids irradiation or the impressions of nakedness, which oftentimes cause embarrassment and trepidation of the concerned persons. This work describes the optimization of highly sensitive

  16. Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.

    Science.gov (United States)

    Shieh, Wann-Yun; Huang, Ju-Chin

    2012-09-01

    For many elderly people, unpredictable falling incidents may occur at the corner of stairs or in a long corridor due to body frailty. If we delay rescuing a falling elder who is likely fainting, more serious consequent injury may occur. Traditional security or video surveillance systems need caregivers to monitor a centralized screen continuously, or need an elder to wear sensors to detect falling incidents, which explicitly wastes much human power or causes inconvenience for elders. In this paper, we propose an automatic falling-detection algorithm and implement this algorithm in a multi-camera video surveillance system. The algorithm uses each camera to fetch the images from the regions required to be monitored. It then uses a falling-pattern recognition algorithm to determine if a falling incident has occurred. If so, the system will send short messages to whoever needs to be notified. The algorithm has been implemented in a DSP-based hardware acceleration board for functionality proof. Simulation results show that the accuracy of falling detection can achieve at least 90% and the throughput of a four-camera surveillance system can be improved by about 2.1 times. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
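
    One common falling-pattern heuristic (shown here as an illustrative sketch, not necessarily the authors' algorithm) is to track the person's foreground bounding box and raise an alarm when its aspect ratio flips from tall to wide and stays that way for a sustained window of frames:

        from collections import deque

        class FallDetector:
            """Illustrative falling-pattern heuristic: a person's bounding box
            turns from tall (standing) to wide (lying) and stays that way."""
            def __init__(self, window=15, lying_ratio=1.3):
                self.history = deque(maxlen=window)   # recent width/height ratios
                self.lying_ratio = lying_ratio

            def update(self, bbox_width, bbox_height):
                self.history.append(bbox_width / float(bbox_height))
                full = len(self.history) == self.history.maxlen
                # Fall = every frame in the window shows a "lying" aspect ratio.
                return full and all(r > self.lying_ratio for r in self.history)

        detector = FallDetector()
        for w, h in [(40, 110)] * 20 + [(120, 35)] * 20:   # standing, then prone
            alarm = detector.update(w, h)
        print(alarm)  # True once the prone posture has persisted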

  17. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Directory of Open Access Journals (Sweden)

    Akshay Gopinathan Nair

    2015-01-01

    Full Text Available Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its light weight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  18. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    Science.gov (United States)

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its light weight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  19. Design of IP Camera Access Control Protocol by Utilizing Hierarchical Group Key

    Directory of Open Access Journals (Sweden)

    Jungho Kang

    2015-08-01

    Full Text Available Unlike CCTV, the security video surveillance devices we have generally known, IP cameras, which are connected to a network either with or without wires, provide monitoring services through a built-in web server. Because IP cameras can use a network such as the Internet, multiple IP cameras can be installed over long distances and each IP camera can utilize the function of a web server individually. Even though IP cameras have this kind of advantage, they have difficulties in access control management and weaknesses in user certification. In particular, because the market for IP cameras is still young, systems that are systematized from the perspective of security have not been built up yet. Additionally, they contain severe weaknesses in terms of access authority to the IP camera web server, certification of users, and certification of IP cameras which are newly installed within a network. This research grouped IP cameras hierarchically to manage them systematically, and provided access control and data confidentiality between groups by utilizing group keys. In addition, IP cameras and users are certified by using PKI-based certification, and weak points of security such as confidentiality and integrity are improved by encrypting passwords. Thus, this research presents specific protocols for the entire process and proved through experiments that this method can actually be applied.
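
    The hierarchical grouping can be sketched by deriving each subgroup's key from its parent's key and a group label, so holding a parent key permits deriving all descendant keys but no sibling keys. The HMAC-based derivation and the labels below are illustrative assumptions; the paper's full protocol also includes PKI-based certification, which is not shown here.

        import hashlib
        import hmac

        def derive_key(parent_key: bytes, group_label: str) -> bytes:
            """Child group key = HMAC(parent key, group label). Knowing a parent key
            lets you derive every descendant key; sibling groups stay confidential."""
            return hmac.new(parent_key, group_label.encode(), hashlib.sha256).digest()

        root = hashlib.sha256(b"master secret").digest()    # held by the administrator
        building_a = derive_key(root, "building-A")          # per-building group
        floor_2 = derive_key(building_a, "floor-2")          # per-floor subgroup
        camera_17 = derive_key(floor_2, "camera-17")         # individual IP camera

        # An operator issued building_a can reach floor_2 and camera_17,
        # but cannot derive any key under "building-B".
        print(camera_17.hex()[:16])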

  20. Live video monitoring robot controlled by web over internet

    Science.gov (United States)

    Lokanath, M.; Akhil Sai, Guruju

    2017-11-01

    The future is all about robots: robots can perform tasks where humans cannot. Robots have huge applications in military and industrial areas, for lifting heavy weights, for accurate placements, and for repeating the same task a number of times where humans are not efficient. Generally, a robot is a mix of electronic, electrical and mechanical engineering and can do tasks automatically on its own or under the supervision of humans. The camera is the eye of the robot; called robot vision, it helps in monitoring security systems and can also reach places where the human eye cannot. This paper presents the development of a live video streaming robot controlled from a website. We designed the web controls for the robot to move left, right, front and back while streaming video. As we move to the smart environment or IoT (Internet of Things) with smart devices, the system we developed here connects over the internet and can be operated with a smart mobile phone using a web browser. The Raspberry Pi model B chip acts as the heart of this robot system; the necessary motors and an R Pi 2 surveillance camera are connected to the Raspberry Pi.
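
    A minimal sketch of the browser-to-motor control path on the Raspberry Pi, assuming Flask and the RPi.GPIO library; the pin numbers, routes, and single-pin motor drive are illustrative assumptions (a real robot would drive the motors through an H-bridge), and the video streaming side is omitted:

        from flask import Flask
        import RPi.GPIO as GPIO

        LEFT_MOTOR, RIGHT_MOTOR = 17, 27          # assumed BCM pin numbers
        GPIO.setmode(GPIO.BCM)
        GPIO.setup([LEFT_MOTOR, RIGHT_MOTOR], GPIO.OUT)

        app = Flask(__name__)

        @app.route("/move/<direction>")
        def move(direction):
            # Drive both motors forward/back, or a single motor to turn.
            states = {"front": (1, 1), "back": (0, 0), "left": (0, 1), "right": (1, 0)}
            left, right = states.get(direction, (0, 0))
            GPIO.output(LEFT_MOTOR, left)
            GPIO.output(RIGHT_MOTOR, right)
            return "ok"

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8000)    # reachable from any browser on the network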

  1. Utilization of a video camera in the study of the goshawk (Accipiter gentilis) diet

    Directory of Open Access Journals (Sweden)

    Martin Tomešek

    2011-01-01

    Full Text Available In 2009, research was carried out into the food spectrum of the goshawk (Accipiter gentilis) by means of automatic digital video cameras with a recording device in the area of the Chřiby Upland. The monitoring took place at two localities in the vicinity of the village of Buchlovice, at the southeastern edge of the Chřiby Upland, in the period from the hatching of the chicks to their flying out from the nest. The unambiguous advantage of using camera systems in the study of the food spectrum is the possibility of exact determination of brought prey in the majority of cases. The most economical and effective technology available under the given conditions was used. The results of using automatic digital video cameras with a recording device consist of a number of valuable data, which clarify the food spectrum of the species. The main output of the whole project is the determination of the food spectrum of the goshawk (Accipiter gentilis) at two localities, which showed the following composition: 89% birds, 9.5% mammals and 1.5% other animals or unidentifiable components of food. Birds of the genus Turdus were the most frequent prey in both cases of monitoring. As for mammals, Sciurus vulgaris was the most frequent.

  2. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    Energy Technology Data Exchange (ETDEWEB)

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  3. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    International Nuclear Information System (INIS)

    Strehlow, J.P.

    1994-01-01

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1)

  4. Underwater video enhancement using multi-camera super-resolution

    Science.gov (United States)

    Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.

    2017-12-01

    Image spatial resolution is critical in several fields, such as medicine, communications, satellite imaging, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that permits enhancing the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
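
    For reference, PSNR compares an enhanced frame against a ground-truth frame through the mean squared error; a minimal numpy version for 8-bit images follows (the helper name and test data are illustrative):

        import numpy as np

        def psnr(reference: np.ndarray, test: np.ndarray, max_value=255.0) -> float:
            """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
            mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

        ref = np.random.default_rng(0).integers(0, 256, (480, 640), dtype=np.uint8)
        noisy = np.clip(ref + np.random.default_rng(1).normal(0, 5, ref.shape), 0, 255)
        print(f"{psnr(ref, noisy):.1f} dB")   # higher is better; ~34 dB for sigma-5 noise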

  5. Video control system for a drilling in furniture workpiece

    Science.gov (United States)

    Khmelev, V. L.; Satarov, R. N.; Zavyalova, K. V.

    2018-05-01

    During the last 5 years, Russian industry has been becoming robotic, and scientific groups have therefore received new tasks. One of the new tasks is machine vision systems, which should solve the problem of automatic quality control. Systems of this type cost several thousand dollars each, a price that is prohibitive for regional small businesses. In this article, we describe the principle and algorithm of a cheap video control system, which uses web cameras and a notebook or desktop computer as its computing unit.
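
    As a sketch of how such a system might verify a drilled hole with an ordinary web camera, OpenCV's Hough circle transform can locate the hole and check its centre against the expected position; the thresholds, camera index, and tolerance below are assumptions, not the authors' published parameters:

        import cv2
        import numpy as np

        def check_hole(frame, expected_xy, tol_px=10):
            """Detect the most prominent circle (the drilled hole) and verify
            that its centre lies within tol_px of the expected position."""
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            gray = cv2.medianBlur(gray, 5)
            circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                                       param1=100, param2=30, minRadius=5, maxRadius=60)
            if circles is None:
                return False                      # no hole detected at all
            x, y, _r = circles[0][0]              # strongest detection
            return np.hypot(x - expected_xy[0], y - expected_xy[1]) <= tol_px

        cap = cv2.VideoCapture(0)                 # the ordinary web camera
        ok, frame = cap.read()
        if ok:
            print("hole OK" if check_hole(frame, expected_xy=(320, 240)) else "defect")
        cap.release()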

  6. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.

    2017-10-01

    Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene, and for changes in the optical chain so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing meta data describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene such as lighting conditions or measures for scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large scale ad-hoc networks.
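
    The register in the first part can be sketched as a per-camera record of optical-chain and scene parameters together with a drift check that signals the administrator; the field names and thresholds below are illustrative assumptions:

        from dataclasses import dataclass

        @dataclass
        class CameraRegisterEntry:
            """One register record per camera: optical chain plus scene parameters."""
            camera_id: str
            focal_length_px: float        # intrinsic calibration
            tilt_deg: float               # extrinsic calibration
            mean_luminance: float         # lighting condition of the scene
            person_count: int             # a simple scene-complexity measure

        def relevant_change(old: CameraRegisterEntry, new: CameraRegisterEntry) -> bool:
            """Signal the VSS administrator when the optical chain or scene drifts."""
            return (abs(old.tilt_deg - new.tilt_deg) > 2.0 or
                    abs(old.mean_luminance - new.mean_luminance) > 30.0 or
                    abs(old.focal_length_px - new.focal_length_px) > 50.0)

        before = CameraRegisterEntry("ptz-01", 1100.0, 12.0, 95.0, 4)
        after = CameraRegisterEntry("ptz-01", 1100.0, 19.5, 90.0, 6)   # camera was nudged
        print(relevant_change(before, after))   # True -> re-run autocalibration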

  7. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor that degrades the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  8. The multi-camera optical surveillance system (MOS)

    International Nuclear Information System (INIS)

    Otto, P.; Wagner, H.; Richter, B.; Gaertner, K.J.; Laszlo, G.; Neumann, G.

    1991-01-01

    The transition from film camera to video surveillance systems, in particular the implementation of high capacity multi-camera video systems, results in a large increase in the amount of recorded scenes. Consequently, there is a substantial increase in the manpower requirements for review. Moreover, modern microprocessor controlled equipment facilitates the collection of additional data associated with each scene. Both the scene and the annotated information have to be evaluated by the inspector. The design of video surveillance systems for safeguards necessarily has to account for both appropriate recording and reviewing techniques. An aspect of principal importance is that the video information is stored on tape. Under the German Support Programme to the Agency a technical concept has been developed which aims at optimizing the capabilities of a multi-camera optical surveillance (MOS) system including the reviewing technique. This concept is presented in the following paper including a discussion of reviewing and reliability

  9. Direct measurement of erythrocyte deformability in diabetes mellitus with a transparent microchannel capillary model and high-speed video camera system.

    Science.gov (United States)

    Tsukada, K; Sekizuka, E; Oshio, C; Minamitani, H

    2001-05-01

    To measure erythrocyte deformability in vitro, we made transparent microchannels on a crystal substrate as a capillary model. We observed axisymmetrically deformed erythrocytes and defined a deformation index directly from individual flowing erythrocytes. By appropriate choice of channel width and erythrocyte velocity, we could observe erythrocytes deforming to a parachute-like shape similar to that occurring in capillaries. The flowing erythrocytes magnified 200-fold through microscopy were recorded with an image-intensified high-speed video camera system. The sensitivity of deformability measurement was confirmed by comparing the deformation index in healthy controls with erythrocytes whose membranes were hardened by glutaraldehyde. We confirmed that the crystal microchannel system is a valuable tool for erythrocyte deformability measurement. Microangiopathy is a characteristic complication of diabetes mellitus. A decrease in erythrocyte deformability may be part of the cause of this complication. In order to identify the difference in erythrocyte deformability between control and diabetic erythrocytes, we measured erythrocyte deformability using transparent crystal microchannels and a high-speed video camera system. The deformability of diabetic erythrocytes was indeed measurably lower than that of erythrocytes in healthy controls. This result suggests that impaired deformability in diabetic erythrocytes can cause altered viscosity and increase the shear stress on the microvessel wall. Copyright 2001 Academic Press.
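
    The study defines its deformation index directly from individual flowing erythrocytes; the exact definition is not reproduced in the abstract, but a widely used form is DI = (L - W)/(L + W) for deformed length L and width W, sketched below as an assumption:

        def deformation_index(length_um: float, width_um: float) -> float:
            """A commonly used deformation index, DI = (L - W) / (L + W):
            0 for an undeformed (circular) profile, approaching 1 as the cell
            elongates. (Illustrative; the study's own index may differ.)"""
            return (length_um - width_um) / (length_um + width_um)

        print(deformation_index(8.0, 8.0))    # 0.0   rigid / undeformed cell
        print(deformation_index(11.2, 4.1))   # ~0.46 parachute-like deformation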

  10. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    International Nuclear Information System (INIS)

    Anderson, Robert J.

    2014-01-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identifies individual moving targets from the background imagery, and then displays the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  11. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identifies individual moving targets from the background imagery, and then displays the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  12. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Robotic and Security Systems Dept.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
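
    The step of identifying individual moving targets from the background imagery, described in the three reports above, is commonly implemented with background subtraction; the following OpenCV sketch uses MOG2, one standard choice rather than necessarily the project's own method, and the input file name is a placeholder:

        import cv2

        # Separate moving targets from the static background of one camera feed.
        subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
        cap = cv2.VideoCapture("camera_feed.mp4")   # assumed sample file

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)
            mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadows
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            # Each sufficiently large contour is a candidate target for the
            # hand-off and 3D display stages.
            targets = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
        cap.release()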

  13. VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication

    Science.gov (United States)

    Denina, Giovanni; Bhanu, Bir; Nguyen, Hoang Thanh; Ding, Chong; Kamal, Ahmed; Ravishankar, Chinya; Roy-Chowdhury, Amit; Ivers, Allen; Varda, Brenda

    Human-activity recognition is one of the most challenging problems in computer vision. Researchers from around the world have tried to solve this problem and have come a long way in recognizing simple motions and atomic activities. As the computer vision community heads toward fully recognizing human activities, a challenging and labeled dataset is needed. To respond to that need, we collected a dataset of realistic scenarios in a multi-camera network environment (VideoWeb) involving multiple persons performing dozens of different repetitive and non-repetitive activities. This chapter describes the details of the dataset. We believe that this VideoWeb Activities dataset is unique and it is one of the most challenging datasets available today. The dataset is publicly available online at http://vwdata.ee.ucr.edu/ along with the data annotation.

  14. vid116_0501n -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  15. vid116_0501s -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  16. vid116_0501c -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  17. vid116_0501d -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  18. Hydrogen peroxide plasma sterilization of a waterproof, high-definition video camera case for intraoperative imaging in veterinary surgery.

    Science.gov (United States)

    Adin, Christopher A; Royal, Kenneth D; Moore, Brandon; Jacob, Megan

    2018-06-13

    To evaluate the safety and usability of a wearable, waterproof high-definition camera/case for acquisition of surgical images by sterile personnel. An in vitro study to test the efficacy of biodecontamination of camera cases. Usability for intraoperative image acquisition was assessed in clinical procedures. Two waterproof GoPro Hero4 Silver camera cases were inoculated by immersion in media containing Staphylococcus pseudintermedius or Escherichia coli at ≥5.50E+07 colony forming units/mL. Cases were biodecontaminated by manual washing and hydrogen peroxide plasma sterilization. Cultures were obtained by swab and by immersion in enrichment broth before and after each contamination/decontamination cycle (n = 4). The cameras were then applied by a surgeon in clinical procedures by using either a headband or handheld mode and were assessed for usability according to 5 user characteristics. Cultures of all poststerilization swabs were negative. One of 8 cultures was positive in enrichment broth, consistent with a low level of contamination in 1 sample. Usability of the camera was considered poor in headband mode, with limited battery life, inability to control camera functions, and lack of zoom function affecting image quality. Handheld operation of the camera by the primary surgeon improved usability, allowing close-up still and video intraoperative image acquisition. Vaporized hydrogen peroxide sterilization of this camera case was considered effective for biodecontamination. Handheld operation improved usability for intraoperative image acquisition. Vaporized hydrogen peroxide sterilization and thorough manual washing of a waterproof camera may provide cost-effective intraoperative image acquisition for documentation purposes. © 2018 The American College of Veterinary Surgeons.

  19. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes to the specific objects for static cameras and background. Relative motion should be considered for different camera conditions, determining different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue predicting model is presented. Visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be indicated according to the proposed algorithm. Compared with conventional algorithms, which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
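
    The fusion step named above, multiple linear regression of subjective fatigue scores onto per-shot factors, can be sketched in a few lines of Python; the factor names and the toy numbers below are illustrative, not the paper's data.

        import numpy as np

        # columns: spatial structure, motion scale, comfort-zone violation
        X = np.array([[0.2, 0.1, 0.0],
                      [0.6, 0.4, 0.3],
                      [0.9, 0.8, 0.7]])
        y = np.array([1.2, 2.8, 4.5])                # subjective fatigue scores

        A = np.hstack([X, np.ones((len(X), 1))])     # add intercept column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None) # least-squares fit

        def predict_fatigue(factors):
            """Predicted fatigue score for one shot's factor vector."""
            return np.append(factors, 1.0) @ coef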

  20. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    A color line scan camera family which is available with either 6000, 8000 or 10000 pixels/color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation is described in this paper. This line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Mpixels/s. Conversion from 12 to 8 bits, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode or a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
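
    The 12-to-8-bit conversion via a user-defined look-up table amounts to building a 4096-entry gamma table once and indexing it with each raw pixel; a minimal numpy sketch (the gamma value is a placeholder):

        import numpy as np

        def make_gamma_lut(gamma=2.2):
            codes = np.arange(4096) / 4095.0            # 12-bit input range
            return np.round(255.0 * codes ** (1.0 / gamma)).astype(np.uint8)

        lut = make_gamma_lut()
        # one 8000-pixel line of raw 12-bit samples (random stand-in data)
        raw = np.random.randint(0, 4096, size=(1, 8000), dtype=np.uint16)
        out8 = lut[raw]                                 # LUT applied per pixel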

  1. User interface using a 3D model for video surveillance

    Science.gov (United States)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    These days fewer people, who must carry out their tasks quickly and precisely, are required in industrial surveillance and monitoring applications such as plant control or building security. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for these applications and provides realtime recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to provide such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function, which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language employed for multi-purpose and intranet use of the 3D model.

  2. Image-scanning measurement using video dissection cameras

    International Nuclear Information System (INIS)

    Carson, J.S.

    1978-01-01

    A high-speed dimensional measuring system capable of scanning a thin-film network and determining whether there are conductor widths, resistor widths, or spaces not typical of the design for this product is described. The eye of the system is a conventional TV camera, although such devices as image dissector cameras or solid-state scanners may be used more often in the future. The analog signal from the TV camera is digitized for processing by the computer and is presented to the TV monitor to assist the operator in monitoring the system's operation. Movable stages are required when the field of view of the scanner is less than the size of the object. A minicomputer controls the movement of the stage and communicates with the digitizer to select picture points that are to be processed. Communications with the system are maintained through a teletype or CRT terminal.

  3. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edit it.

  4. A multiframe soft x-ray camera with fast video capture for the LSX field reversed configuration (FRC) experiment

    International Nuclear Information System (INIS)

    Crawford, E.A.

    1992-01-01

    Soft x-ray pinhole imaging has proven to be an exceptionally useful diagnostic for qualitative observation of impurity radiation from field reversed configuration plasmas. We used a four frame device, similar in design to those discussed in an earlier paper [E. A. Crawford, D. P. Taggart, and A. D. Bailey III, Rev. Sci. Instrum. 61, 2795 (1990)] as a routine diagnostic during the last six months of the Large s Experiment (LSX) program. Our camera is an improvement over earlier implementations in several significant aspects. It was designed and used from the onset of the LSX experiments with a video frame capture system so that an instant visual record of the shot was available to the machine operator as well as facilitating quantitative interpretation of intensity information recorded in the images. The camera was installed in the end region of the LSX on axis approximately 5.5 m from the plasma midplane. Experience with bolometers on LSX showed serious problems with ''particle dumps'' at the axial location at various times during the plasma discharge. Therefore, the initial implementation of the camera included an effective magnetic sweeper assembly. Overall performance of the camera, video capture system, and sweeper is discussed

  5. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user...... model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ...... camera control in games is discussed....

  6. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    Science.gov (United States)

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  7. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method is also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
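
    Block matching of the kind the paper uses can be sketched as an exhaustive search of one block within a small window of a reference frame, scored by the sum of absolute differences; block and search sizes below are illustrative, not the paper's parameters.

        import numpy as np

        def match_block(ref, cur, y, x, bs=8, search=4):
            """Return the (dy, dx) displacement of the bs-by-bs block at (y, x)
            in `cur` that best matches `ref` within +/- `search` pixels."""
            block = cur[y:y+bs, x:x+bs].astype(np.float64)
            best, best_dy, best_dx = np.inf, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + bs > ref.shape[0] or xx + bs > ref.shape[1]:
                        continue
                    cand = ref[yy:yy+bs, xx:xx+bs].astype(np.float64)
                    sad = np.abs(block - cand).sum()   # sum of absolute differences
                    if sad < best:
                        best, best_dy, best_dx = sad, dy, dx
            return best_dy, best_dx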

  8. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    Science.gov (United States)

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
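
    The wagon-wheel illusion the record describes follows directly from the sampling theorem: a periodic motion sampled below the Nyquist rate folds back into the band from -fs/2 to fs/2. A short worked example:

        def apparent_frequency(f_true, fs):
            """Aliased frequency seen by a camera sampling at fs frames/s."""
            f = f_true % fs                 # alias into [0, fs)
            return f - fs if f > fs / 2 else f

        fs = 30.0                           # camera frame rate, frames/s
        print(apparent_frequency(29.0, fs)) # -1.0 Hz: wheel appears to spin backwards
        print(apparent_frequency(14.0, fs)) # 14.0 Hz: below Nyquist, rendered correctly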

  9. Development of a camera casing suited for cryogenic and vacuum applications

    Science.gov (United States)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID temperature-controlled and vacuum-tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components are discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera can provide a live view inside cryogenic set-ups and allows video to be recorded.
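
    The PID temperature loop named above can be sketched as one update per sample driving a heater from the setpoint error; the gains, sample time, and read/write functions are placeholders, not the authors' tuning.

        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_err = None

            def update(self, setpoint, measured):
                err = setpoint - measured
                self.integral += err * self.dt
                deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
                self.prev_err = err
                return self.kp * err + self.ki * self.integral + self.kd * deriv

        pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=1.0)     # illustrative gains
        # heater_power = pid.update(setpoint_deg_c, read_case_temperature())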

  10. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    Science.gov (United States)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator, considering multiple application needs and the limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and their capabilities were explored in the real environment. The data prove that the camera can be used for taking long-exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas are done in parallel. The latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.
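
    The in-camera ROI evaluation described above (minimum/maximum, mean comparison to levels) reduces to a simple statistic over a sub-window compared against a threshold; a rough sketch, with the ROI coordinates and level as placeholders:

        import numpy as np

        def roi_event(frame, roi, level):
            """Evaluate one ROI of a frame; return (trigger, statistics)."""
            y0, y1, x0, x1 = roi
            patch = frame[y0:y1, x0:x1]
            stats = {"min": patch.min(), "max": patch.max(), "mean": patch.mean()}
            # a trigger like this could redirect readout or raise an output signal
            return stats["mean"] > level, stats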

  11. Algorithms for the automatic identification of MARFEs and UFOs in JET database of visible camera videos

    International Nuclear Information System (INIS)

    Murari, A.; Camplani, M.; Cannas, B.; Usai, P.; Mazon, D.; Delaunay, F.

    2010-01-01

    MARFE instabilities and UFOs leave clear signatures in JET fast visible camera videos. Given the potential harmful consequences of these events, particularly as triggers of disruptions, it would be important to have the means of detecting them automatically. In this paper, the results of various algorithms to identify automatically the MARFEs and UFOs in JET visible videos are reported. The objective is to retrieve the videos, which have captured these events, exploring the whole JET database of images, as a preliminary step to the development of real-time identifiers in the future. For the detection of MARFEs, a complete identifier has been finalized, using morphological operators and Hu moments. The final algorithm manages to identify the videos with MARFEs with a success rate exceeding 80%. Due to the lack of a complete statistics of examples, the UFO identifier is less developed, but a preliminary code can detect UFOs quite reliably. (authors)
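
    The shape step of the MARFE identifier (morphological operators plus Hu moments) can be sketched with OpenCV; the threshold and kernel size are placeholders, and matching against reference MARFE signatures is left out.

        import cv2
        import numpy as np

        def hu_signature(gray_frame):
            """Binarise a frame, clean it morphologically, and return the
            seven Hu moment invariants on a log scale."""
            _, binary = cv2.threshold(gray_frame, 60, 255, cv2.THRESH_BINARY)
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
            hu = cv2.HuMoments(cv2.moments(binary)).flatten()
            # log scaling makes the seven invariants comparable in magnitude
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)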

  12. Frequency identification of vibration signals using video camera image data.

    Science.gov (United States)

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-10-16

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.

  13. Frequency Identification of Vibration Signals Using Video Camera Image Data

    Directory of Open Access Journals (Sweden)

    Chia-Hung Wu

    2012-10-01

    Full Text Available This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
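
    The signal path described in the two records above, summing the gray levels in a small region around the marked line for every frame and then locating the dominant peak in the spectrum of that one-dimensional signal, can be sketched as follows (the region coordinates are placeholders):

        import numpy as np

        def dominant_frequency(frames, y0, y1, x0, x1, fps):
            """frames: (n, H, W) gray-level array; one region sum per frame."""
            signal = frames[:, y0:y1, x0:x1].sum(axis=(1, 2)).astype(np.float64)
            signal -= signal.mean()                 # remove the DC component
            spectrum = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
            return freqs[np.argmax(spectrum)]       # only valid below fps/2 (Nyquist)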

  14. Head-coupled remote stereoscopic camera system for telepresence applications

    Science.gov (United States)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  15. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid-state detector formed of high-purity germanium. The central arrangement of the camera carries out a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, a desirable control over pulse pile-up phenomena is achieved. Additionally, through the use of the time derivative of incoming pulse or signal energy information to initially enable the control system, a low-level information evaluation is provided, serving to enhance the signal processing efficiency of the camera.
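
    As a rough digital analogue of the trapezoidal filtering named above: convolving two rectangular windows yields a trapezoidal weighting whose flat top plays the role of the gated-integration interval. This is a generic FIR stand-in, not the patent's gated-integrator circuit, and the window lengths are placeholders.

        import numpy as np

        def trapezoidal_filter(signal, rise=8, flat=4):
            """Apply trapezoidal pulse shaping: rise samples of ramp,
            roughly `flat` samples of flat top."""
            kernel = np.convolve(np.ones(rise), np.ones(rise + flat))
            kernel /= kernel.sum()                  # unity DC gain
            return np.convolve(signal, kernel, mode="same")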

  16. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    Science.gov (United States)

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  17. A Semantic Autonomous Video Surveillance System for Dense Camera Networks in Smart Cities

    Directory of Open Access Journals (Sweden)

    Antonio Sánchez-Esguevillas

    2012-08-01

    Full Text Available This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  18. Light-reflection random-target method for measurement of the modulation transfer function of a digital video-camera

    Science.gov (United States)

    Pospisil, J.; Jakubik, P.; Machala, L.

    2005-11-01

    This article reports the suggestion, realization, and verification of a newly developed means of measuring the noiseless and locally shift-invariant modulation transfer function (MTF) of a digital video camera in the usual incoherent visible region of optical intensity, especially of its combined imaging, detection, sampling, and digitizing steps, which are influenced by the additive and spatially discrete photodetector, aliasing, and quantization noises. The method operates in the camera's automatic still-picture regime and uses a static two-dimensional spatially continuous light-reflection random target with white-noise properties. The theoretical basis of the random-target method is also presented, exploiting a proposed simulation model of the linear optical intensity response and the possibility of expressing the resultant MTF as a normalized and smoothed ratio of the ascertainable output and input power spectral densities. The random-target and resultant image data were obtained and processed on a processing and evaluation PC with computation programs developed on the basis of MATLAB 6.5. The presented examples and other results of the performed measurements demonstrate the sufficient repeatability and acceptability of the described method for comparative evaluations of the performance of digital video cameras under various conditions.
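
    The core identity of the method, that with a white-noise target the MTF follows from the ratio of output to input power spectral densities, MTF(f) = sqrt(PSD_out(f) / PSD_in(f)) normalised to one at low frequency, can be sketched in Python (the Welch estimator and segment length are our choices, not the paper's):

        import numpy as np
        from scipy.signal import welch

        def mtf_from_lines(input_line, output_line, fs=1.0):
            """Estimate the MTF from one target line and its imaged line."""
            f, psd_in = welch(input_line, fs=fs, nperseg=256)
            _, psd_out = welch(output_line, fs=fs, nperseg=256)
            mtf = np.sqrt(psd_out / psd_in)
            return f, mtf / mtf[1]      # normalise at the lowest non-DC bin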

  19. Development of Automated Tracking System with Active Cameras for Figure Skating

    Science.gov (United States)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. Video images of figure skating include irregular trajectories, various postures, rapid movements, and various costume colors, so it is difficult to determine features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, the ice rink region is first extracted from a video image by the region growing method, and then a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
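
    The centering rule above can be sketched as proportional pan/tilt on the offset of the skater's centroid from the image centre, plus a zoom ratio that holds the skater at a target apparent height; the gains and target fraction are placeholders.

        def ptz_command(cx, cy, bbox_h, img_w, img_h, target_h=0.4, k=0.05):
            """Return (pan, tilt, zoom) from a skater centroid and box height."""
            pan  = k * (cx - img_w / 2) / (img_w / 2)    # positive pans right
            tilt = k * (cy - img_h / 2) / (img_h / 2)    # positive tilts down
            zoom = 1.0 if bbox_h == 0 else (target_h * img_h) / bbox_h
            return pan, tilt, zoom                        # zoom > 1 means zoom in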

  20. CRED Fish Observations from Stereo Video Cameras on a SeaBED AUV collected around Tutuila, American Samoa in 2012

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Black and white imagery were collected using a stereo pair of underwater video cameras mounted on a SeaBED autonomous underwater vehicle (AUV) and deployed around...

  1. Light-reflection random-target method for measurement of the modulation transfer function of a digital video-camera

    Czech Academy of Sciences Publication Activity Database

    Pospíšil, Jaroslav; Jakubík, P.; Machala, L.

    2005-01-01

    Roč. 116, - (2005), s. 573-585 ISSN 0030-4026 Institutional research plan: CEZ:AV0Z10100522 Keywords: random-target measuring method * light-reflection white-noise target * digital video camera * modulation transfer function * power spectral density Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.395, year: 2005

  2. A lateral chromatic aberration correction system for ultrahigh-definition color video camera

    Science.gov (United States)

    Yamashita, Takayuki; Shimamoto, Hiroshi; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed a color camera for an 8k x 4k-pixel ultrahigh-definition video system, which is called Super Hi-Vision, with a 5x zoom lens and a signal-processing system incorporating a function for real-time lateral chromatic aberration correction. The chromatic aberration of the lens degrades color image resolution, so in order to develop a compact zoom lens consistent with ultrahigh-resolution characteristics, we incorporated a real-time correction function in the signal-processing system. The signal-processing system has eight memory tables to store the correction data at eight focal length points on the blue and red channels. When the focal length data is input from the lens control unit, the relevant correction data are interpolated from two of the eight correction data tables. The system performs geometrical conversion on both channels using this correction data. This paper shows that the correction function successfully reduces the lateral chromatic aberration to an amount small enough to ensure that the desired image resolution is achieved over the entire range of the lens in real time.
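
    The table interpolation step reduces to a linear blend between the two stored tables that bracket the current focal length; a minimal sketch, in which the focal-length range and table layout are assumptions rather than the system's actual values:

        import numpy as np

        focal_lengths = np.linspace(9.0, 45.0, 8)   # placeholder 5x zoom range, mm
        # tables[i]: (H, W, 2) per-pixel (dx, dy) shifts stored for focal_lengths[i]

        def correction_for(f, tables):
            """Interpolate a displacement table for focal length f."""
            i = np.clip(np.searchsorted(focal_lengths, f) - 1, 0, 6)
            w = (f - focal_lengths[i]) / (focal_lengths[i + 1] - focal_lengths[i])
            return (1 - w) * tables[i] + w * tables[i + 1]   # linear blend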

  3. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    Science.gov (United States)

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  4. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1995-11-01

    This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system, and associated control components destined for installation. The final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested to verify the safety interlock, which shuts down the camera and pan-and-tilt inside the tank vapor space during loss of purge pressure, and to verify that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system.

  5. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  6. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    Science.gov (United States)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock, and predicts the position where the shuttlecock falls to the ground. The badminton robot moves quickly to the position where the shuttlecock will fall and hits the shuttlecock back into the opponent's side of the court. In the game of badminton, there is a large audience, and some of them move behind a flying shuttlecock; they are a kind of background noise and make it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by the method of stereo imaging with two high-speed cameras.
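
    For a rectified, parallel-axis stereo pair, the 3D position follows from the disparity between the two images; a minimal sketch, with the focal length and baseline as placeholders rather than the robot's calibration. Differencing successive positions over the frame interval then gives the velocity used for the landing-point prediction.

        def triangulate(xl, yl, xr, f_px=1200.0, baseline_m=0.5):
            """3D position (camera frame) of a point seen at image x-coordinates
            xl (left) and xr (right), with common y-coordinate yl."""
            disparity = xl - xr                 # pixels; positive for points in front
            z = f_px * baseline_m / disparity   # depth along the optical axis
            x = xl * z / f_px                   # lateral offset
            y = yl * z / f_px                   # vertical offset
            return x, y, z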

  7. A Method for Estimating Surveillance Video Georeferences

    Directory of Open Access Journals (Sweden)

    Aleksandar Milosavljević

    2017-07-01

    Full Text Available The integration of a surveillance camera video with a three-dimensional (3D) geographic information system (GIS) requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM) of the area of interest. Once an adequate number of points are matched, Levenberg-Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., the position and orientation of the camera.
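
    The optimization step can be sketched as Levenberg-Marquardt over the reprojection error of the matched points, assuming a pinhole model with a rotation-vector parameterization; the intrinsic matrix K, the point arrays, and the zero initial guess are placeholders (a rough manual guess would normally seed the solver).

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def residuals(params, pts3d, pts2d, K):
            """Reprojection error for pose params = (rotation vector, translation)."""
            rvec, t = params[:3], params[3:]
            R = Rotation.from_rotvec(rvec).as_matrix()
            cam = R @ pts3d.T + t[:, None]          # world -> camera frame
            proj = K @ cam
            uv = (proj[:2] / proj[2]).T             # perspective division
            return (uv - pts2d).ravel()

        def estimate_pose(pts3d, pts2d, K, x0=np.zeros(6)):
            """pts3d: (N, 3) geographic points; pts2d: (N, 2) frame coordinates."""
            return least_squares(residuals, x0, method="lm",
                                 args=(pts3d, pts2d, K)).x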

  8. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e., automatically controlling the virtual...

  9. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were (1) trigger speed, (2) passive infrared vs. microwave sensor, (3) white vs. infrared flash, and (4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  10. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  11. LAMOST CCD camera-control system based on RTS2

    Science.gov (United States)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.

  12. Video auto stitching in multicamera surveillance system

    Science.gov (United States)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of stitching video automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaics in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem, and only a few selected master cameras need to be calibrated. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resample algorithm is implemented to blend images. The result of simulation demonstrates the efficiency of our method.
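
    The registration step can be sketched with OpenCV, using ORB in place of SURF (SURF sits in OpenCV's non-free contrib module): match keypoints between two overlapping views, then estimate the homography with RANSAC. The feature count and RANSAC threshold are placeholders.

        import cv2
        import numpy as np

        def estimate_homography(img_a, img_b):
            """Homography mapping img_a's plane into img_b's frame."""
            orb = cv2.ORB_create(2000)
            kp_a, des_a = orb.detectAndCompute(img_a, None)
            kp_b, des_b = orb.detectAndCompute(img_b, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
            src = np.float32([kp_a[m.queryIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
            dst = np.float32([kp_b[m.trainIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            return H   # warp img_a into img_b's frame, then blend the overlap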

  13. Low Cost Wireless Network Camera Sensors for Traffic Monitoring

    Science.gov (United States)

    2012-07-01

    Many freeways and arterials in major cities in Texas are presently equipped with video detection cameras to collect data and help in traffic/incident management. In this study, carefully controlled experiments determined the throughput and output...

  14. Video flow active control by means of adaptive shifted foveal geometries

    Science.gov (United States)

    Urdiales, Cristina; Rodriguez, Juan A.; Bandera, Antonio J.; Sandoval, Francisco

    2000-10-01

    This paper presents a control mechanism for video transmission that relies on transmitting non-uniform resolution images depending on the delay of the communication channel. These images are built in an active way to keep the areas of interest of the image at the highest resolution available. In order to shift the area of high resolution over the image and to achieve a data structure that is easy to process using conventional algorithms, a shifted-fovea multiresolution geometry of adaptive size is used. Moreover, if delays are nevertheless too high, the different areas of resolution of the image can be transmitted at different rates. A functional system has been developed for corridor surveillance with static cameras. Tests with real video images have proven that the method allows an almost constant rate of images per second as long as the channel is not collapsed.

  15. System design description for the LDUA high resolution stereoscopic video camera system (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, was designed to be used as an end effector on the LDUA to perform surveillance and inspection activities within a waste tank. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feed-through for all electrical and pneumatic utilities needed by the end effector to operate. Designed to perform up-close weld and corrosion inspection roles in UST operations, the HRSVS will support and supplement the Light Duty Utility Arm (LDUA) and provide the crucial inspection tasks needed to ascertain waste tank condition.

  16. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
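
    The supervision trick described above, synthesizing motion blur from a high-frame-rate recording, amounts to averaging a short run of consecutive sharp frames; a minimal numpy sketch, with the window span as a placeholder:

        import numpy as np

        def synthesize_blur(frames, center, span=7):
            """frames: (n, H, W, 3) uint8 array from a high-fps clip.
            Returns a synthetically blurred frame centered on frames[center];
            frames[center] itself serves as the sharp training target."""
            half = span // 2
            window = frames[center - half:center + half + 1].astype(np.float64)
            return window.mean(axis=0).astype(np.uint8)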

  17. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
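
    In the spirit of the record above, a fuzzy pan controller can be sketched with triangular membership functions over the target's horizontal offset, a three-rule base, and centroid defuzzification; the membership shapes and rule outputs are placeholders, not the flight design.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership with feet at a and c, peak at b."""
            return max(0.0, min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)))

        def fuzzy_pan(offset):      # offset in [-1, 1]: target left (-) / right (+)
            left   = tri(offset, -1.5, -1.0, 0.0)
            center = tri(offset, -0.5,  0.0, 0.5)
            right  = tri(offset,  0.0,  1.0, 1.5)
            strengths = np.array([left, center, right])
            outputs   = np.array([-10.0, 0.0, 10.0])   # pan rates, deg/s
            # centroid defuzzification over the fired rules
            return float((strengths * outputs).sum() / (strengths.sum() + 1e-9))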

  18. Linear array of photodiodes to track a human speaker for video recording

    International Nuclear Information System (INIS)

    DeTone, D; Neal, H; Lougheed, R

    2012-01-01

    Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow for viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit to using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.

  19. Linear array of photodiodes to track a human speaker for video recording

    Science.gov (United States)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow for viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit to using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
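
    A rough sketch of the noise filtering that the 4 kHz frame rate enables in the two records above: demodulate each photodiode's sample stream at the 70 Hz flash frequency and low-pass the result, which rejects slowly varying sunlight and room lighting. The exact filter the authors used is not specified here; this is synchronous (lock-in style) detection as one plausible realization, recovering the fundamental of the 50%-duty square wave up to a waveform factor.

        import numpy as np

        def led_amplitude(samples, fs=4000.0, f_led=70.0):
            """Amplitude of the 70 Hz component in one photodiode's stream;
            the pixel with the largest amplitude marks the necklace."""
            t = np.arange(len(samples)) / fs
            i = samples * np.cos(2 * np.pi * f_led * t)   # in-phase mixing
            q = samples * np.sin(2 * np.pi * f_led * t)   # quadrature mixing
            win = int(fs / f_led) * 4                     # average over ~4 LED cycles
            kernel = np.ones(win) / win
            return 2 * np.hypot(np.convolve(i, kernel, "same"),
                                np.convolve(q, kernel, "same"))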

  20. CamOn: A Real-Time Autonomous Camera Control System

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

    This demonstration presents CamOn, an autonomous camera control system for real-time 3D games. CamOn employs multiple Artificial Potential Fields (APFs), a robot motion planning technique, to control both the location and orientation of the camera. Scene geometry from the 3D environment...... contributes to the potential field that is used to determine position and movement of the camera. Composition constraints for the camera are modelled as potential fields for controlling the view target of the camera. CamOn combines the compositional benefits of constraint-based camera systems, and improves...
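
    The APF idea behind CamOn can be sketched as gradient descent on a potential that attracts the camera to a desired viewpoint and repels it from scene geometry; the weights, step size, and obstacle list below are placeholders, not CamOn's actual fields.

        import numpy as np

        def apf_step(cam, goal, obstacles, k_att=1.0, k_rep=0.5, step=0.05):
            """One camera-position update: cam, goal, obstacles are 3-vectors."""
            force = k_att * (goal - cam)                  # attraction to viewpoint
            for obs in obstacles:
                d = cam - obs
                dist = np.linalg.norm(d) + 1e-9
                force += k_rep * d / dist**3              # short-range repulsion
            return cam + step * force                     # one integration step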

  1. Video Liveness for Citizen Journalism: Attacks and Defenses

    OpenAIRE

    Rahman, Mahmudur; Azimpourkivi, Mozhgan; Topkara, Umut; Carbunar, Bogdan

    2017-01-01

    The impact of citizen journalism raises important video integrity and credibility issues. In this article, we introduce Vamos, the first user transparent video "liveness" verification solution based on video motion, that accommodates the full range of camera movements, and supports videos of arbitrary length. Vamos uses the agreement between video motion and camera movement to corroborate the video authenticity. Vamos can be integrated into any mobile video capture application without requiri...

  2. Combining local and global optimisation for virtual camera control

    OpenAIRE

    Burelli, Paolo; Yannakakis, Georgios N.; 2010 IEEE Symposium on Computational Intelligence and Games

    2010-01-01

    Controlling a virtual camera in 3D computer games is a complex task. The camera is required to react to dynamically changing environments and produce high quality visual results and smooth animations. This paper proposes an approach that combines local and global search to solve the virtual camera control problem. The automatic camera control problem is described and it is decomposed into sub-problems; then a hierarchical architecture that solves each sub-problem using the most appropriate op...

  3. MAVIS: Mobile Acquisition and VISualization - a professional tool for video recording on a mobile platform

    OpenAIRE

    Watten, Phil; Gilardi, Marco; Holroyd, Patrick; Newbury, Paul

    2015-01-01

    Professional video recording is a complex process which often requires expensive cameras and large amounts of ancillary equipment. With the advancement of mobile technologies, cameras on mobile devices have improved to the point where the quality of their output is sometimes comparable to that obtained from a professional video camera, and they are often used in professional productions. However, tools that allow professional users to access the information they need to control the technical ...

  4. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 × 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE ...). The camera operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), making it ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed mount surveillance, and it may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  5. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting...... of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast....

  6. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
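    The abstract names the OpenCV library; a hedged reconstruction of the calibration and undistortion pipeline it describes could look as follows (the board size, file paths, and the rational-model flag are our assumptions, not the authors' published setup):

```python
import glob
import cv2
import numpy as np

CORNERS = (9, 6)                                   # inner chessboard corners
objp = np.zeros((CORNERS[0] * CORNERS[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:CORNERS[0], 0:CORNERS[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("gopro_frames/*.jpg"):       # stills or video frames
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    ok, corners = cv2.findChessboardCorners(gray, CORNERS)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# The short focal length / wide-angle lens usually needs a higher-order
# distortion model; a rational model is one common choice for such lenses.
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_pts, img_pts, size, None, None,
    flags=cv2.CALIB_RATIONAL_MODEL)
undistorted = cv2.undistort(cv2.imread("gopro_frames/test.jpg"), K, dist)
```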

  7. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution.

  8. Microprocessor-controlled wide-range streak camera

    Science.gov (United States)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.
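    The abstract states that automation software can drive the camera over HTTP; a hypothetical sketch of such automation follows. The endpoint paths, parameter names, and XML schema below are invented for illustration; the real camera's API is not published here.

```python
import requests

CAMERA = "http://streak-cam.example.net"   # hypothetical camera address

def get_config() -> str:
    """Fetch the camera's XML configuration document."""
    r = requests.get(f"{CAMERA}/config.xml", timeout=5)
    r.raise_for_status()
    return r.text

def set_sweep(full_sweep_s: float) -> None:
    """Set the full-sweep time (the abstract gives 15 ns to 500 us)."""
    assert 15e-9 <= full_sweep_s <= 500e-6, "outside the documented range"
    body = f"<config><sweep seconds='{full_sweep_s}'/></config>"
    requests.post(f"{CAMERA}/config.xml", data=body,
                  timeout=5).raise_for_status()
```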

  9. Microprocessor-controlled, wide-range streak camera

    International Nuclear Information System (INIS)

    Amy E. Lewis; Craig Hollabaugh

    2006-01-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  10. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.
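    The paper derives a closed-form quantization parameter (QP) from the base layer's residual and quantization error; that formula is not reproduced here. The generic controller below only illustrates the stated goal, steering each enhancement-layer frame's quality toward a constant target, and all names and gains are our assumptions.

```python
def next_qp(qp_prev: int, psnr_measured: float, psnr_target: float,
            gain: float = 0.5, qp_min: int = 10, qp_max: int = 51) -> int:
    """Pick the next frame's QP so measured quality converges to target.
    Lower QP -> finer quantization -> higher PSNR, so we step opposite
    to the quality error."""
    qp = qp_prev - gain * (psnr_target - psnr_measured)
    return int(max(qp_min, min(qp_max, round(qp))))
```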

  11. Authentication Approaches for Standoff Video Surveillance

    International Nuclear Information System (INIS)

    Baldwin, G.; Sweatt, W.; Thomas, M.

    2015-01-01

    Video surveillance for international nuclear safeguards applications requires authentication, which confirms to an inspector reviewing the surveillance images that both the source and the integrity of those images can be trusted. To date, all such authentication approaches originate at the camera. Camera authentication would not suffice for a ''standoff video'' application, where the surveillance camera views an image piped to it from a distant objective lens. Standoff video might be desired in situations where it does not make sense to expose sensitive and costly camera electronics to contamination, radiation, water immersion, or other adverse environments typical of hot cells, reprocessing facilities, and within spent fuel pools, for example. In this paper, we offer optical architectures that introduce a standoff distance of several metres between the scene and camera. Several schemes enable one to authenticate not only that the extended optical path is secure, but also that the scene is being viewed live. They employ optical components with remotely-operated spectral, temporal, directional, and intensity properties that are under the control of the inspector. If permitted by the facility operator, illuminators, reflectors and polarizers placed in the scene offer further possibilities. Any tampering that would insert an alternative image source for the camera, although undetectable with conventional cryptographic authentication of digital camera data, is easily exposed using the approaches we describe. Sandia National Laboratories is a multi-programme laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. Support to Sandia National Laboratories provided by the NNSA Next Generation Safeguards Initiative is gratefully acknowledged. SAND2014-3196 A. (author)

  12. Guerrilla Video: A New Protocol for Producing Classroom Video

    Science.gov (United States)

    Fadde, Peter; Rich, Peter

    2010-01-01

    Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…

  13. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    International Nuclear Information System (INIS)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-01-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination
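    A sketch of the frame-to-frame tracking step the abstract describes (our assumed implementation using OpenCV; the authors' code is not public): track surface points across frames, estimate the essential matrix, and recover the relative rotation and translation.

```python
import cv2
import numpy as np

def relative_pose(prev_gray, cur_gray, K):
    """Frame-to-frame camera motion from tracked points and
    two-view epipolar geometry. K: 3x3 camera intrinsics."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
    good0 = p0[status.ravel() == 1]
    good1 = p1[status.ravel() == 1]
    E, inliers = cv2.findEssentialMat(good0, good1, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # t is unit-norm: absolute scale is unobservable from images alone.
    _, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)
    return R, t
```

    Chaining the recovered (R, t) across the sequence yields the 3D camera path; triangulating the tracked points against that path recovers the surrounding structure, as in the abstract.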

  14. A television/still camera with common optical system for reactor inspection

    International Nuclear Information System (INIS)

    Hughes, G.; McBane, P.

    1976-01-01

    One of the problems of reactor inspection is to obtain permanent high quality records. Video recordings provide a record of poor quality but known content. Still cameras can be used but the frame content is not predictable. Efforts have been made to combine T.V. viewing to align a still camera but a simple combination does not provide the same frame size. The necessity to preset the still camera controls severely restricts the flexibility of operation. A camera has, therefore, been designed which allows a search operation using the T.V. system. When an anomaly is found the still camera controls can be remotely set, an exact record obtained and the search operation continued without removal from the reactor. An application of this camera in the environment of the blanket gas region above the sodium region in PFR at 150 °C is described.

  15. Video Toroid Cavity Imager

    Energy Technology Data Exchange (ETDEWEB)

    Gerald, Rex E. II; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  16. Patterned Video Sensors For Low Vision

    Science.gov (United States)

    Juday, Richard D.

    1996-01-01

    Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns to compensate partly for some visual defects are proposed. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.

  17. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  18. Human recognition in a video network

    Science.gov (United States)

    Bhanu, Bir

    2009-10-01

    Video networks are an emerging interdisciplinary field with significant and exciting scientific and technological challenges. They hold great promise for solving many real-world problems and enabling a broad range of applications, including smart homes, video surveillance, environment and traffic monitoring, elderly care, intelligent environments, and entertainment in public and private spaces. This paper provides an overview of the design of a wireless video network as an experimental environment, covering camera selection, hand-off and control, and anomaly detection. It addresses challenging questions for individual identification using gait and face at a distance and presents new techniques and their comparison for robust identification.

  19. Microprocessor-controlled, wide-range streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Amy E. Lewis, Craig Hollabaugh

    2006-09-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera’s user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  20. High-performance dual-speed CCD camera system for scientific imaging

    Science.gov (United States)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned with a 'camera head' containing the CCD and its support circuitry and a camera controller, which provided analog to digital conversion, timing, control, computer interfacing, and power. A new, unitized high performance scientific CCD camera with dual speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12 bit digital gray scale, high performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remote controlled submersible vehicle. The oceanographic version achieves 16 bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real time fiber optic link.

  1. Automatic video segmentation employing object/camera modeling techniques

    NARCIS (Netherlands)

    Farin, D.S.

    2005-01-01

    Practically established video compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not

  2. Hardware accelerator design for tracking in smart camera

    Science.gov (United States)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis. For video analysis, smart cameras need to detect interesting moving objects, track such objects from frame to frame, and analyze object tracks in real time. Therefore, real-time tracking is prominent in smart cameras. A software implementation of a tracking algorithm on a general purpose processor (like a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250 × 200 resolution video in grayscale.

  3. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    Directory of Open Access Journals (Sweden)

    Michiaki Inoue

    2017-10-01

    Full Text Available This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time.
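    A back-of-envelope sketch of the synchronization idea (our reading of the abstract, not the authors' controller): fire the shutter at the phase where the sinusoidal mirror's angular velocity cancels the target's apparent angular velocity, so the image stays fixed during the exposure.

```python
import numpy as np

F_MIRROR = 750.0   # resonant mirror frequency, Hz (from the abstract)

def trigger_phase(theta_amp_rad: float, target_rad_s: float) -> float:
    """Phase (rad) at which mirror velocity equals target velocity.
    Mirror angle:    theta(t)  = theta_amp * sin(2*pi*f*t)
    Mirror velocity: theta'(t) = theta_amp * 2*pi*f * cos(2*pi*f*t)"""
    peak = theta_amp_rad * 2 * np.pi * F_MIRROR
    if abs(target_rad_s) > peak:
        raise ValueError("target too fast for this mirror amplitude")
    return float(np.arccos(target_rad_s / peak))
```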

  4. Playing Action Video Games Improves Visuomotor Control.

    Science.gov (United States)

    Li, Li; Chen, Rongrong; Chen, Jing

    2016-08-01

    Can playing action video games improve visuomotor control? If so, can these games be used in training people to perform daily visuomotor-control tasks, such as driving? We found that action gamers have better lane-keeping and visuomotor-control skills than do non-action gamers. We then trained non-action gamers with action or nonaction video games. After they played a driving or first-person-shooter video game for 5 or 10 hr, their visuomotor control improved significantly. In contrast, non-action gamers showed no such improvement after they played a nonaction video game. Our model-driven analysis revealed that although different action video games have different effects on the sensorimotor system underlying visuomotor control, action gaming in general improves the responsiveness of the sensorimotor system to input error signals. The findings support a causal link between action gaming (for as little as 5 hr) and enhancement in visuomotor control, and suggest that action video games can be beneficial training tools for driving. © The Author(s) 2016.

  5. Methods and Algorithms for Detecting Objects in Video Files

    Directory of Open Access Journals (Sweden)

    Nguyen The Cuong

    2018-01-01

    Full Text Available Video files are files that store motion pictures and sound, as they occur in real life. In today's world, the need for automated processing of information in video files is increasing. Automated processing of information has a wide range of applications, including office/home surveillance cameras, traffic control, sports applications, remote object detection, and others. In particular, the detection and tracking of object movement in video files plays an important role. This article describes methods for detecting objects in video files. Today, this problem in the field of computer vision is being studied worldwide.
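    A minimal example of one classical detection method of the kind such surveys cover (background subtraction); the video file name is a placeholder, and this is offered only as illustration, not as the article's own method.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")            # any video file
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)               # foreground mask
    mask = cv2.medianBlur(mask, 5)               # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) > 500]        # drop tiny blobs
    # hand `boxes` to a tracker or alarm logic here
cap.release()
```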

  6. Advanced real-time manipulation of video streams

    CERN Document Server

    Herling, Jan

    2014-01-01

    Diminished Reality is a new fascinating technology that removes real-world content from live video streams. This sensational live video manipulation actually removes real objects and generates a coherent video stream in real-time. Viewers cannot detect modified content. Existing approaches are restricted to moving objects and static or almost static cameras and do not allow real-time manipulation of video content. Jan Herling presents a new and innovative approach for real-time object removal with arbitrary camera movements.

  7. An integrated port camera and display system for laparoscopy.

    Science.gov (United States)

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  8. EDICAM fast video diagnostic installation on the COMPASS tokamak

    International Nuclear Information System (INIS)

    Szappanos, A.; Berta, M.; Hron, M.; Panek, R.; Stoeckel, J.; Tulipan, S.; Veres, G.; Weinzettl, V.; Zoletnik, S.

    2010-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed by the Hungarian Association and has been installed on the COMPASS tokamak at the Institute of Plasma Physics AS CR in Prague, during February 2009. The standalone system contains a data acquisition PC and a prototype sensor module of EDICAM. An appropriate optical system has been designed and adjusted to the local requirements, and a mechanical holder keeps the camera out of the magnetic field. The fast camera contains a monochrome CMOS sensor with advanced control features and spectral sensitivity in the visible range. A special web-based control interface has been implemented using the Java Spring framework to provide the control features in a graphical user environment. The Java Native Interface (JNI) is used to reach the driver functions and to collect the data stored by direct memory access (DMA). Using a built-in real-time streaming server, one can see the live video from the camera through any web browser on the intranet. The live video is distributed in Motion JPEG format using the real-time streaming protocol (RTSP), and a Java applet has been written to show the video on the client side. The control system contains basic image processing features, and the 3D wireframe of the tokamak can be projected onto selected frames. A MatLab interface is also presented, with advanced post-processing and analysis features to make the raw data available to high-level computing programs. In this contribution, all the concepts of the EDICAM control center and the functions of the distinct software modules are described.

  9. Automatic mashup generation of multiple-camera videos

    NARCIS (Netherlands)

    Shrestha, P.

    2009-01-01

    The amount of user generated video content is growing enormously with the increase in availability and affordability of technologies for video capturing (e.g. camcorders, mobile-phones), storing (e.g. magnetic and optical devices, online storage services), and sharing (e.g. broadband internet,

  10. Control system for several rotating mirror camera synchronization operation

    Science.gov (United States)

    Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji

    1997-05-01

    This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (comprising the synchronization, precise measurement, and time delay parts), the shutter control unit, the motor driving unit, and the high voltage pulse generator unit. The control system has been used to synchronize the operation of GSI cameras (driven by a motor) and FJZ-250 rotating mirror cameras (driven by a gas-driven turbine). We have obtained films of the same object from different directions at the same or different speeds.

  11. Real-Time FPGA-Based Object Tracker with Automatic Pan-Tilt Features for Smart Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-05-01

    Full Text Available The design of smart video surveillance systems is an active research field among the computer vision community because of their ability to perform automatic scene analysis by selecting and tracking the objects of interest. In this paper, we present the design and implementation of an FPGA-based standalone working prototype system for real-time tracking of an object of interest in live video streams for such systems. In addition to real-time tracking of the object of interest, the implemented system is also capable of providing purposive automatic camera movement (pan-tilt) in the direction determined by movement of the tracked object. The complete system, including camera interface, DDR2 external memory interface controller, designed object tracking VLSI architecture, camera movement controller and display interface, has been implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA board. Our proposed, designed and implemented system robustly tracks the target object present in the scene in real time for standard PAL (720 × 576) resolution color video and automatically controls camera movement in the direction determined by the movement of the tracked object.
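    A software rendering of the pan-tilt decision the system makes in hardware (the dead-zone threshold and function names are our assumptions): nudge the camera in the direction of the tracked object's offset from the frame centre.

```python
FRAME_W, FRAME_H = 720, 576                      # PAL, as in the paper

def pan_tilt_command(cx: int, cy: int, dead_zone: int = 40):
    """cx, cy: tracked-object centroid. Returns (pan, tilt) in
    {-1, 0, +1}: -1 = left/down, +1 = right/up, 0 = hold position."""
    dx, dy = cx - FRAME_W // 2, cy - FRAME_H // 2
    pan = 0 if abs(dx) < dead_zone else (1 if dx > 0 else -1)
    # image y grows downward, so a positive dy means tilt down
    tilt = 0 if abs(dy) < dead_zone else (-1 if dy > 0 else 1)
    return pan, tilt
```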

  12. Automated safety control by video cameras

    NARCIS (Netherlands)

    Lefter, I.; Rothkrantz, L.; Somhorst, M.

    2012-01-01

    At this moment many surveillance systems are installed in public domains to control the safety of people and properties. They are constantly watched by human operators who are easily overloaded. To support the human operators, a surveillance system model is designed that detects suspicious behaviour

  13. Video digitizer (real time-frame grabber) with region of interest suitable for quantitative data analysis used on the infrared and H alpha cameras installed on the DIII-D experiment

    International Nuclear Information System (INIS)

    Ferguson, S.W.; Kevan, D.K.; Hill, D.N.; Allen, S.L.

    1987-01-01

    This paper describes a CAMAC based video digitizer with region of interest (ROI) capability that was designed for use with the infrared and H alpha cameras installed by Lawrence Livermore Laboratory on the DIII-D experiment at G.A. Technologies in San Diego, California. The video digitizer uses a custom built CAMAC video synchronizer module to clock data into a CAMAC transient recorder on a line-by-line basis starting at the beginning of a field. The number of fields that are recorded is limited only by the available transient recorder memory. In order to conserve memory, the CAMAC video synchronizer module provides for the alternative selection of a specific region of interest in each successive field to be recorded. Memory conservation can be optimized by specifying lines in the field, start time, stop time, and the number of data samples per line. This video frame grabber has proved versatile for capturing video in such diverse applications as recording video fields from a video tape recorder played in slow motion or recording video fields in real time during a DIII-D shot. In other cases, one or more lines of video are recorded per frame to give a cross sectional slice of the plasma. Since all the data in the digitizer memory is synchronized to video fields and lines, the data can be read directly into the control computer in the proper matrix format to facilitate rapid processing, display, and permanent storage

  14. IndigoVision IP video keeps watch over remote gas facilities in Amazon rainforest

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2010-07-15

    In Brazil, IndigoVision's complete IP video security technology is being used to remotely monitor automated gas facilities in the Amazon rainforest. Twelve compounds containing millions of dollars of process automation, telemetry, and telecom equipment are spread across many thousands of miles of forest and centrally monitored in Rio de Janeiro using Control Center, the company's Security Management software. The security surveillance project uses a hybrid IP network comprising satellite, fibre optic, and wireless links. In addition to advanced compression technology and bandwidth tuning tools, the IP video system uses Activity Controlled Framerate (ACF), which controls the frame rate of the camera video stream based on the amount of motion in a scene. In the absence of activity, the video is streamed at a minimum framerate, but the moment activity is detected the framerate jumps to the configured maximum. This significantly reduces the amount of bandwidth needed. At each remote facility, fixed analog cameras are connected to transmitter nodules that convert the feed to high-quality digital video for transmission over the IP network. The system also integrates alarms with video surveillance. PIR intruder detectors are connected to the system via digital inputs on the transmitters. Advanced alarm-handling features in the Control Center software process the PIR detector alarms and alert operators to potential intrusions. This improves operator efficiency and incident response. 1 fig.

  15. Two-Stage Classification Approach for Human Detection in Camera Video in Bulk Ports

    Directory of Open Access Journals (Sweden)

    Mi Chao

    2015-09-01

    Full Text Available With the development of automation in ports, video surveillance systems with automated human detection have begun to be applied in open-air handling operation areas for safety and security. The accuracy of traditional human detection based on the video camera is not high enough to meet the requirements of operation surveillance. One of the key reasons is that Histograms of Oriented Gradients (HOG) features of the human body differ greatly between front-and-back standing (F&B) and side standing (Side) postures. Therefore, when HOG features are extracted directly from samples of different human postures, the final training of the classifier gains only a few specific features that contribute to classification, which is insufficient to support effective classification. This paper proposes a two-stage classification method to improve the accuracy of human detection. In the first stage, a preprocessing classification mainly divides images into possible F&B human bodies and non-F&B bodies; these are then passed to the second-stage classification, which distinguishes side-standing humans from non-humans. Experimental results in Tianjin port show that the two-stage classifier clearly improves the classification accuracy of human detection.
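    A sketch of the two-stage idea (the window size, training data, and classifier choice are assumptions; the paper's exact pipeline is not reproduced): stage 1 separates plausible F&B bodies from everything else, and stage 2 separates side-standing bodies from non-humans.

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC

hog = cv2.HOGDescriptor()                  # default 64x128 person window

def features(window_bgr: np.ndarray) -> np.ndarray:
    resized = cv2.resize(window_bgr, (64, 128))
    return hog.compute(resized).ravel()

stage1 = LinearSVC()   # F&B human vs. everything else
stage2 = LinearSVC()   # side-standing human vs. non-human
# stage1.fit(X1, y1); stage2.fit(X2, y2)   # trained on labelled port footage

def classify(window_bgr: np.ndarray) -> str:
    f = features(window_bgr).reshape(1, -1)
    if stage1.predict(f)[0] == 1:
        return "human (front/back)"
    return "human (side)" if stage2.predict(f)[0] == 1 else "non-human"
```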

  16. A cooperative control algorithm for camera based observational systems.

    Energy Technology Data Exchange (ETDEWEB)

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera based observational system. Specifically, we present a receding horizon control where we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
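    A greatly simplified, one-step stand-in for the paper's receding-horizon mixed-integer program (we substitute a Hungarian assignment for the MILP; all names and weights are ours): assign each camera to the object with the lowest observation cost, where objects with large Kalman covariance receive a bonus, which is what pushes the system to periodically revisit outliers.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign(slew_cost: np.ndarray, trace_P: np.ndarray,
           uncertainty_weight: float = 1.0):
    """slew_cost[i, j]: cost for camera i to point at object j.
    trace_P[j]: trace of object j's Kalman error covariance.
    Returns a list of (camera, object) pairs."""
    # Large covariance lowers the effective cost, attracting a camera.
    cost = slew_cost - uncertainty_weight * trace_P[None, :]
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))
```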

  17. A preliminary study to estimate contact rates between free-roaming domestic dogs using novel miniature cameras.

    Directory of Open Access Journals (Sweden)

    Courtenay B Bombara

    Full Text Available Information on contacts between individuals within a population is crucial to inform disease control strategies, via parameterisation of disease spread models. In this study we investigated the use of dog-borne video cameras, in conjunction with global positioning system (GPS) loggers, to both characterise dog-to-dog contacts and estimate contact rates. We customized miniaturised video cameras, enclosed within 3D-printed plastic cases, and attached these to nylon dog collars. Using two 3400 mAh NCR lithium Li-ion batteries, cameras could record a maximum of 22 hr of continuous video footage. Together with a GPS logger, collars were attached to six free-roaming domestic dogs (FRDDs) in two remote Indigenous communities in northern Australia. We recorded a total of 97 hr of video footage, ranging from 4.5 to 22 hr (mean 19.1) per dog, and observed a wide range of social behaviours. The majority (69%) of all observed interactions between community dogs involved direct physical contact. Direct contact behaviours included sniffing, licking, mouthing and play fighting. No contacts appeared to be aggressive; however, multiple teeth-baring incidents were observed during play fights. We identified a total of 153 contacts, equating to 8 to 147 contacts per dog per 24 hr, from the videos of the five dogs with camera data that could be analysed. These contacts were attributed to 42 unique dogs (range 1 to 19 per video) which could be identified (based on colour patterns and markings). Most dog activity was observed in urban (houses and roads) environments, but contacts were more common in bushland and beach environments. A variety of foraging behaviours were observed, including scavenging through rubbish and rolling on dead animal carcasses. Identified food consumed included chicken, raw bones, animal carcasses, rubbish, grass and cheese. For characterising contacts between FRDDs, several benefits of analysing videos compared to GPS fixes alone were identified in
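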

  18. Specialized video systems for use in waste tanks

    International Nuclear Information System (INIS)

    Anderson, E.K.; Robinson, C.W.; Heckendorn, F.M.

    1992-01-01

    The Robotics Development Group at the Savannah River Site is developing a remote video system for use in underground radioactive waste storage tanks at the Savannah River Site, as a portion of its site support role. Viewing of the tank interiors and their associated annular spaces is an extremely valuable tool in assessing their condition and controlling their operation. Several specialized video systems have been built that provide remote viewing and lighting, including remotely controlled tank entry and exit. Positioning all control components away from the facility prevents the potential for personnel exposure to radiation and contamination. The SRS waste tanks are nominally 4.5 million liter (1.3 million gallon) underground tanks used to store liquid high level radioactive waste generated by the site, awaiting final disposal. The typical waste tank (Figure 1) is of flattened shape (i.e. wider than high). The tanks sit in a dry secondary containment pan. The annular space between the tank wall and the secondary containment wall is continuously monitored for liquid intrusion and periodically inspected and documented. The latter was historically accomplished with remote still photography. Each video system includes a camera, zoom lens, camera positioner, and vertical deployment mechanism. The assembly enters through a 125 mm (5 in) diameter opening. A special attribute of the systems is that they never become larger than the entry hole, even during camera aiming, and can therefore always be retrieved. The latest systems are easily deployable to a remote setup point and can extend down vertically 15 meters (50 ft). The systems are expected to be a valuable asset to tank operations.

  19. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.

  20. A positioning system for forest diseases and pests based on GIS and PTZ camera

    International Nuclear Information System (INIS)

    Wang, Z B; Zhao, F F; Wang, C B; Wang, L L

    2014-01-01

    Forest diseases and pests cause enormous economic losses and ecological damage every year in China. To prevent and control forest diseases and pests, the key is to obtain accurate information in a timely manner. In order to improve the monitoring coverage rate and economize on manpower, a cooperative investigation model for forest diseases and pests is put forward. It is composed of a video positioning system and manual reconnaissance with mobile GIS embedded in a PDA. The video system is used to scan the disaster area and is particularly effective where trees are withered. Forest disease prevention and control workers can check the disaster area with the PDA system. To support this investigation model, we developed a positioning algorithm and a positioning system. The positioning algorithm is based on a DEM and a PTZ camera. Moreover, the algorithm accuracy is validated. The software consists of a 3D GIS subsystem, a 2D GIS subsystem, a video control subsystem and a disaster positioning subsystem. The 3D GIS subsystem makes positioning visual and easy to operate in practice. The 2D GIS subsystem can output disaster thematic maps. The video control subsystem can change the Pan/Tilt/Zoom of a digital camera remotely, to focus on the suspected area. The disaster positioning subsystem implements the positioning algorithm. It is proved that the positioning system can observe forest diseases and pests in practical application for forest departments.
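    A sketch of the PTZ-to-ground positioning idea (our reconstruction under simple assumptions: grid-aligned coordinates, pan measured from north, tilt positive downward; the paper's exact algorithm is not reproduced): march a ray defined by the camera's pan/tilt through the DEM until it first drops below the terrain.

```python
import numpy as np

def locate(cam_xyz, pan_rad, tilt_rad, dem, cell_m, step_m=5.0, max_m=5000.0):
    """dem[r, c]: ground elevation; cam_xyz in the same grid frame.
    Returns the (x, y, z) ground hit or None if the ray misses."""
    d = np.array([np.cos(tilt_rad) * np.sin(pan_rad),   # east
                  np.cos(tilt_rad) * np.cos(pan_rad),   # north
                  -np.sin(tilt_rad)])                   # down for tilt > 0
    p = np.asarray(cam_xyz, float)
    for _ in range(int(max_m / step_m)):
        p = p + step_m * d
        r, c = int(p[1] / cell_m), int(p[0] / cell_m)
        if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
            return None                                 # left DEM coverage
        if p[2] <= dem[r, c]:
            return tuple(p)                             # first terrain contact
    return None
```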

  1. State of the art in video system performance

    Science.gov (United States)

    Lewis, Michael J.

    1990-01-01

    The closed circuit television (CCTV) system onboard the Space Shuttle has the following components: a camera, a video signal switching and routing unit (VSU), and a Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems is shown graphically against users' requirements.

  2. Sending Safety Video over WiMAX in Vehicle Communications

    Directory of Open Access Journals (Sweden)

    Jun Steed Huang

    2013-10-01

    Full Text Available This paper reports on the design of an OPNET simulation platform to test the performance of sending real-time safety video over VANET (Vehicular Adhoc NETwork using the WiMAX technology. To provide a more realistic environment for streaming real-time video, a video model was created based on the study of video traffic traces captured from a realistic vehicular camera, and different design considerations were taken into account. A practical controller over real-time streaming protocol is implemented to control data traffic congestion for future road safety development. Our driving video model was then integrated with the WiMAX OPNET model along with a mobility model based on real road maps. Using this simulation platform, different mobility cases have been studied and the performance evaluated in terms of end-to-end delay, jitter and visual experience.

  3. Video Enhancement and Dynamic Range Control of HDR Sequences for Automotive Applications

    Directory of Open Access Journals (Sweden)

    Giovanni Ramponi

    2007-01-01

    Full Text Available CMOS video cameras with high dynamic range (HDR) output are particularly suitable for driving assistance applications, where lighting conditions can strongly vary, going from direct sunlight to dark areas in tunnels. However, common visualization devices can only handle a low dynamic range, and thus a dynamic range reduction is needed. Many algorithms have been proposed in the literature to reduce the dynamic range of still pictures. However, extending the available methods to video is not straightforward, due to the peculiar nature of video data. We propose an algorithm for both reducing the dynamic range of video sequences and enhancing their appearance, thus improving visual quality and reducing temporal artifacts. We also provide an optimized version of our algorithm for a viable hardware implementation on an FPGA. The feasibility of this implementation is demonstrated by means of a case study.
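    An illustrative operator only (the article proposes a more elaborate one): a global log tone map whose key parameter is smoothed over time, which is the standard way to avoid the temporal flicker the authors target. The class and parameter names are ours.

```python
import numpy as np

class VideoToneMapper:
    def __init__(self, alpha: float = 0.9):
        self.alpha = alpha          # temporal smoothing factor
        self.lmax = None            # running estimate of the scene peak

    def map(self, hdr: np.ndarray) -> np.ndarray:
        """hdr: float array of linear radiance. Returns an 8-bit frame."""
        peak = float(hdr.max()) + 1e-6
        # Smooth the peak across frames so the mapping does not flicker.
        self.lmax = peak if self.lmax is None else \
            self.alpha * self.lmax + (1 - self.alpha) * peak
        ldr = np.log1p(hdr) / np.log1p(self.lmax)   # compress to [0, 1]
        return (np.clip(ldr, 0, 1) * 255).astype(np.uint8)
```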

  4. Dynamic Artificial Potential Fields for Autonomous Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

    the implementation and evaluation of Artificial Potential Fields for automatic camera placement. We first describe the recasting of the frame composition problem as a solution for two particles suspended in an Artificial Potential Field. We demonstrate the application of this technique to control both camera...

  5. Head-camera video recordings of trauma core competency procedures can evaluate surgical resident's technical performance as well as colocated evaluators.

    Science.gov (United States)

    Mackenzie, Colin F; Pasley, Jason; Garofalo, Evan; Shackelford, Stacy; Chen, Hegang; Longinaker, Nyaradzo; Granite, Guinevere; Pugh, Kristy; Hagegeorge, George; Tisherman, Samuel A

    2017-07-01

    Unbiased evaluation of trauma core competency procedures is necessary to determine if residency and predeployment training courses are useful. We tested whether a previously validated individual procedure score (IPS) for individual procedure vascular exposure and fasciotomy (FAS) performance skills could discriminate training status by comparing IPS of evaluators colocated with surgeons to blind video evaluations. Performance of axillary artery (AA), brachial artery (BA), and femoral artery (FA) vascular exposures and lower extremity FAS on fresh cadavers by 40 PGY-2 to PGY-6 residents was video-recorded from head-mounted cameras. Two colocated trained evaluators assessed IPS before and after training. One surgeon in each pretraining tertile of IPS for each procedure was randomly identified for blind video review. The same 12 surgeons were video-recorded repeating the procedures less than 4 weeks after training. Five evaluators independently reviewed all 96 randomly arranged deidentified videos. Inter-rater reliability/consistency, intraclass correlation coefficients were compared by colocated versus video review of IPS, and errors. Study methodology and bias were judged by Medical Education Research Study Quality Instrument and the Quality Assessment of Diagnostic Accuracy Studies criteria. There were no differences (p ≥ 0.5) in IPS for AA, FA, FAS, whether evaluators were colocated or reviewed video recordings. Evaluator consistency was 0.29 (BA) - 0.77 (FA). Video and colocated evaluators were in total agreement (p = 1.0) for error recognition. Intraclass correlation coefficient was 0.73 to 0.92, dependent on procedure. Correlations video versus colocated evaluations were 0.5 to 0.9. Except for BA, blinded video evaluators discriminated (p competency. Prognostic study, level II.

  6. Observation of the dynamic movement of fragmentations by high-speed camera and high-speed video

    Science.gov (United States)

    Suk, Chul-Gi; Ogata, Yuji; Wada, Yuji; Katsuyama, Kunihisa

    1995-05-01

    The experiments of blastings using mortar concrete blocks and model concrete columns were carried out in order to obtain technical information on fragmentation caused by blasting demolition. The dimensions of the mortar concrete blocks were 1,000 × 1,000 × 1,000 mm. Six kinds of experimental blastings were carried out using the mortar concrete blocks. In these experiments, precision detonators and No. 6 electric detonators with 10 cm detonating fuse were used, and the control of fragmentation was discussed. The results made it clear that the flying distance of fragments can be controlled using a precise blasting system. Model reinforced concrete columns typical of apartment houses in Japan were used in the experiments. The dimension of each concrete test column was 800 × 800 × 2400 mm, buried 400 mm in the ground. The specified design strength of the concrete was 210 kgf/cm2. These columns were exploded by blasting with an internal loading of dynamite. The fragmentations were observed by two kinds of high-speed camera at 500 and 2000 FPS and a high-speed video camera at 400 FPS. As one of the results, the velocity of fragments blasted with 330 g of explosive at a minimum resisting length of 0.32 m was measured at about 40 m/s.

  7. Specialized video systems for use in underground storage tanks

    International Nuclear Information System (INIS)

    Heckendorn, F.M.; Robinson, C.W.; Anderson, E.K.; Pardini, A.F.

    1994-01-01

    The Robotics Development Groups at the Savannah River Site and the Hanford site have developed remote video and photography systems for deployment in underground radioactive waste storage tanks at Department of Energy (DOE) sites as a part of the Office of Technology Development (OTD) program within DOE. Figure 1 shows the remote video/photography systems in a typical underground storage tank environment. Viewing and documenting the tank interiors and their associated annular spaces is an extremely valuable tool in characterizing their condition and contents and in controlling their remediation. Several specialized video/photography systems and robotic End Effectors have been fabricated that provide remote viewing and lighting. All are remotely deployable into and from the tank, and all viewing functions are remotely operated. Positioning all control components away from the facility prevents the potential for personnel exposure to radiation and contamination. Overview video systems, both monaural and stereo versions, include a camera, zoom lens, camera positioner, vertical deployment system, and positional feedback. Each independent video package can be inserted through a 100 mm (4 in.) diameter opening. A special attribute of these packages is their design to never get larger than the entry hole during operation and to be fully retrievable. The End Effector systems will be deployed on the large robotic Light Duty Utility Arm (LDUA) being developed by other portions of the OTD-DOE programs. The systems implement a multi-functional ''over the coax'' design that uses a single coaxial cable for all data and control signals over the more than 900 foot cable (or fiber optic) link

  8. Distributed Real-Time Embedded Video Processing

    National Research Council Canada - National Science Library

    Lv, Tiehan

    2004-01-01

    .... A deployable multi-camera video system must perform distributed computation, including computation near the camera as well as remote computations, in order to meet performance and power requirements...

  9. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of the quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in the radiation density of the exposure, maintaining the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index so as to maintain its detectability and cause proper centering of the radiation camera image.

  10. Multimodal Semantics Extraction from User-Generated Videos

    Directory of Open Access Journals (Sweden)

    Francesco Cricri

    2012-01-01

    Full Text Available User-generated video content has grown tremendously fast, to the point of outpacing professional content creation. In this work we develop methods that analyze contextual information of multiple user-generated videos in order to obtain semantic information about public happenings (e.g., sport and live music events) being recorded in these videos. One of the key contributions of this work is the joint utilization of different data modalities, including those captured by auxiliary sensors during the video recording performed by each user. In particular, we analyze GPS data, magnetometer data, accelerometer data, and video- and audio-content data. We use these data modalities to infer information about the event being recorded, in terms of layout (e.g., stadium), genre, indoor versus outdoor scene, and the main area of interest of the event. Furthermore, we propose a method that automatically identifies the optimal set of cameras to be used in a multicamera video production. Finally, we detect the camera users who fall within the field of view of other cameras recording at the same public happening. We show that the proposed multimodal analysis methods perform well on various recordings obtained at real sport events and live music performances.

  11. Video content analysis on body-worn cameras for retrospective investigation

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Haar, F.B. ter; Eendebak, P.T.; Hollander, R.J.M. den; Burghouts, G.J.; Wijn, R.; Broek, S.P. van den; Rest, J.H.C. van

    2015-01-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications

  12. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    Science.gov (United States)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is especially true for natural scenes that include areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range in real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware implementation of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
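
    The exposure-bracketing pipeline described above can be prototyped in software. Below is a minimal sketch, assuming a linear sensor response (the FPGA design uses a hardware variant of Debevec's method with a recovered response curve): it fuses bracketed frames with the classic tent weighting and applies a simple global tone map. All names and parameters are illustrative.

    ```python
    import numpy as np

    def fuse_hdr(frames, exposures):
        """Fuse bracketed LDR frames (uint8) into a radiance map.

        Assumes a linear sensor response; a real pipeline first recovers
        the camera response curve (Debevec & Malik style).
        """
        acc = np.zeros(frames[0].shape, dtype=np.float64)
        wsum = np.zeros_like(acc)
        for z, t in zip(frames, exposures):
            z = z.astype(np.float64)
            w = np.minimum(z, 255.0 - z) + 1e-6   # tent weight: trust mid-tones
            acc += w * (z / t)                    # exposure-normalized radiance
            wsum += w
        return acc / wsum

    def tone_map(hdr):
        """Simple global (Reinhard-style) operator for an LDR display."""
        l = hdr / (1.0 + hdr)
        return (255.0 * l / l.max()).astype(np.uint8)
    ```

    The fusion is purely per-pixel, which is what makes it amenable to the stream-oriented FPGA implementation the paper describes.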

  13. Taking it all in: special camera films in 3-D

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2006-07-15

    Details of a 360-degree digital camera designed by Immersive Media Telemmersion were presented. The camera has been employed extensively in the United States for homeland security and intelligence-gathering purposes. In Canada, the cameras are now being used by the oil and gas industry. The camera has 11 lenses pointing in all directions and generates high resolution movies that can be analyzed frame-by-frame from every angle. Global positioning satellite data can be gathered during filming so that operators can pinpoint any location. The 11 video streams use more than 100 million pixels per second. After filming, the system displays synchronized, high-resolution video streams, capturing a full motion spherical world complete with directional sound. It can be viewed on a computer monitor, video screen, or head-mounted display. Pembina Pipeline Corporation recently used the Telemmersion system to plot a proposed pipeline route between Alberta's Athabasca region and Edmonton. It was estimated that more than $50,000 was saved by using the camera. The resulting video has been viewed by Pembina's engineering, environmental and geotechnical groups who were able to accurately note the route's river crossings. The cameras were also used to estimate timber salvage. Footage was then given to the operations group, to help staff familiarize themselves with the terrain, the proposed route's right-of-way, and the number of water crossings and access points. Oil and gas operators have also used the equipment on a recently acquired block of land to select well sites. 4 figs.

  14. Video-rate or high-precision: a flexible range imaging camera

    Science.gov (United States)

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.

    2008-02-01

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high resolution (512-by-512 pixels) and high precision (0.4 mm best case) configuration, but with a slow measurement rate (one every 10 s). Although this high precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provide better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
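
    The heterodyne measurement reduces to estimating the phase of a sampled beat signal, which then maps linearly to distance. A minimal sketch, assuming N evenly spaced samples per beat cycle and a hypothetical modulation frequency f_mod (neither value is taken from the paper):

    ```python
    import numpy as np

    C = 299792458.0  # speed of light, m/s

    def range_from_samples(samples, f_mod):
        """Estimate range from N samples of one beat cycle.

        The fundamental DFT bin recovers the beat phase; using more than
        four samples per cycle is more robust to harmonics than classic
        four-sample homodyne quadrature detection.
        """
        s = np.asarray(samples, dtype=np.float64)
        k = np.arange(len(s))
        bin1 = np.sum(s * np.exp(-2j * np.pi * k / len(s)))
        phase = np.angle(bin1) % (2 * np.pi)
        return C * phase / (4 * np.pi * f_mod)  # half-wavelength ambiguity applies

    print(range_from_samples([2.0, 1.0, 0.0, 1.0], f_mod=30e6))
    ```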

  15. Feature Quantization and Pooling for Videos

    Science.gov (United States)

    2014-05-01

    less vertical motion. The exceptions are videos from the classes of biking (mainly due to the camera tracking fast bikers), jumping on a trampoline ...tracking the bikers; the jumping videos, featuring people on trampolines, the swing videos, which are usually recorded in profile view, and the walking

  16. A generic model for camera based intelligent road crowd control ...

    African Journals Online (AJOL)

    This research proposes a model for intelligent traffic flow control implemented via camera-based surveillance and a feedback system. A series of cameras is set up a minimum of three signals ahead of the target junction. The complete software system is developed to help integrate the multiple cameras on the road as feedback to ...

  17. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    Science.gov (United States)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, taken also with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images of both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between
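
    The per-frame pose estimation step described above (vehicle camera pose from ground control points in a point cloud) is a standard Perspective-n-Point problem. A minimal sketch with OpenCV, where the control-point arrays and the intrinsic matrix K are hypothetical inputs from a prior calibration, not data from the paper:

    ```python
    import cv2
    import numpy as np

    def estimate_pose(object_pts, image_pts, K):
        """Camera pose from >= 4 ground control points (PnP + RANSAC).

        object_pts: (N, 3) points in the vehicle/point-cloud frame
        image_pts:  (N, 2) corresponding pixel detections
        K:          (3, 3) intrinsic matrix from prior calibration
        """
        dist = np.zeros(5)  # assume images are already undistorted
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            object_pts.astype(np.float64),
            image_pts.astype(np.float64), K, dist)
        if not ok:
            raise RuntimeError("PnP failed")
        R, _ = cv2.Rodrigues(rvec)    # rotation: world -> camera
        center = -R.T @ tvec          # camera position in the world frame
        return R, tvec, center
    ```

    Poses estimated this way for each video frame would then be refined jointly in a bundle adjustment, as the paper describes.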

  18. VIDEO ETHNOGRAPHY: THE EXPERIENCE OF SOCIAL RESEARCH WITH A VIDEO CAMERA IN SOUTH SULAWESI

    Directory of Open Access Journals (Sweden)

    Moh Yasir Alimi

    2013-04-01

    Full Text Available The aim of this research is to explore how a video camera can help focus the researcher's mind, sharpen ethnographic instincts, improve relations with the community, and improve the research report. The research was conducted in South Sulawesi over one year using a Sony DCR-TRV27 handycam, equipped with a fold-out screen and a viewfinder usable in daylight. The sophistication of the camera technology met the vibrancy of social life in South Sulawesi, as reflected best in wedding ceremonies, and the video camera systematically occupies an important place in the existing local system of knowledge/power. This article therefore does not only discuss the use of the camera in ethnographic practice, but also discusses the social world that has enabled the camera to have such a central place in it.

  19. Laser scanning camera inspects hazardous area

    International Nuclear Information System (INIS)

    Fryatt, A.; Miprode, C.

    1985-01-01

    The main operational characteristics of a new laser scanning camera are presented. The camera is intended primarily for low level, high resolution viewing inside nuclear reactors. It uses a He-Ne laser beam raster; by detecting the reflected light by means of a photomultiplier, the subject under observation can be reconstructed in an electronic video store and reviewed on a conventional monitor screen.

  20. What Counts as Educational Video?: Working toward Best Practice Alignment between Video Production Approaches and Outcomes

    Science.gov (United States)

    Winslett, Greg

    2014-01-01

    The twenty years since the first digital video camera was made commercially available have seen significant increases in the use of low-cost, amateur video productions for teaching and learning. In the same period, production and consumption of professionally produced video have also increased, as have the distribution platforms to access it.…

  1. Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration

    Science.gov (United States)

    Ajiboye, Sola O.; Birch, Philip; Chatwin, Christopher; Young, Rupert

    2015-03-01

    There is increasing reliance on video surveillance systems for systematic derivation, analysis and interpretation of the data needed for predicting, planning, evaluating and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013, the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also reveal that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems "expose" relevant video-generated metadata events, such as triggered alerts, and also permit query of a metadata repository. Subsequently, a police officer, for example, with an appropriate level of system permission can query unified video systems across a large geographical area such as a city or a country to predict the location of an interesting entity, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises a hardware framework that is supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, which is integration-enabled to communicate with other compatible systems in the Internet of Things (IoT).
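
    The cross-system query scenario sketched above (an authorized officer querying exposed metadata events rather than raw video) can be illustrated with a toy event repository. The schema and values below are entirely hypothetical, not the FVSA design:

    ```python
    import sqlite3

    # Toy metadata repository; a real FVSA-style deployment would federate
    # such queries across privately owned systems, with permission checks.
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE events
                  (camera_id TEXT, ts REAL, kind TEXT, lat REAL, lon REAL)""")
    db.execute("INSERT INTO events VALUES ('cam-17', 1700000000.0, 'vehicle', 51.50, -0.12)")

    def sightings(kind, t0, t1):
        """Return sightings of an entity class within a time window."""
        return db.execute(
            "SELECT camera_id, ts, lat, lon FROM events "
            "WHERE kind = ? AND ts BETWEEN ? AND ? ORDER BY ts",
            (kind, t0, t1)).fetchall()

    print(sightings('vehicle', 1699999000.0, 1700001000.0))
    ```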

  2. VAP/VAT: video analytics platform and test bed for testing and deploying video analytics

    Science.gov (United States)

    Gorodnichy, Dmitry O.; Dubrofsky, Elan

    2010-04-01

    Deploying Video Analytics in operational environments is extremely challenging. This paper presents a methodological approach developed by the Video Surveillance and Biometrics Section (VSB) of the Science and Engineering Directorate (S&E) of the Canada Border Services Agency (CBSA) to resolve these problems. A three-phase approach to enable VA deployment within an operational agency is presented and the Video Analytics Platform and Testbed (VAP/VAT) developed by the VSB section is introduced. In addition to allowing the integration of third party and in-house built VA codes into an existing video surveillance infrastructure, VAP/VAT also allows the agency to conduct an unbiased performance evaluation of the cameras and VA software available on the market. VAP/VAT consists of two components: EventCapture, which serves to automatically detect a "Visual Event", and EventBrowser, which serves to display and peruse the "Visual Details" captured at the "Visual Event". To deal with both open-architecture and closed-architecture cameras, two video-feed capture mechanisms have been developed within the EventCapture component: IPCamCapture and ScreenCapture.

  3. Deep-Sky Video Astronomy

    CERN Document Server

    Massey, Steve

    2009-01-01

    A guide to using modern integrating video cameras for deep-sky viewing and imaging with the kinds of modest telescopes available commercially to amateur astronomers. It includes an introduction and a brief history of the technology and camera types. It examines the pros and cons of this unrefrigerated yet highly efficient technology

  4. Preliminary condensation pool experiments with steam using DN80 and DN100 blowdown pipes

    Energy Technology Data Exchange (ETDEWEB)

    Laine, J.; Puustinen, M. [Lappeenranta University of Technology (Finland)

    2004-03-01

    The report summarizes the results of the preliminary steam blowdown experiments. Altogether eight experiment series, each consisting of several steam blows, were carried out in autumn 2003 with a scaled-down condensation pool test rig designed and constructed at Lappeenranta University of Technology. The main purpose of the experiments was to evaluate the capabilities of the test rig and the needs for measurement and visualization devices. The experiments showed that a high-speed video camera is essential for visual observation due to the rapid condensation of steam bubbles. Furthermore, the maximum measurement frequency of the current combination of instrumentation and data acquisition system is inadequate for the actual steam tests in 2004. (au)

  5. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W-7X stellarator, which consist of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented for continuous and for triggered readout. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12 bit sampled 1280 x 1024 pixels at 444 fps, which means 1.43 terabytes over half an hour. Analysing such a huge amount of data is time consuming and has a high computational complexity. We plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates all the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three different modules with two interfaces: a sensor module (SM) with reduced hardware and functional elements, to achieve a small and compact size and robust operation in a harsh environment; an image processing and control unit (IPCU) module, which handles all user-predefined events and runs image processing algorithms to generate trigger signals; and finally a 10 Gigabit Ethernet compatible image readout card, which functions as the network interface for the PC. In this contribution the concepts of EDICAM and the functions of the distinct modules are described.
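
    The hardware-level triggers that EDICAM derives from real-time image processing can be mimicked in software with a simple frame-differencing criterion. This is an illustrative stand-in; the actual DSP/FPGA algorithms are not specified in the abstract:

    ```python
    import numpy as np

    def make_trigger(threshold=50, min_pixels=200):
        """Return a stateful detector that fires on sudden frame change."""
        prev = None
        def process(frame):
            nonlocal prev
            fire = False
            cur = frame.astype(np.int32)
            if prev is not None:
                changed = np.abs(cur - prev) > threshold
                fire = int(changed.sum()) >= min_pixels  # enough pixels changed
            prev = cur
            return fire
        return process

    trigger = make_trigger()
    # for frame in acquire_frames():        # hypothetical acquisition loop
    #     if trigger(frame):
    #         raise_hardware_trigger()      # hypothetical trigger output
    ```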

  6. Standardized access, display, and retrieval of medical video

    Science.gov (United States)

    Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.

    1999-05-01

    The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation of a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open surgery camera, synthetic video, and monoscopic endoscopes. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video-cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery. Therefore DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.

  7. Mobile Video in Everyday Social Interactions

    Science.gov (United States)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. The internet connectivity of mobile phones enables fluent sharing of captured material, even in real time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera on the social environment in everyday life situations, mainly based on a study where four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours relating to real-time mobile video communication, and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their own special characteristics: live video is used as a virtual window between places, whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between participating persons in all roles, largely due to the widely spreading possibilities of videos. Video in a social situation affects the cameramen (who record), the targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations), but also the other way around: the participants affect the video through their varying and evolving personal and communicational motivations for recording.

  8. Physics and Video Analysis

    Science.gov (United States)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.
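
    As a concrete taste of the technique, positions read off successive video frames can be converted into velocity estimates by finite differences. The frame rate and tracking data below are made up for illustration:

    ```python
    # Estimate velocity from per-frame positions, as in classroom video analysis.
    FPS = 30.0                  # assumed camera frame rate
    dt = 1.0 / FPS

    # Hypothetical (x, y) positions of a tossed ball, already scaled to meters.
    positions = [(0.00, 1.00), (0.10, 1.28), (0.20, 1.50), (0.30, 1.66)]

    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
        print(f"vx = {vx:.2f} m/s, vy = {vy:.2f} m/s")
    ```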

  9. Soft x-ray camera for internal shape and current density measurements on a noncircular tokamak

    International Nuclear Information System (INIS)

    Fonck, R.J.; Jaehnig, K.P.; Powell, E.T.; Reusch, M.; Roney, P.; Simon, M.P.

    1988-05-01

    Soft x-ray measurements of the internal plasma flux surface shape in principle allow a determination of the plasma current density distribution, and provide a necessary monitor of the degree of internal elongation of tokamak plasmas with a noncircular cross section. A two-dimensional, tangentially viewing, soft x-ray pinhole camera has been fabricated to provide internal shape measurements on the PBX-M tokamak. It consists of a scintillator at the focal plane of a foil-filtered pinhole camera, which is, in turn, fiber-optically coupled to an intensified framing video camera (Δt ≥ 3 msec). Automated data acquisition is performed on a stand-alone image-processing system, and data archiving and retrieval take place on an optical disk video recorder. The entire diagnostic is controlled via a PDP-11/73 microcomputer. The derivation of the poloidal emission distribution from the measured image is done by fitting to model profiles. 10 refs., 4 figs

  10. Feasibility Study of Utilization of Action Camera, GoPro Hero 4, Google Glass, and Panasonic HX-A100 in Spine Surgery.

    Science.gov (United States)

    Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong

    2017-02-15

    A study of the feasibility of commercially available action cameras for recording video of spine surgery. Recent innovations in wearable action cameras with high-definition video recording enable surgeons to use a camera in the operation with ease and without high costs. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and the GoPro; this study is the first report for spine surgery. Three commercially available cameras were tested: the GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery was selected for video recording: posterior lumbar laminectomy and fusion. The three cameras were used by one surgeon and video was recorded throughout the operation. The comparison was made from the perspectives of human factors, specifications, and video quality. The most convenient and lightweight device to wear and hold throughout the long operation time was Google Glass. Regarding image quality, all devices except Google Glass supported an HD format, the GoPro offers unique 2.7K or 4K resolution, and video resolution was best on the GoPro. Regarding field of view, the GoPro can adjust its point of interest and field of view according to the surgery; the narrow FOV option was best for recording video clips to share. Google Glass has potential through its application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming for an audience, but only Google Glass has a two-way communication feature in the device. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in the field of spinal surgery, and to broadcast a surgery, as the devices and their applied programs develop in the future. N/A.

  11. Range-Measuring Video Sensors

    Science.gov (United States)

    Howard, Richard T.; Briscoe, Jeri M.; Corder, Eric L.; Broderick, David

    2006-01-01

    Optoelectronic sensors of a proposed type would perform the functions of both electronic cameras and triangulation-type laser range finders. That is to say, these sensors would both (1) generate ordinary video or snapshot digital images and (2) measure the distances to selected spots in the images. These sensors would be well suited for use on robots that are required to measure distances to targets in their work spaces. In addition, these sensors could be used for all the purposes for which electronic cameras have been used heretofore. The simplest sensor of this type, illustrated schematically in the upper part of the figure, would include a laser, an electronic camera (either video or snapshot), a frame-grabber/image-capturing circuit, an image-data-storage memory circuit, and an image-data processor. There would be no moving parts. The laser would be positioned at a lateral distance d to one side of the camera and would be aimed parallel to the optical axis of the camera. When the range of a target in the field of view of the camera was required, the laser would be turned on and an image of the target would be stored and preprocessed to locate the angle (a) between the optical axis and the line of sight to the centroid of the laser spot.
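
    The triangulation geometry described here reduces to a single relation: with the laser offset a lateral distance d from the camera and aimed parallel to the optical axis, the range is r = d / tan(a), where the angle a follows from the pixel displacement of the laser spot on the sensor. A small sketch under assumed optics parameters:

    ```python
    import math

    # Assumed optics; real values would come from camera calibration.
    FOCAL_MM = 8.0          # lens focal length
    PIXEL_PITCH_MM = 0.005  # sensor pixel size
    BASELINE_M = 0.10       # lateral laser-to-camera offset d

    def range_from_spot(spot_px, axis_px):
        """Range to the laser spot from its image-plane displacement.

        spot_px: column of the detected spot centroid
        axis_px: column where the optical axis crosses the sensor
        """
        offset_mm = (spot_px - axis_px) * PIXEL_PITCH_MM
        a = math.atan2(offset_mm, FOCAL_MM)  # angle off the optical axis
        return BASELINE_M / math.tan(a)      # r = d / tan(a)

    print(f"{range_from_spot(720.0, 640.0):.2f} m")  # -> 2.00 m
    ```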

  12. Using a laser scanning camera for reactor inspection

    International Nuclear Information System (INIS)

    Armour, I.A.; Adrain, R.S.; Klewe, R.C.

    1984-01-01

    Inspection of nuclear reactors is normally carried out using TV or film cameras. There are, however, several areas where these cameras show considerable shortcomings. To overcome these difficulties, laser scanning cameras have been developed. This type of camera can be used for general visual inspection as well as the provision of high resolution video images with high ratio on and off-axis zoom capability. In this paper, we outline the construction and operation of a laser scanning camera and give examples of how it has been used in various power stations, and indicate future potential developments. (author)

  13. Noise aliasing in interline-video-based fluoroscopy systems

    International Nuclear Information System (INIS)

    Lai, H.; Cunningham, I.A.

    2002-01-01

    Video-based imaging systems for continuous (nonpulsed) x-ray fluoroscopy use a variety of video formats. Conventional video-camera systems may operate in either interlaced or progressive-scan modes, and CCD systems may operate in interline- or frame-transfer modes. A theoretical model of the image noise power spectrum corresponding to these formats is described. It is shown that with respect to frame-transfer or progressive-readout modes, interline or interlaced cameras operating in a frame-integration mode will result in a spectral shift of 25% of the total image noise power from low spatial frequencies to high. In a field-integration mode, noise power is doubled with most of the increase occurring at high spatial frequencies. The differences are due primarily to the effect of noise aliasing. In interline or interlaced formats, alternate lines are obtained with each video field resulting in a vertical sampling frequency for noise that is one half of the physical sampling frequency. The extent of noise aliasing is modified by differences in the statistical correlations between video fields in the different modes. The theoretical model is validated with experiments using an x-ray image intensifier and CCD-camera system. It is shown that different video modes affect the shape of the noise-power spectrum and therefore the detective quantum efficiency. While the effect on observer performance is not addressed, it is concluded that in order to minimize image noise at the critical mid-to-high spatial frequencies for a specified x-ray exposure, fluoroscopic systems should use only frame-transfer (CCD camera) or progressive-scan (conventional video) formats

  14. A Taxonomy of Asynchronous Instructional Video Styles

    Science.gov (United States)

    Chorianopoulos, Konstantinos

    2018-01-01

    Many educational organizations are employing instructional videos in their pedagogy, but there is a limited understanding of the possible video formats. In practice, the presentation format of instructional videos ranges from direct recording of classroom teaching with a stationary camera, or screencasts with voiceover, to highly elaborate video…

  15. Scintillation camera-computer systems: General principles of quality control

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    Scintillation camera-computer systems are designed to allow the collection, digital analysis and display of the image data from a scintillation camera. The components of the computer in such a system are essentially the same as those of a computer used in any other application, i.e. a central processing unit (CPU), memory and magnetic storage. Additional hardware items necessary for nuclear medicine applications are an analogue-to-digital converter (ADC), which converts the analogue signals from the camera to digital numbers, and an image display. It is possible that the transfer of data from camera to computer degrades the information to some extent. The computer can generate the image for display, but it also provides the capability of manipulating the primary data to improve the display of the image. The first function, conversion from analogue to digital mode, is not within the control of the operator, but the second type of manipulation is. These manipulations should be done carefully, without sacrificing the integrity of the incoming information.

  16. Toward standardising gamma camera quality control procedures

    International Nuclear Information System (INIS)

    Alkhorayef, M.A.; Alnaaimi, M.A.; Alduaij, M.A.; Mohamed, M.O.; Ibahim, S.Y.; Alkandari, F.A.; Bradley, D.A.

    2015-01-01

    Attaining high standards of efficiency and reliability in the practice of nuclear medicine requires appropriate quality control (QC) programs. For instance, the regular evaluation and comparison of extrinsic and intrinsic flood-field uniformity enables the quick correction of many gamma camera problems. Whereas QC tests for uniformity are usually performed by exposing the gamma camera crystal to a uniform flux of gamma radiation from a source of known activity, such protocols can vary significantly. Thus, there is a need for optimization and standardization, in part to allow direct comparison between gamma cameras from different vendors. In the present study, intrinsic uniformity was examined as a function of source distance, source activity, source volume and number of counts. The extrinsic uniformity and spatial resolution were also examined. Proper standard QC procedures need to be implemented because of the continual development of nuclear medicine imaging technology and the rapid expansion and increasing complexity of hybrid imaging system data. The present work seeks to promote a set of standard testing procedures to contribute to the delivery of safe and effective nuclear medicine services. - Highlights: • Optimal parameters for quality control of the gamma camera are proposed. • For extrinsic and intrinsic uniformity a minimum of 15,000 counts is recommended. • For intrinsic flood uniformity the activity should not exceed 100 µCi (3.7 MBq). • For intrinsic uniformity the source to detector distance should be at least 60 cm. • The bar phantom measurement must be performed with at least 15 million counts.
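
    Flood-field uniformity, the central quantity in the QC protocols above, is conventionally scored with the integral uniformity figure. A minimal sketch of that computation; in practice the flood image is first smoothed and cropped to the useful field of view, both omitted here:

    ```python
    import numpy as np

    def integral_uniformity(flood):
        """Integral uniformity (%) of a flood-field image.

        flood: 2D array of counts from a flood acquisition, already
        restricted to the field of view being scored.
        """
        mx, mn = float(flood.max()), float(flood.min())
        return 100.0 * (mx - mn) / (mx + mn)

    # Synthetic flood with Poisson counting noise only.
    flood = np.random.poisson(lam=240, size=(64, 64)).astype(float)
    print(f"Integral uniformity: {integral_uniformity(flood):.1f} %")
    ```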

  17. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    Science.gov (United States)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA uses digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, and perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of CPU resources, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, from infrared (IR) and electro-optical (EO) cameras. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
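
    The per-frame registration pipeline named in the abstract (SIFT features, homography estimation, warping, blending) maps directly onto OpenCV primitives. A CPU reference sketch for one frame pair; the paper's contribution is accelerating exactly this work on the GPU:

    ```python
    import cv2
    import numpy as np

    def register_pair(prev, cur):
        """Homography mapping frame `cur` onto frame `prev` (SIFT + RANSAC)."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(prev, None)
        k2, d2 = sift.detectAndCompute(cur, None)
        good = []
        for pair in cv2.BFMatcher().knnMatch(d2, d1, k=2):
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good.append(pair[0])           # Lowe's ratio test
        src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H

    # mosaic = cv2.warpPerspective(cur, H, canvas_size)  # then blend frames
    ```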

  18. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  19. Accurate estimation of camera shot noise in the real-time

    Science.gov (United States)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and various other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise includes the random component, while spatial noise includes the pattern component. Temporal noise can further be divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used methods are standards (for example, EMVA Standard 1288), which allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measurement of the temporal noise of photo- and video cameras based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noise of cameras in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12 bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12 bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10 bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8 bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time for registering and processing the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed within a fraction of a second to several seconds only. Also the
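
    The two-frame principle behind this kind of measurement can be stated compactly: subtracting two frames of the same scene cancels the fixed-pattern component, and the variance of the difference is twice the temporal noise variance. A minimal sketch of that idea; this is not the authors' ASNT algorithm:

    ```python
    import numpy as np

    def temporal_noise(f1, f2):
        """Temporal noise sigma from two frames of a static scene."""
        d = f1.astype(np.float64) - f2.astype(np.float64)
        return np.sqrt(np.var(d) / 2.0)  # differencing doubles the variance

    def noise_vs_signal(f1, f2, bins=32):
        """Noise variance per signal level; linear growth indicates shot noise."""
        mean = 0.5 * (f1.astype(np.float64) + f2.astype(np.float64))
        d = f1.astype(np.float64) - f2.astype(np.float64)
        edges = np.linspace(mean.min(), mean.max(), bins + 1)
        idx = np.digitize(mean.ravel(), edges)
        return [(edges[i - 1], np.var(d.ravel()[idx == i]) / 2.0)
                for i in range(1, bins + 1) if np.any(idx == i)]
    ```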

  20. Surgeon-Manipulated Live Surgery Video Recording Apparatuses: Personal Experience and Review of Literature.

    Science.gov (United States)

    Kapi, Emin

    2017-06-01

    Visual recording of surgical procedures is a method that is used quite frequently in plastic surgery practice. While presentations containing photographs are quite common in education seminars and congresses, video-containing presentations find more favour. For this reason, the presentation of surgical procedures in the form of real-time video display has increased, especially recently. Appropriate technical equipment for video recording is not available in most hospitals, so there is a need to set up external apparatus in the operating room. Such apparatuses include head-mounted video cameras, chest-mounted cameras, and tripod-mountable cameras. The head-mounted video camera is an apparatus that is capable of capturing high-resolution and detailed close-up footage. The tripod-mountable camera enables video capture from a fixed point. Certain user-specific modifications can be made to overcome some of these restrictions; among these modifications, custom-made applications are one of the most effective solutions. This article presents the features of, and experiences with, a combination of a head- or chest-mounted action camera, a custom-made portable tripod apparatus with versatile features, and an underwater camera. The apparatuses described are easy to assemble, quickly installed, and inexpensive, do not require specific technical knowledge, and can be manipulated by the surgeon personally in all procedures. The author believes that video recording apparatuses will be integrated more into the operating room, become standard practice, and become more amenable to self-manipulation by the surgeon in the near future.

  1. Intelligent keyframe extraction for video printing

    Science.gov (United States)

    Zhang, Tong

    2004-10-01

    Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
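
    The candidate-generation stage described above can be approximated with a single feature: the color-histogram difference between consecutive frames, keeping the frames where the change is largest. A toy sketch of that first stage only (the paper combines several audio-visual features and then clusters the candidates):

    ```python
    import cv2
    import numpy as np

    def candidate_keyframes(path, top_k=5):
        """Indices of frames where the color histogram changes most."""
        cap = cv2.VideoCapture(path)
        prev_hist, scores, i = None, [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hist = cv2.calcHist([frame], [0, 1, 2], None,
                                [8, 8, 8], [0, 256] * 3).ravel()
            hist /= hist.sum() + 1e-9
            if prev_hist is not None:
                scores.append((np.abs(hist - prev_hist).sum(), i))
            prev_hist, i = hist, i + 1
        cap.release()
        return sorted(idx for _, idx in sorted(scores, reverse=True)[:top_k])
    ```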

  2. Digital quality control of the camera computer interface

    International Nuclear Information System (INIS)

    Todd-Pokropek, A.

    1983-01-01

    A brief description is given of how the gamma camera-computer interface works and what kind of errors can occur. Quality control tests of the interface are then described which include 1) tests of static performance e.g. uniformity, linearity, 2) tests of dynamic performance e.g. basic timing, interface count-rate, system count-rate, 3) tests of special functions e.g. gated acquisition, 4) tests of the gamma camera head, and 5) tests of the computer software. The tests described are mainly acceptance and routine tests. Many of the tests discussed are those recommended by an IAEA Advisory Group for inclusion in the IAEA control schedules for nuclear medicine instrumentation. (U.K.)

  3. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    Science.gov (United States)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed a thermal image at first was presented to the observer in the eye piece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market output standards changed to digital formats a decade ago with digital video streaming being nowadays state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged over such a long time are: the very conservative view of the military community, long planning and turn-around times of programs and a slower growth of pixel number of TIs in comparison to consumer cameras. With megapixel detectors the CCIR output format is not sufficient any longer. The paper discusses the state-of-the-art compression and streaming solutions for TIs.

  4. Effectance and control as determinants of video game enjoyment.

    Science.gov (United States)

    Klimmt, Christoph; Hartmann, Tilo; Frey, Andreas

    2007-12-01

    This article explores video game enjoyment originated by games' key characteristic, interactivity. An online experiment (N=500) tested experiences of effectance (perceived influence on the game world) and of being in control as mechanisms that link interactivity to enjoyment. A video game was manipulated to either allow normal play, reduce perceived effectance, or reduce perceived control. Enjoyment ratings suggest that effectance is an important factor in video game enjoyment but that the relationship between control of the game situation and enjoyment is more complex.

  5. Optimization of thermal waste treatment with the INSPECT system by camera based characteristic values and Fuzzy Control; Optimierung der thermischen Abfallbehandlung mit dem INSPECT-System durch kamerabasierte Kenngroessen und Fuzzy Control

    Energy Technology Data Exchange (ETDEWEB)

    Keller, H.B.; Matthes, J. [Forschungszentrum Karlsruhe GmbH, Eggenstein-Leopoldshafen (Germany). Inst. fuer Angewandte Informatik; Schoenecker, H.; Krakau, T. [ci-Tec GmbH, Karlsruhe (Germany)

    2007-07-01

    Applying modern control and regulation methods to industrial processes requires knowledge of important process variables that cannot be measured on-line. Optical measurements by means of video cameras and infrared cameras enable non-destructive observation of the combustion process as it develops at different locations. Process variables can thus be determined in real time by means of special image-processing software. INSPECT applications have been developed for grate firings and rotary kilns. The characteristic values computed in this way can be used directly by a fuzzy control module, or by external systems, to optimize the firing control. The enormous optimization potential of the INSPECT system is impressively confirmed by past installations and the process improvements obtained with them. The INSPECT system was developed in co-operation with Ci-Tec GmbH (Ratingen, Federal Republic of Germany) and is marketed, maintained and further developed by Ci-Tec GmbH.
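
    As an illustration of how a camera-derived characteristic value can feed a fuzzy control module, the sketch below maps one hypothetical input, the burnout-zone position along the grate (normalized 0 to 1), to an air-supply correction using triangular memberships. The rule base is invented; the actual INSPECT rules are not public here:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function with corners a, b, c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def air_correction(burnout_pos):
        """Rules: burnout too early -> less air; too late -> more air."""
        early = tri(burnout_pos, 0.0, 0.2, 0.5)
        ok    = tri(burnout_pos, 0.3, 0.5, 0.7)
        late  = tri(burnout_pos, 0.5, 0.8, 1.0)
        w = early + ok + late
        # Weighted-average (Sugeno-style) defuzzification of rule outputs.
        return (early * -0.2 + ok * 0.0 + late * 0.2) / w if w else 0.0

    print(air_correction(0.65))  # mildly late burnout -> small air increase
    ```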

  6. C.C.D. readout of a picosecond streak camera with an intensified C.C.D

    International Nuclear Information System (INIS)

    Lemonier, M.; Richard, J.C.; Cavailler, C.; Mens, A.; Raze, G.

    1984-08-01

    This paper deals with a digital streak camera readout device. The device consists of a low light level television camera, made of a solid-state CCD array coupled to an image intensifier, associated with a video digitizer coupled to a microcomputer system. The streak camera images are picked up as a video signal, digitized and stored. This system allows the fast recording and automatic processing of the data provided by the streak tube.

  7. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage, specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel value calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  8. Intelligent viewing control for robotic and automation systems

    Science.gov (United States)

    Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.

    1994-10-01

    We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide the capability for knowledge-based, "hands-off" viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as "Intelligent Viewing Control" (IVC), distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned ("choreographed") viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated, video-graphic, single-screen user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.

  9. Ultrahigh-speed, high-sensitivity color camera with 300,000-pixel single CCD

    Science.gov (United States)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Ohtake, H.; Kurita, T.; Tanioka, K.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Etoh, T. G.

    2007-01-01

    We have developed an ultrahigh-speed, high-sensitivity portable color camera with a new 300,000-pixel single CCD. The 300,000-pixel CCD, which has four times the number of pixels of our initial model, was developed by seamlessly joining two 150,000-pixel CCDs. A green-red-green-blue (GRGB) Bayer filter is used to realize a color camera with the single-chip CCD. The camera is capable of ultrahigh-speed video recording at up to 1,000,000 frames/sec and is small enough to be handheld. We also developed a technology for dividing the CCD output signal to enable parallel, high-speed readout and recording in external memory; this makes possible long, continuous shots at up to 1,000 frames/second. In an experiment, video footage was captured at an athletics meet. Because of the high-speed shooting, even the detailed movements of athletes' muscles were captured. This camera can capture clear slow-motion videos, enabling previously impossible live footage to be imaged for various TV broadcasting programs.

  10. Geometric database maintenance using CCTV cameras and overlay graphics

    Science.gov (United States)

    Oxenberg, Sheldon C.; Landell, B. Patrick; Kan, Edwin

    1988-01-01

    An interactive graphics system using closed circuit television (CCTV) cameras for remote verification and maintenance of a geometric world model database has been demonstrated in GE's telerobotics testbed. The database provides geometric models and locations of objects viewed by CCTV cameras and manipulated by telerobots. To update the database, an operator uses the interactive graphics system to superimpose a wireframe line drawing of an object with known dimensions on a live video scene containing that object. The methodology used is multipoint positioning to easily superimpose a wireframe graphic on the CCTV image of an object in the work scene. An enhanced version of GE's interactive graphics system will provide the object designation function for the operator control station of the Jet Propulsion Laboratory's telerobot demonstration system.

  11. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    Science.gov (United States)

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
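
    A time-lapse loop in the spirit of the custom Python scripts mentioned above is straightforward to sketch. This assumes the legacy picamera library and GPS-disciplined system time; the path, resolution and interval are illustrative, and the actual USGS scripts are not reproduced here:

    ```python
    import time
    from datetime import datetime, timezone

    from picamera import PiCamera   # legacy Raspberry Pi camera library

    camera = PiCamera(resolution=(2592, 1944))  # 5 MP still resolution
    INTERVAL_S = 300                            # one frame every 5 minutes

    time.sleep(2)  # let exposure and gain settle before the first capture
    try:
        while True:
            stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%SZ")
            camera.capture(f"/data/images/{stamp}.jpg")  # timestamped filename
            time.sleep(INTERVAL_S)
    finally:
        camera.close()
    ```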

  12. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks is getting essential for video surveillance. The tasks of tracking human over camera networks are not only inherently challenging due to changing human appearance, but also have enormous potentials for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for the human tracking over camera networks are addressed, including human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on the analyses of the current progress made toward human tracking techniques over camera networks.

  13. A novel super-resolution camera model

    Science.gov (United States)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as a piezoelectric ceramic actuator is placed inside the camera. By controlling the driving device, a set of continuous low-resolution (LR) images can be obtained and stored instantaneously, reflecting the randomness of the displacements and the real-time performance of the storage. The low-resolution image sequences carry different redundant information and particular prior information, making it possible to restore a super-resolution image faithfully and effectively. A sampling argument is used to derive the reconstruction principle of super-resolution, which establishes in theory the achievable degree of resolution improvement. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements; this models the unknown high-resolution image, motion parameters, and unknown model parameters in one hierarchical Bayesian framework. Using sub-pixel registration, a super-resolution image of the scene can be reconstructed. Results of reconstruction from 16 images show that this camera model can double the image resolution, obtaining higher-resolution images at currently available hardware levels.
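
    The record does not include code; as a minimal sketch of the underlying idea, the following shift-and-add reconstruction fuses LR frames with known sub-pixel displacements onto a finer grid. This is a classical baseline, not the paper's learning-based or variational Bayesian algorithms, and the shifts are assumed to come from a prior registration step.

```python
import numpy as np

def shift_and_add_sr(lr_frames, shifts, scale=2):
    """Classical shift-and-add super-resolution.

    lr_frames : list of 2-D arrays (low-resolution frames)
    shifts    : list of (dy, dx) sub-pixel shifts in LR pixel units,
                assumed known from registration
    scale     : integer upscaling factor
    """
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # Map each LR pixel onto the nearest HR grid position.
        ys = (np.arange(h)[:, None] + dy) * scale
        xs = (np.arange(w)[None, :] + dx) * scale
        yi = np.clip(np.round(ys).astype(int), 0, h * scale - 1)
        xi = np.clip(np.round(xs).astype(int), 0, w * scale - 1)
        yi = np.broadcast_to(yi, (h, w))
        xi = np.broadcast_to(xi, (h, w))
        np.add.at(acc, (yi, xi), frame)      # accumulate observations
        np.add.at(weight, (yi, xi), 1.0)     # count hits per HR cell
    weight[weight == 0] = 1.0                # avoid division by zero
    return acc / weight
```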

  14. Adaptive Neural-Sliding Mode Control of Active Suspension System for Camera Stabilization

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-01-01

    Full Text Available The camera always suffers from image instability on a moving vehicle due to unintentional vibrations caused by road roughness. This paper presents a novel adaptive neural network-based sliding mode control strategy to stabilize the image-captured area of the camera. The purpose is to suppress vertical displacement of the sprung mass with the application of an active suspension system. Since the active suspension system has nonlinear and time-varying characteristics, an adaptive neural network (ANN) is proposed to make the controller robust against systematic uncertainties, which relaxes the model-based requirement of sliding mode control; the weighting matrix is adjusted online according to a Lyapunov function. The control system consists of two loops. The outer loop is a position controller designed with the sliding mode strategy, while the PID controller in the inner loop tracks the desired force. Closed-loop stability and asymptotic convergence performance can be guaranteed on the basis of Lyapunov stability theory. Finally, the simulation results show that the employed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.

  15. Technical assessment of Navitar Zoom 6000 optic and Sony HDC-X310 camera for MEMS presentations and training.

    Energy Technology Data Exchange (ETDEWEB)

    Diegert, Carl F.

    2006-02-01

    This report evaluates a newly-available, high-definition, video camera coupled with a zoom optical system for microscopic imaging of micro-electro-mechanical systems. We did this work to support configuration of three document-camera-like stations as part of an installation in a new Microsystems building at Sandia National Laboratories. The video display walls to be installed as part of these three presentation and training stations are of extraordinary resolution and quality. The new availability of a reasonably-priced, cinema-quality, high-definition video camera offers the prospect of filling these displays with full-motion imaging of Sandia's microscopic products at a quality substantially beyond the quality of typical video microscopes. Simple and robust operation of the microscope stations will allow the extraordinary-quality imaging to contribute to Sandia's day-to-day research and training operations. This report illustrates the disappointing image quality from a camera/lens system comprised of a Sony HDC-X310 high-definition video camera coupled to a Navitar Zoom 6000 lens. We determined that this Sony camera is capable of substantially more image quality than the Navitar optic can deliver. We identified an optical doubler lens from Navitar as the component of their optical system that accounts for a substantial part of the image quality problem. While work continues to incrementally improve performance of the Navitar system, we are also evaluating optical systems from other vendors to couple to this Sony camera.

  16. Video quality of 3G videophones for telephone cardiopulmonary resuscitation.

    Science.gov (United States)

    Tränkler, Uwe; Hagen, Oddvar; Horsch, Alexander

    2008-01-01

    We simulated a cardiopulmonary resuscitation (CPR) scene with a manikin and used two 3G videophones on the caller's side to transmit video to a laptop PC. Five observers (two doctors with experience in emergency medicine and three paramedics) evaluated the video. They judged whether the manikin was breathing and whether they would give advice for CPR; they also graded the confidence of their decision-making. Breathing was only visible from certain orientations of the videophones, at distances below 150 cm with good illumination and a still background. Since the phones produced a degradation in colours and shadows, detection of breathing mainly depended on moving contours. Low camera positioning produced better results than having the camera high up. Darkness, shaking of the camera and a moving background made detection of breathing almost impossible. The video from the two 3G videophones that were tested was of sufficient quality for telephone CPR provided that camera orientation, distance, illumination and background were carefully chosen. Thus it seems possible to use 3G videophones for emergency calls involving CPR. However, further studies on the required video quality in different scenarios are necessary.

  17. Low-cost teleoperator-controlled vehicle for damage assessment and radiation dose measurement

    International Nuclear Information System (INIS)

    Tyree, W.H.

    1991-01-01

    A low-cost, disposable, radio-controlled, remote-reading, ionizing-radiation and surveillance teleoperator re-entry vehicle has been built. The vehicle carries equipment, measures radiation levels, and evaluates building conditions. The basic vehicle, radio control with amplifiers, telemetry, elevator, and video camera with monitor cost less than $2500. Velcro-mounted alpha, beta-gamma, and neutron sensing equipment is used in the present system. Many types of health physics radiation measuring equipment may be substituted on the vehicle. The system includes a black-and-white video camera to observe the environment surrounding the vehicle. The camera is mounted on a vertical elevator extendible to 11 feet above the floor. The present vehicle uses a video camera with an umbilical cord between the vehicle and the operators; preferred operation would eliminate the umbilical. Video monitoring equipment is part of the operator control system. Power for the vehicle equipment is carried on board and supplied by sealed lead-acid batteries. Radios are powered by 9-V alkaline batteries. The radio control receiver, servo drivers, high-power amplifier, and 49-MHz FM transceivers were irradiated at moderate rates with neutron and gamma doses to 3000 Rem and 300 Rem, respectively, to ensure system operation.

  18. Robust Visual Control of Parallel Robots under Uncertain Camera Orientation

    Directory of Open Access Journals (Sweden)

    Miguel A. Trujano

    2012-10-01

    Full Text Available This work presents a stability analysis and experimental assessment of a visual control algorithm applied to a redundant planar parallel robot under uncertainty in relation to camera orientation. The key feature of the analysis is a strict Lyapunov function that allows the conclusion of asymptotic stability without invoking the Barbashin-Krassovsky-LaSalle invariance theorem. The controller does not rely on velocity measurements and has a structure similar to a classic Proportional Derivative control algorithm. Experiments in a laboratory prototype show that uncertainty in camera orientation does not significantly degrade closed-loop performance.

  19. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as its indexing feature. As applications of video cameras have increased greatly in recent years, face recognition is a perfect fit for searching for targeted individuals within vast amounts of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record subjects without fixed poses, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces with the available information. Experimental results show that the system is highly efficient in processing real-life videos and very robust to various kinds of face occlusion. Hence it can relieve human reviewers from the front of the monitors and greatly enhance efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping to solve real cases.
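
    The essence of the reconstruction step can be illustrated with plain PCA (the paper uses fuzzy PCA, which additionally weights pixels to suppress occlusions). In this hedged sketch, X_train and x_occluded are assumed to be vectorized, aligned face images; the component count is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def reconstruct_face(X_train, x_occluded, n_components=50):
    """Project an occluded face onto a face subspace and back-project.

    X_train    : (n_samples, n_pixels) gallery of vectorized faces
    x_occluded : (n_pixels,) one vectorized face with an occluded region
    """
    pca = PCA(n_components=n_components).fit(X_train)
    coeffs = pca.transform(x_occluded.reshape(1, -1))
    # Back-projection fills the occluded pixels with values consistent
    # with the face subspace learned from the gallery.
    return pca.inverse_transform(coeffs).ravel()
```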

  20. Ground Validation Drop Camera Transect Points - St. Thomas/ St. John USVI - 2011

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Videos were collected between...

  1. Adaptive neural networks control for camera stabilization with active suspension system

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-08-01

    Full Text Available The camera always suffers from image instability on a moving vehicle due to unintentional vibrations caused by road roughness. This article presents an adaptive neural network approach combined with linear quadratic regulator (LQR) control for a quarter-car active suspension system to stabilize the image-captured area of the camera. An active suspension system provides extra force through the actuator, which allows it to suppress vertical vibration of the sprung mass. First, to deal with the road disturbance and the system uncertainties, a radial basis function (RBF) neural network is proposed to construct the map between the state error and the compensation component, which corrects the optimal state-feedback control law. The weight matrix of the RBF network is adaptively tuned online. Then, closed-loop stability and asymptotic convergence performance are guaranteed by Lyapunov analysis. Finally, the simulation results demonstrate that the proposed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.
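
    A minimal sketch of the compensation idea, assuming an LQR gain K is already available: an RBF network maps the state error to a corrective term that is added to the LQR feedback, and its output weights are tuned online. The centres, width, and learning rate below are illustrative, not the paper's values.

```python
import numpy as np

class RBFCompensator:
    """Toy radial-basis-function compensator: maps the state error to a
    correction added to an LQR feedback law, with a gradient-style
    online weight update."""

    def __init__(self, centers, width=1.0, lr=0.01):
        self.c = np.asarray(centers)   # (n_centers, n_states)
        self.w = np.zeros(len(self.c)) # output weights, tuned online
        self.width = width
        self.lr = lr

    def phi(self, e):
        # Gaussian activations of the state error against each centre.
        d2 = np.sum((self.c - e) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def control(self, e):
        return float(self.w @ self.phi(e))

    def adapt(self, e, tracking_err):
        self.w += self.lr * tracking_err * self.phi(e)

# Usage sketch (K, x, x_ref, s are assumed to exist):
#   u = -K @ x + comp.control(x - x_ref)
#   comp.adapt(x - x_ref, s)
```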

  2. Development of camera technology for monitoring nests. Chapter 15

    Science.gov (United States)

    W. Andrew Cox; M. Shane Pruett; Thomas J. Benson; Scott J. Chiavacci; Frank R., III Thompson

    2012-01-01

    Photo and video technology has become increasingly useful in the study of avian nesting ecology. However, researchers interested in using camera systems are often faced with insufficient information on the types and relative advantages of available technologies. We reviewed the literature for studies of nests that used cameras and summarized them based on study...

  3. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Directory of Open Access Journals (Sweden)

    Chang Yao-Jen

    2002-01-01

    Full Text Available This paper presents an H.323 standard-compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.

  4. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Science.gov (United States)

    Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting

    2002-12-01

    This paper presents an H.323 standard-compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.

  5. Homography-based multiple-camera person-tracking

    Science.gov (United States)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
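
    A hedged OpenCV sketch of the homography step: given feet-point correspondences accumulated from associated tracks, estimate the ground-plane homography robustly and use it to transfer a feet location between views for consistent labelling. The RANSAC threshold is an assumption, not a value from the paper.

```python
import cv2
import numpy as np

def ground_plane_homography(pts_cam1, pts_cam2):
    """Estimate the plane-induced homography from N x 2 arrays of
    corresponding ground-plane (feet) points 'dropped' by tracked
    targets in two overlapping views (assumed already associated)."""
    H, inliers = cv2.findHomography(
        pts_cam1.astype(np.float32),
        pts_cam2.astype(np.float32),
        method=cv2.RANSAC,            # tolerate a few bad associations
        ransacReprojThreshold=3.0)
    return H, inliers

def map_feet_point(H, pt):
    """Transfer a feet location from camera 1 to camera 2."""
    src = np.array([[pt]], dtype=np.float32)   # shape (1, 1, 2)
    return cv2.perspectiveTransform(src, H)[0, 0]
```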

  6. Digital video recording and archiving in ophthalmic surgery

    Directory of Open Access Journals (Sweden)

    Raju Biju

    2006-01-01

    Full Text Available Currently most ophthalmic operating rooms are equipped with an analog video recording system [an analog charge-coupled device (CCD) camera for video grabbing and a video cassette recorder for recording]. We discuss the various advantages of a digital video capture device, its archiving capabilities, and our experience during the transition from analog to digital video recording and archiving. The basic terminology and concepts related to analog and digital video, along with the choice of hardware, software, and formats for archiving, are discussed.

  7. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century.The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  8. Intelligent video surveillance systems and technology

    CERN Document Server

    Ma, Yunqian

    2009-01-01

    From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated

  9. Iterative Multiview Side Information for Enhanced Reconstruction in Distributed Video Coding

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available Distributed video coding (DVC) is a new paradigm for video compression based on the information-theoretic results of Slepian and Wolf (SW) and Wyner and Ziv (WZ). DVC entails low-complexity encoders as well as separate encoding of correlated video sources. This is particularly attractive for multiview camera systems in video surveillance and camera sensor network applications, where low complexity is required at the encoder. In addition, the separate encoding of the sources implies no communication between the cameras in a practical scenario. This is an advantage since communication is time and power consuming and requires complex networking. In this work, different intercamera estimation techniques for side information (SI) generation are explored and compared in terms of estimation quality, complexity, and rate-distortion (RD) performance. Further, a technique called iterative multiview side information (IMSI) is introduced, where the final SI is used in an iterative reconstruction process. The simulation results show that IMSI significantly improves the RD performance for video with significant motion and activity. Furthermore, DVC outperforms AVC/H.264 Intra for video with average and low motion, but it is still inferior to the Inter No Motion and Inter Motion modes.

  10. Advanced digital video surveillance for safeguard and physical protection

    International Nuclear Information System (INIS)

    Kumar, R.

    2002-01-01

    Full text: Video surveillance is a very crucial component in safeguard and physical protection. Digital technology has revolutionized the surveillance scenario and brought in various new capabilities like better image quality, faster search and retrieval of video images, less storage space for recording, efficient transmission and storage of video, better protection of recorded video images, and easy remote access to live and recorded video. The basic safeguard requirement for verifiably uninterrupted surveillance has remained largely unchanged since its inception. However, changes to the inspection paradigm to admit automated review and remote monitoring have dramatically increased the demands on safeguard surveillance systems. Today's safeguard systems can incorporate intelligent motion detection with a very low rate of false alarms and smaller archiving volume, embedded image processing capability for object behavior and event-based indexing, object recognition, efficient querying and report generation, etc. They also demand cryptographically authenticated, encrypted, and highly compressed video data for efficient, secure, tamper-indicating transmission. In physical protection, intelligent and robust video motion detection, real-time moving object detection and tracking from stationary and moving camera platforms, multi-camera cooperative tracking, activity detection and recognition, human motion analysis, etc. are going to play a key role in perimeter security. Incorporation of video imagery exploitation tools like automatic number plate recognition, vehicle identification and classification, vehicle undercarriage inspection, face recognition, iris recognition and other biometric tools, gesture recognition, etc. makes personnel and vehicle access control robust and foolproof. Innovative digital image enhancement techniques coupled with novel sensor designs make low-cost, omni-directional-vision-capable, all-weather, day-night surveillance a reality.

  11. Video Analytics for Business Intelligence

    CERN Document Server

    Porikli, Fatih; Xiang, Tao; Gong, Shaogang

    2012-01-01

    Closed Circuit TeleVision (CCTV) cameras have been increasingly deployed pervasively in public spaces including retail centres and shopping malls. Intelligent video analytics aims to automatically analyze content of massive amount of public space video data and has been one of the most active areas of computer vision research in the last two decades. Current focus of video analytics research has been largely on detecting alarm events and abnormal behaviours for public safety and security applications. However, increasingly CCTV installations have also been exploited for gathering and analyzing business intelligence information, in order to enhance marketing and operational efficiency. For example, in retail environments, surveillance cameras can be utilised to collect statistical information about shopping behaviour and preference for marketing (e.g., how many people entered a shop; how many females/males or which age groups of people showed interests to a particular product; how long did they stay in the sho...

  12. Video Conference System that Keeps Mutual Eye Contact Among Participants

    Directory of Open Access Journals (Sweden)

    Masahiko Yahagi

    2011-10-01

    Full Text Available A novel video conference system is developed. Supposing that three people A, B, and C attend the video conference, the proposed system enables eye contact between every pair. Furthermore, when B and C chat, A feels as if B and C were facing each other (eye contact appears to be maintained between B and C). In the case of a triangle video conference, each video system is composed of a half mirror, two video cameras, and two monitors. Each participant watches the other participants' images reflected by the half mirror. Cameras are set behind the half mirror. Since each participant's image (face) and the camera position are adjusted to the same direction, eye contact is maintained and conversation becomes very natural compared with conventional video conference systems, where participants' eyes do not point at the other participant. When three participants sit at the vertices of an equilateral triangle, eye contact can be kept even in the situation mentioned above (eye contact between B and C from the viewpoint of A). Eye contact can be kept not only for two or three participants but for any number of participants, as long as they sit at the vertices of a regular polygon.

  13. New system for linear accelerator radiosurgery with a gantry-mounted video camera

    International Nuclear Information System (INIS)

    Kunieda, Etsuo; Kitamura, Masayuki; Kawaguchi, Osamu; Ohira, Takayuki; Ogawa, Kouichi; Ando, Yutaka; Nakamura, Kayoko; Kubo, Atsushi

    1998-01-01

    Purpose: We developed a positioning method that does not depend on the positioning mechanism originally annexed to the linac and investigated the positioning errors of the system. Methods and Materials: A small video camera was placed at a location optically identical to the linac x-ray source. A target pointer comprising a convex lens and a bull's eye was attached to the arc of the Leksell stereotactic system so that the lens would form a virtual image of the bull's eye (virtual target) at the position of the center of the arc. The linac gantry and target pointer were placed at the side and top to adjust the arc center to the isocenter by referring to the virtual target. Coincidence of the target and the isocenter could be confirmed in any combination of couch and gantry rotation. In order to evaluate the accuracy of the positioning, a tungsten ball was attached to the stereotactic frame as a simulated target, which was repeatedly localized and repositioned to estimate the magnitude of the error. The center of the circular field defined by the collimator was marked on the film. Results: The differences between the marked centers of the circular field and the centers of the shadow of the simulated target were less than 0.3 mm.

  14. Streaming Audio and Video: New Challenges and Opportunities for Museums.

    Science.gov (United States)

    Spadaccini, Jim

    Streaming audio and video present new challenges and opportunities for museums. Streaming media is easier to author and deliver to Internet audiences than ever before; digital video editing is commonplace now that the tools--computers, digital video cameras, and hard drives--are so affordable; the cost of serving video files across the Internet…

  15. Integrating IPix immersive video surveillance with unattended and remote monitoring (UNARM) systems

    International Nuclear Information System (INIS)

    Michel, K.D.; Klosterbuer, S.F.; Langner, D.C.

    2004-01-01

    Commercially available IPix cameras and software are being researched as a means by which an inspector can be virtually immersed into a nuclear facility. A single IPix camera can provide 360 by 180 degree views with full pan-tilt-zoom capability, and with no moving parts on the camera mount. Immersive video technology can be merged into the current Unattended and Remote Monitoring (UNARM) system, thereby providing an integrated system of monitoring capabilities that tie together radiation, video, isotopic analysis, Global Positioning System (GPS), etc. The integration of the immersive video capability with other monitoring methods already in place provides a significantly enhanced situational awareness to the International Atomic Energy Agency (IAEA) inspectors.

  16. Video-based Mobile Mapping System Using Smartphones

    Science.gov (United States)

    Al-Hamad, A.; Moussa, A.; El-Sheimy, N.

    2014-11-01

    The last two decades have witnessed huge growth in the demand for geo-spatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources of geo-spatial data. Mobile Mapping Systems (MMS) are one of the main sources of mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from a moving platform (e.g., cars, airplanes, etc.). Although MMS can provide accurate mapping solutions for different GIS applications, the cost of these systems is not affordable for many users, and only large-scale companies and institutions can benefit from them. The main objective of this paper is to propose a new low-cost MMS with reasonable accuracy using the sensors available in smartphones, including the video camera. Using the smartphone video camera, instead of capturing individual images, makes the system easier to use for non-professional users, since the system automatically extracts the highly overlapping frames out of the video without user intervention. Results of the proposed system are presented, demonstrating the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared to the results obtained from using separately captured images instead of video.
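
    One plausible way to realize the automatic extraction of highly overlapping frames is to keep a new keyframe whenever feature overlap with the last kept frame falls below a threshold. The sketch below uses ORB matching as the overlap proxy and an illustrative threshold; it is not the authors' exact criterion.

```python
import cv2

def extract_keyframes(video_path, min_matches=150):
    """Keep a frame whenever feature overlap with the last kept frame
    drops below min_matches (an assumed proxy for image overlap)."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    cap = cv2.VideoCapture(video_path)
    keyframes, ref_desc = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, desc = orb.detectAndCompute(gray, None)
        if desc is None:
            continue
        if ref_desc is None or len(matcher.match(ref_desc, desc)) < min_matches:
            keyframes.append(frame)   # overlap fell low: new keyframe
            ref_desc = desc
    cap.release()
    return keyframes
```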

  17. Video game performances are preserved in ADHD children compared with controls.

    Science.gov (United States)

    Bioulac, Stéphanie; Lallemand, Stéphanie; Fabrigoule, Colette; Thoumy, Anne-Laure; Philip, Pierre; Bouvard, Manuel Pierre

    2014-08-01

    Although ADHD and excessive video game playing have received some attention, few studies have explored the performances of ADHD children when playing video games. The authors hypothesized that performances of ADHD children would be as good as those of control children in motivating video games tasks but not in the Continuous Performance Test II (CPT II). The sample consisted of 26 ADHD children and 16 control children. Performances of ADHD and control children were compared on three commercially available games, on the repetition of every game, and on the CPT II. ADHD children had lower performances on the CPT II than did controls, but they exhibited equivalent performances to controls when playing video games at both sessions and on all three games. When playing video games, ADHD children present no difference in inhibitory performances compared with control children. This demonstrates that cognitive difficulties in ADHD are task dependent. © 2012 SAGE Publications.

  18. Real-Time Acquisition of High Quality Face Sequences from an Active Pan-Tilt-Zoom Camera

    DEFF Research Database (Denmark)

    Haque, Mohammad A.; Nasrollahi, Kamal; Moeslund, Thomas B.

    2013-01-01

    Traditional still camera-based facial image acquisition systems in surveillance applications produce low quality face images. This is mainly due to the distance between the camera and subjects of interest. Furthermore, people in such videos usually move around, change their head poses, and vary their facial expressions. This paper presents a pan-tilt-zoom (PTZ) camera-based real-time high-quality face image acquisition system, which utilizes pan-tilt-zoom parameters of a camera to focus on a human face in a scene and employs a face quality assessment method to log the best quality faces from the captured frames. The system consists of four modules: face detection, camera control, face tracking, and face quality assessment before logging. Experimental results show that the proposed system can effectively log the high quality faces from the active camera in real-time (an average of 61.74 ms was spent per frame) with an accuracy of 85.27% compared to human annotated data.
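
    The face quality assessment module could, for example, combine sharpness, brightness, and resolution into a single score, as in this illustrative sketch. The weights and normalizations below are assumptions, not those of the cited system.

```python
import cv2

def face_quality(face_img):
    """Score a BGR face crop by sharpness, brightness, and size.
    All weights and scale factors are illustrative assumptions."""
    gray = cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()       # focus proxy
    brightness = 1.0 - abs(gray.mean() / 255.0 - 0.5) * 2.0 # mid-tone best
    size = min(gray.shape) / 100.0                          # resolution proxy
    return (0.5 * min(sharpness / 100.0, 1.0)
            + 0.3 * brightness
            + 0.2 * min(size, 1.0))

# Usage sketch: keep the best-quality crop seen for a track.
#   best = max(candidate_crops, key=face_quality)
```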

  19. Video Conferencing for a Virtual Seminar Room

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Fosgerau, A.; Hansen, Peter Søren K.

    2002-01-01

    A PC-based video conferencing system for a virtual seminar room is presented. The platform is enhanced with DSPs for audio and video coding and processing. A microphone array is used to facilitate audio based speaker tracking, which is used for adaptive beam-forming and automatic camera...

  20. Video Classification and Adaptive QoP/QoS Control for Multiresolution Video Applications on IPTV

    Directory of Open Access Journals (Sweden)

    Huang Shyh-Fang

    2012-01-01

    Full Text Available With the development of heterogeneous networks and video coding standards, multiresolution video applications over networks have become important. It is critical to ensure the service quality of the network for time-sensitive video services. Worldwide Interoperability for Microwave Access (WiMAX) is a good candidate for delivering video signals because through WiMAX the delivery quality based on the quality-of-service (QoS) setting can be guaranteed. The selection of suitable QoS parameters is, however, not trivial for service users. Instead, what a video service user really cares about is the video quality of presentation (QoP), which includes the video resolution, the fidelity, and the frame rate. In this paper, we present a quality control mechanism for multiresolution video coding structures over WiMAX networks and also investigate the relationship between QoP and QoS in end-to-end connections. Consequently, the video presentation quality can be simply mapped to the network requirements by a mapping table, and the end-to-end QoS is thereby achieved. We performed experiments with multiresolution MPEG coding over WiMAX networks. In addition to the QoP parameters, video characteristics, such as picture activity and video mobility, also affect the QoS significantly.
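
    The mapping-table idea can be sketched as a simple lookup from QoP triples to QoS parameters. All entries below are illustrative assumptions, not values from the paper; the WiMAX service class names (UGS, rtPS, nrtPS) are standard scheduling classes.

```python
# Illustrative QoP -> QoS mapping table in the spirit of the record.
# (resolution, fps, fidelity) -> (min bit rate kbit/s, WiMAX service class)
QOP_TO_QOS = {
    ("QCIF", 15, "low"):    (128,  "nrtPS"),
    ("CIF",  30, "medium"): (512,  "rtPS"),
    ("SD",   30, "high"):   (2000, "rtPS"),
    ("HD",   30, "high"):   (6000, "UGS"),
}

def qos_for(resolution, fps, fidelity):
    """Look up the network requirements for a requested presentation
    quality; raises KeyError for combinations outside the table."""
    return QOP_TO_QOS[(resolution, fps, fidelity)]
```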

  1. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    Directory of Open Access Journals (Sweden)

    Steven Nicholas Graves, MA

    2015-02-01

    Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  2. High-Speed Video Analysis in a Conceptual Physics Class

    Science.gov (United States)

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software.2,3 Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be easily captured at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper discusses using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this involves the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.

  3. Jellyfish support high energy intake of leatherback sea turtles (Dermochelys coriacea): video evidence from animal-borne cameras.

    Directory of Open Access Journals (Sweden)

    Susan G Heaslip

    Full Text Available The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long distance migrations (1000s of km) from nesting beaches to high latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate) correlate with the daytime foraging behavior of leatherbacks (n = 19) in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08-3:38 h), and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata) was the dominant prey (83-100%), but moon jellyfish (Aurelia aurita) were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models). Handling time increased with prey size regardless of prey species (p = 0.0001). Estimates of energy intake averaged 66,018 kJ·d⁻¹ but were as high as 167,797 kJ·d⁻¹, corresponding to turtles consuming an average of 330 kg wet mass·d⁻¹ (up to 840 kg·d⁻¹), or approximately 261 (up to 664) jellyfish·d⁻¹. Assuming our turtles averaged 455 kg body mass, they consumed an average of 73% of their body mass·d⁻¹, equating to an average energy intake of 3-7 times their daily metabolic requirements, depending on the estimates used. This study provides evidence that feeding tactics used by leatherbacks in Atlantic Canadian waters are highly profitable and our results are consistent with estimates of mass gain prior to

  4. Observations of sea-ice conditions in the Antarctic coastal region using ship-board video cameras

    Directory of Open Access Journals (Sweden)

    Haruhito Shimoda

    1997-03-01

    Full Text Available During the 30th, 31st, and 32nd Japanese Antarctic Research Expeditions (JARE-30, JARE-31, and JARE-32), sea-ice conditions were recorded by video camera on board the SHIRASE. The sea-ice images were then used to estimate compactness and thickness quantitatively. The legs analyzed are those toward Breid Bay and from Breid Bay to Syowa Station during JARE-30 and JARE-31, and those toward the Prince Olav Coast, from the Prince Olav Coast to Breid Bay, and from Breid Bay to Syowa Station during JARE-32. The results show yearly variations of ice compactness and thickness, latitudinal variations of thickness, and differences in thickness histograms between JARE-30 and JARE-32 in Lützow-Holm Bay. Albedo values were measured simultaneously by a shortwave radiometer; these values are proportional to ice compactness. Finally, we examined the relationship between ice compactness and the vertical gradient of air temperature above the sea ice.

  5. Novel computer-based endoscopic camera

    Science.gov (United States)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization, by reducing over-exposed glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and the patented Adaptive Sensitivity™ scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host media via network. The patient data included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors and can search for stored images/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed over the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  6. Quality control of plane and tomographic gamma cameras

    International Nuclear Information System (INIS)

    Moretti, J.L.; Roussi, A.

    1993-01-01

    In this article, the authors present different methods of gamma camera quality control covering uniformity, spatial resolution, spatial linearity, sensitivity, energy resolution, counting-rate performance, and SPECT parameters. The authors refer mainly to NEMA standards. 14 figs., 8 tabs

  7. Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system

    Science.gov (United States)

    Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2010-05-01

    Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed an example-based super-resolution method that enhances an image through pixel-based texton substitution to reduce the computational cost. In that method, however, we only considered the enhancement of a texture image. In this study, we modified this texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The results showed that the fine detail of the low-resolution video can be reproduced, compared with bicubic interpolation, and that the required bandwidth of the video camera could be reduced to about 1/5. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with images processed using bicubic interpolation, and the average PSNRs were higher than those obtained by the well-known Freeman's patch-based super-resolution method. Compared with Freeman's patch-based super-resolution method, the computational time of our method was reduced to almost 1/10.

  8. Performance and quality control of scintillation cameras

    International Nuclear Information System (INIS)

    Moretti, J.L.; Iachetti, D.

    1983-01-01

    Acceptance testing and quality control assurance of gamma cameras are part of diagnostic quality in clinical practice. Several parameters are required to achieve good diagnostic reliability: intrinsic spatial resolution, spatial linearity, uniformity, energy resolution, count-rate characteristics, and multiple-window spatial analysis. Each parameter was measured and also estimated by a test that is easy to implement in routine practice. The material required was a 4028 multichannel analyzer linked to a microcomputer, minicomputers, and a set of phantoms (parallel slits, diffusing phantom, orthogonal-hole transmission pattern). The gamma cameras under study were: CGR 3400, CGR 3420, G.E. 4000, Siemens ZLC 75, and large-field Philips. Several tests proposed by NEMA and WHO need improvement, as they rely on overly localized spatial determinations during distortion measurements with multiple windows. Image contrast needs to be monitored at high counting rates. This study shows the need to avoid purely local determinations and the value of giving sets of values of the same parameter over the whole field and of reporting mean values with their standard deviation. [fr]

  9. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Design to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  10. Automatic multi-camera calibration for deployable positioning systems

    Science.gov (United States)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
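
    A compact sketch of the pairwise calibration step using OpenCV, which implements the 5-point method internally: given matched image points and the intrinsic matrix K, estimate the essential matrix and recover the relative pose. Note that the translation is only defined up to scale, as is inherent to the method.

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Relative orientation R and unit-scale translation t between two
    intrinsically calibrated cameras, from matched image points, via
    the 5-point essential-matrix method mentioned in the record.

    pts1, pts2 : N x 2 float arrays of corresponding points
    K          : 3 x 3 intrinsic camera matrix
    """
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # t is recoverable only up to scale
```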

  11. Face Recognition and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Swapnil Vitthal Tathe

    2017-07-01

    Full Text Available Advancement in computer vision technology and the availability of video capturing devices such as surveillance cameras have evoked new video processing applications. Research in video face recognition is mostly biased towards law enforcement applications. Applications involve human recognition based on face and iris, human-computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework that is capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score, and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance against changes due to illumination, environmental factors, scale, pose, and orientation.
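
    A minimal detect-and-track loop in the spirit of the framework, showing Haar detection plus Kalman smoothing of the face centre; the Gabor recognition and correlation matching stages are omitted, and the noise covariance is an illustrative assumption.

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter: state (x, y, vx, vy), measures (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)  # assumed tuning

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # webcam as a stand-in video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    pred = kf.predict()            # predicted face centre for this frame
    faces = cascade.detectMultiScale(
        cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if len(faces):
        x, y, w, h = faces[0]      # take the first detection
        kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
cap.release()
```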

  12. Script Design for Information Film and Video.

    Science.gov (United States)

    Shelton, S. M. (Marty); And Others

    1993-01-01

    Shows how the empathy created in the audience by each of the five genres of film/video is a function of the five elements of film design: camera angle, close up, composition, continuity, and cutting. Discusses film/video script designing. Illustrates these concepts with a sample script and story board. (SR)

  13. Implementation of an image acquisition and processing system based on FlexRIO, CameraLink and areaDetector

    Energy Technology Data Exchange (ETDEWEB)

    Esquembri, S.; Ruiz, M. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Barrera, E., E-mail: eduardo.barrera@upm.es [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Sanz, D.; Bustos, A. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Castro, R.; Vega, J. [National Fusion Laboratory, CIEMAT, Madrid (Spain)

    2016-11-15

    Highlights: • The system presented acquires and processes images from any CameraLink-compliant camera. • The frame grabber implemented with FlexRIO technology has image time-stamping and preprocessing capabilities. • The system is integrated into EPICS using areaDetector for flexible configuration of the image acquisition and processing chain. • It is fully compatible with the architecture of the ITER Fast Controllers. - Abstract: Image processing systems are commonly used in current physics experiments, such as nuclear fusion experiments. These experiments usually require multiple cameras with different resolutions, framerates and, frequently, different software drivers. The integration of heterogeneous types of cameras without a unified hardware and software interface increases the complexity of the acquisition system. This paper presents the implementation of a distributed image acquisition and processing system for CameraLink cameras. This system implements a camera frame grabber using Field Programmable Gate Arrays (FPGAs), a reconfigurable hardware platform that allows for image acquisition and real-time preprocessing. The frame grabber is integrated into the Experimental Physics and Industrial Control System (EPICS) using the areaDetector EPICS software module, which offers a common interface shared among tens of cameras to configure the image acquisition and process the images in a distributed control system. The use of areaDetector also allows the image processing to be parallelized and concatenated using multiple computers, areaDetector plugins, and the areaDetector standard data type, NDArrays. The architecture developed is fully compatible with ITER Fast Controllers, and the entire system has been validated using a camera hardware simulator that streams videos from fusion experiment databases.

  14. Measuring frequency of one-dimensional vibration with video camera using electronic rolling shutter

    Science.gov (United States)

    Zhao, Yipeng; Liu, Jinyue; Guo, Shijie; Li, Tiejun

    2018-04-01

    Cameras offer a unique capability of collecting high-density spatial data from a distant scene of interest. They can be employed as remote monitoring or inspection sensors to measure vibrating objects because of their commonplace availability, simplicity, and potentially low cost. A drawback of vibration measurement with cameras is the massive volume of data they generate. To reduce the data collected from the camera, a camera with an electronic rolling shutter (ERS) is applied to measure the frequency of one-dimensional vibration whose frequency is much higher than the frame rate of the camera. Every row in an image captured by the ERS camera records the vibrating displacement at a different time. The displacements that form the vibration can be extracted by local analysis with sliding windows. This methodology is demonstrated on vibrating structures, a cantilever beam, and an air compressor to validate the proposed algorithm. Suggestions for applications of this methodology and challenges in real-world implementation are given at the end.
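
    A simplified sketch of the row-as-sample idea: treat each image row as one time sample taken at the row readout rate and recover the dominant frequency with an FFT. The constant-row-rate assumption (ignoring inter-frame blanking) and the single-frame edge-localization step are simplifications of the paper's sliding-window analysis.

```python
import numpy as np

def vibration_frequency(image, fps, edge_column_band):
    """Estimate a 1-D vibration frequency from one rolling-shutter frame.

    image            : 2-D grayscale array from an ERS camera
    fps              : nominal camera frame rate
    edge_column_band : column slice containing the vibrating edge
    """
    rows, _ = image.shape
    band = image[:, edge_column_band].astype(float)
    # Per-row gradient peak approximates the edge displacement at the
    # instant that row was read out.
    disp = np.argmax(np.abs(np.diff(band, axis=1)), axis=1).astype(float)
    disp -= disp.mean()
    row_rate = rows * fps                         # samples per second
    spectrum = np.abs(np.fft.rfft(disp))
    freqs = np.fft.rfftfreq(rows, d=1.0 / row_rate)
    return freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin
```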

  15. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano

    2014-04-01

    Full Text Available In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events in video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment by receiving images from multiple acquisition devices at video frequency. Offering relevant information to higher-level systems and monitoring and making decisions in real time, it must meet a set of requirements, such as time constraints, high availability, robustness, high processing speed, and reconfigurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  16. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    Science.gov (United States)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for endoscopy should be small and able to produce a good quality image or video, to reduce discomfort of the patients and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm × 1 mm × 1.65 mm is used. Due to the physical properties of the sensors and the limitations of the human vision system, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented using a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. Control and data transfer are done through a USB 3.0 endpoint in the computer. The fully developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6 LX150 FPGA.
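
    As a software model of one stage in such a pipeline, gamma correction reduces to a per-pixel table lookup, which is also how it maps naturally onto FPGA block RAM. The gamma value and bit depth below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def gamma_lut(gamma=2.2, bits=8):
    """Build the lookup table a pipeline stage would typically use for
    gamma correction (values illustrative)."""
    levels = 2 ** bits
    x = np.arange(levels) / (levels - 1)
    return np.round((x ** (1.0 / gamma)) * (levels - 1)).astype(np.uint8)

def apply_gamma(raw_frame, lut=gamma_lut()):
    """Pure per-pixel table lookup on a uint8 frame: exactly the kind of
    stage that maps naturally onto FPGA block RAM."""
    return lut[raw_frame]
```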

  17. FPS camera sync and reset chassis

    International Nuclear Information System (INIS)

    Yates, G.J.

    1980-06-01

    The sync and reset chassis provides all the circuitry required to synchronize an event to be studied, a remote free-running focus projection and scanning (FPS) data-acquisition TV camera, and a video signal recording system. The functions, design, and operation of this chassis are described in detail

  18. Illusory control, gambling, and video gaming: an investigation of regular gamblers and video game players.

    Science.gov (United States)

    King, Daniel L; Ejova, Anastasia; Delfabbro, Paul H

    2012-09-01

    There is a paucity of empirical research examining the possible association between gambling and video game play. In two studies, we examined the association between video game playing, erroneous gambling cognitions, and risky gambling behaviour. One hundred and fifteen participants, including 65 electronic gambling machine (EGM) players and 50 regular video game players, were administered a questionnaire that examined video game play, gambling involvement, problem gambling, and beliefs about gambling. We then assessed each group's performance on a computerised gambling task that involved real money. A post-game survey examined perceptions of the skill and chance involved in the gambling task. The results showed that video game playing itself was not significantly associated with gambling involvement or problem gambling status. However, among those persons who both gambled and played video games, video game playing was uniquely and significantly positively associated with the perception of direct control over chance-based gambling events. Further research is needed to better understand the nature of this association, as it may assist in understanding the impact of emerging digital gambling technologies.

  19. Continuous Learning of a Multilayered Network Topology in a Video Camera Network

    Directory of Open Access Journals (Sweden)

    Zou Xiaotao

    2009-01-01

    Full Text Available A multilayered camera network architecture with nodes as entry/exit points, cameras, and clusters of cameras at different layers is proposed. Unlike existing methods that used discrete events or appearance information to infer the network topology at a single level, this paper integrates face recognition that provides robustness to appearance changes and better models the time-varying traffic patterns in the network. The statistical dependence between the nodes, indicating the connectivity and traffic patterns of the camera network, is represented by a weighted directed graph and transition times that may have multimodal distributions. The traffic patterns and the network topology may be changing in the dynamic environment. We propose a Monte Carlo Expectation-Maximization algorithm-based continuous learning mechanism to capture the latent dynamically changing characteristics of the network topology. In the experiments, a nine-camera network with twenty-five nodes (at the lowest level) is analyzed both in simulation and in real-life experiments and compared with previous approaches.
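
    A much simplified stand-in for the Monte Carlo EM procedure above is sketched below: for a single edge of the weighted directed graph, observed transit times between two camera nodes are fitted with a Gaussian mixture so that multimodal transition-time distributions can be represented. The data and the model-order criterion are assumptions, not the paper's algorithm.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def fit_transition_model(transit_times, max_modes=3):
          # Fit a (possibly multimodal) transition-time distribution for one
          # edge of the camera graph; the number of modes is picked by BIC.
          t = np.asarray(transit_times, dtype=float).reshape(-1, 1)
          models = [GaussianMixture(n_components=k).fit(t)
                    for k in range(1, max_modes + 1)]
          return min(models, key=lambda m: m.bic(t))

      # Usage: gmm = fit_transition_model([4.1, 4.3, 9.8, 4.0, 10.2])  # seconds
      # gmm.means_ would then expose, e.g., walking vs. vehicle travel modes.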

  1. Utilizing social media and video games to control #DIY microscopes

    Directory of Open Access Journals (Sweden)

    Maxime Leblanc-Latour

    2017-12-01

    Full Text Available Open-source lab equipment is becoming more widespread with the popularization of fabrication tools such as 3D printers, laser cutters, CNC machines, open-source microcontrollers and open-source software. Although many pieces of common laboratory equipment have been developed, software control of these items is sometimes lacking. Specifically, control software that can be easily implemented and enables user input and control over multiple platforms (PC, smartphone, web, etc.). The aim of this proof-of-principle study was to develop and implement software for the control of a low-cost, 3D-printed microscope. Here, we present two approaches which enable microscope control by exploiting the functionality of the social media platform Twitter or player actions inside the video game Minecraft. The microscope was constructed from a modified web-camera and implemented on a Raspberry Pi computer. Three aspects of microscope control were tested, including single image capture, focus control and time-lapse imaging. The Twitter embodiment enabled users to send ‘tweets’ directly to the microscope. Image data acquired by the microscope was then returned to the user through a Twitter reply and stored permanently on the photo-sharing platform Flickr, along with any relevant metadata. Local control of the microscope was also implemented by utilizing the video game Minecraft, for situations in which Internet connectivity is not present or stable. A virtual laboratory was constructed inside the Minecraft world, and player actions inside the laboratory were linked to specific microscope functions. Here, we present the methodology and results of these experiments and discuss possible limitations and future extensions of this work.

  2. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    Science.gov (United States)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) system, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body part and hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were applied to recognize the extracted hand trajectories. In previous research, the Condensation-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand-area changing factor are utilized as new image features, stored in the database after being analyzed by PCA. Each hand gesture trajectory in the database is classified as a one-hand gesture, a two-hand gesture, or a temporal change in the hand blob. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of sign-language-based Japanese and American Sign Language gestures obtained from 5 people. Our experimental recognition results show that better performance is obtained by the PCA-based approach than by the Condensation-based method.
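
    As a hedged sketch of the PCA step described above: trajectory feature vectors are projected onto their principal components before classification. The pipeline below uses a nearest-neighbour classifier as a simple stand-in for the paper's matching scheme; the feature layout and parameters are assumptions.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline

      def train_gesture_classifier(X, y, n_components=20):
          # X: one row per gesture, a fixed-length vector of resampled hand
          # positions plus hand-area change factors; y: gesture labels.
          clf = make_pipeline(PCA(n_components=n_components),
                              KNeighborsClassifier(n_neighbors=1))
          return clf.fit(X, y)

      # Usage: model = train_gesture_classifier(X_train, y_train)
      #        predicted = model.predict(X_test)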

  3. Progress in passive submillimeter-wave video imaging

    Science.gov (United States)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Peiselt, Katja; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Bauer, Frank; Meyer, Hans-Georg

    2014-06-01

    Since 2007 we have been developing passive submillimeter-wave video cameras for personal security screening. In contrast to established portal-based millimeter-wave scanning techniques, these are suitable for stand-off or stealth operation. The cameras operate in the 350 GHz band and use arrays of superconducting transition-edge sensors (TES), reflector optics, and opto-mechanical scanners. Whereas the basic principle of these devices remains unchanged, there has been continuous development of the technical details, such as the detector array, the scanning scheme, and the readout, as well as system integration and performance. The latest prototype of this camera development features a linear array of 128 detectors and a linear scanner capable of a 25 Hz frame rate. Using different types of reflector optics, a field of view of 1 × 2 m² and a spatial resolution of 1-2 cm are provided at object distances of about 5-25 m. We present the concept of this camera and give details on system design and performance. Demonstration videos show its capability for hidden threat detection and illustrate possible application scenarios.

  4. Realization of a gamma emission tomography by a servo-controlled camera and bed

    International Nuclear Information System (INIS)

    Parmentier, M.; Gunzman, D.; Bidet, R.

    1979-01-01

    A gamma camera and a whole-body bed were connected to a minicomputer which automatically controlled their movements. By combining horizontal displacement of the bed with vertical displacement and rotation of the camera, we were able to obtain the equivalent of camera rotation around the bed. This method provides an inexpensive way of realizing gamma emission tomography. [fr]

  5. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

    In this paper, a method for testing rate-control algorithms by using a video source model is suggested. The proposed method significantly improves algorithm testing over large test sets.

  6. On the development of new SPMN diurnal video systems for daylight fireball monitoring

    Science.gov (United States)

    Madiedo, J. M.; Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.

    2008-09-01

    Daylight fireball video monitoring. High-sensitivity video devices are commonly used for the study of the activity of meteor streams during the night. These provide useful data for the determination, for instance, of radiant, orbital and photometric parameters ([1] to [7]). With this aim, during 2006 three automated video stations supported by Universidad de Huelva were set up in Andalusia within the framework of the SPanish Meteor Network (SPMN). These are endowed with 8-9 high-sensitivity wide-field video cameras that achieve a meteor limiting magnitude of about +3. These stations have increased the coverage performed by the low-scan all-sky CCD systems operated by the SPMN and, besides, achieve a time accuracy of about 0.01 s for determining the appearance of meteor and fireball events. Despite these nocturnal monitoring efforts, we realised the need to set up stations for daylight fireball detection. Such an effort was also motivated by the appearance of the two recent meteorite-dropping events of Villalbeto de la Peña [8,9] and Puerto Lápice [10]. Although the Villalbeto de la Peña event was casually videotaped and photographed, no direct pictures or videos were obtained for the Puerto Lápice event. Consequently, in order to perform a continuous recording of daylight fireball events, we set up new automated systems based on CCD video cameras. However, the development of these video stations implies several issues with respect to nocturnal systems that must be properly solved in order to achieve optimal operation. The first of these video stations, also supported by the University of Huelva, was set up in Sevilla (Andalusia) during May 2007. But, of course, fireball association is unequivocal only in those cases when two or more stations record the fireball, and when consequently the geocentric radiant is accurately determined. With this aim, a second diurnal video station is being set up in Andalusia in the facilities of Centro Internacional de Estudios y

  7. REAL TIME SPEED ESTIMATION FROM MONOCULAR VIDEO

    Directory of Open Access Journals (Sweden)

    M. S. Temiz

    2012-07-01

    Full Text Available In this paper, detailed studies performed while developing a real-time system for traffic-flow surveillance, which uses monocular video cameras to estimate vehicle speeds for safe travel, are presented. We assume that the studied road segment is planar and straight, the camera is tilted downward from a bridge, and the length of one line segment in the image is known. In order to estimate the speed of a moving vehicle from a video camera, rectification of the video images is performed to eliminate perspective effects, and then the region of interest (ROI) is determined for tracking the vehicles. Velocity vectors of a sufficient number of reference points are identified on the image of the vehicle in each video frame. For this purpose a sufficient number of points from the vehicle is selected, and these points must be accurately tracked in at least two successive video frames. In the second step, by using the displacement vectors of the tracked points and the elapsed time, the velocity vectors of those points are computed. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. Then the magnitudes of the computed vectors in image space are transformed to object space to find the absolute values of these magnitudes. The accuracy of the estimated speed is approximately ±1-2 km/h. In order to solve the real-time speed estimation problem, the authors have written a software system in the C++ programming language. This software system has been used for all of the computations and test applications.
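
    The core computation described above (pixel displacements mapped from image space to object space, then divided by elapsed time) can be sketched as follows. It assumes a homography H, estimated beforehand from the known line segment on the planar road, that maps image pixels to road-plane metres; this is an illustrative sketch, not the authors' C++ implementation.

      import numpy as np
      import cv2

      def vehicle_speed_kmh(p_t0, p_t1, H, dt):
          # p_t0, p_t1: the same tracked point (x, y) in two frames dt seconds
          # apart; H: 3x3 image-to-road-plane homography (pixels -> metres).
          pts = np.float32([[p_t0], [p_t1]])            # shape (2, 1, 2)
          road = cv2.perspectiveTransform(pts, H).reshape(2, 2)
          dist_m = np.linalg.norm(road[1] - road[0])    # metres on the road
          return dist_m / dt * 3.6                      # m/s -> km/h

      # Usage (hypothetical values, tracking across 2 frames at 25 fps):
      # speed = vehicle_speed_kmh((412, 300), (415, 322), H, 2 / 25.0)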

  8. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  9. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    real world); proprioceptive and exteroceptive sensors allowing the recreation of the 3D geometric database of an environment (virtual world). The virtual world is projected onto a video display terminal (VDT). Computer-generated and video ...

  10. PEOPLE REIDENTIFICATION IN A DISTRIBUTED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    Icaro Oliveira de Oliveira

    2010-06-01

    Full Text Available This paper presents an approach to the object reidentification problem in a distributed camera network system. The reidentification or reacquisition problem consists essentially of the matching process of images acquired from different cameras. This work applies to an environment monitored by cameras. The application is important to modern security systems, in which identifying the targets present in the environment expands the capacity of action by security agents in real time and provides important parameters, such as localization, for each target. We used the targets' interest points and color as features for reidentification. Satisfactory results were obtained from real experiments on public video datasets and on synthetic images with noise.

  11. HDR video synthesis for vision systems in dynamic scenes

    Science.gov (United States)

    Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried

    2016-09-01

    High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose a HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
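
    The weighted-averaging step at the heart of the approach above can be illustrated compactly. The sketch below assumes the LDR frames have already been aligned and brightness-adapted (the paper's projective alignment and temporal filtering are omitted) and fuses them into a radiance map with a simple well-exposedness weight; the weighting function is an assumption.

      import numpy as np

      def hdr_fuse(frames, exposures):
          # frames: aligned float32 images in [0, 1]; exposures: exposure
          # times in seconds, circularly alternating as in the paper.
          acc = np.zeros_like(frames[0], dtype=np.float64)
          wsum = np.zeros_like(acc)
          for img, t in zip(frames, exposures):
              w = 1.0 - np.abs(2.0 * img - 1.0)  # favour mid-tones, downweight clipped pixels
              acc += w * (img / t)               # per-frame radiance estimate
              wsum += w
          return acc / np.maximum(wsum, 1e-6)

      # Usage: hdr = hdr_fuse([f_short, f_mid, f_long], [1/2000, 1/500, 1/125])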

  12. Video-assisted functional assessment of index pollicisation in congenital anomalies.

    Science.gov (United States)

    Mas, Virginie; Ilharreborde, Brice; Mallet, Cindy; Mazda, Keyvan; Simon, Anne-Laure; Jehanno, Pascal

    2016-08-01

    Functional results of index pollicisation are usually assessed by the clinical score of Percival. This score is based on elementary hand movements and does not reflect the function of the neo-thumb in daily life activities. The aim of this study was to develop a new video-assisted scoring system based on daily life activities to assess the functional outcomes of index pollicisation. Twenty-two consecutive children, operated on between 1998 and 2012, were examined at a mean of 77 months after surgery. The mean age at surgery was 34 months. Post-operative results were evaluated by a new video-assisted 14-point scoring system consisting of seven basic tasks that are frequently used in daily activities. The series of tasks was performed both at the request of the examiner and in real-life conditions with the use of a hidden camera. Each video recording was examined by three different examiners. Each examiner rated the video recordings three times, with an interval of one week between examinations. Inter- and intra-observer agreements were calculated. Inter- and intra-observer agreements were excellent both on request (κ = 0.87 [0.84-0.97] for inter-observer agreement and 0.92 [0.82-0.98] for intra-observer agreement) and on hidden camera (κ = 0.83 [0.78-0.91] for inter-observer agreement and 0.89 [0.83-0.96] for intra-observer agreement). The results were significantly better on request than on hidden camera (p = 0.045). The correlation between the video-assisted scoring system and the Percival score was poor. The video-assisted scoring system is a reliable tool for assessing the functional outcomes of index pollicisation. The scoring system on hidden camera is more representative of the neo-thumb use in complex daily life movements. Level IV.

  13. Video systems for alarm assessment

    International Nuclear Information System (INIS)

    Greenwoll, D.A.; Matter, J.C.; Ebel, P.E.

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs

  14. Video systems for alarm assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  15. Effectance and control as determinants of video game enjoyment

    NARCIS (Netherlands)

    Klimmt, C.; Hartmann, T.; Frey, A.

    2007-01-01

    This article explores video game enjoyment originated by games' key characteristic, interactivity. An online experiment (N = 500) tested experiences of effectance (perceived influence on the game world) and of being in control as mechanisms that link interactivity to enjoyment. A video game was

  16. The Use of Camera Traps in Wildlife

    Directory of Open Access Journals (Sweden)

    Yasin Uçarlı

    2013-11-01

    Full Text Available Camera traps are increasingly used in the abundance and density estimation of wildlife species. Camera traps are a very good alternative to direct observation, particularly in steep terrain, in densely vegetated areas, or for nocturnal species. The main reason for using camera traps is that they eliminate economic, personnel, and time costs while recording continuously at several different points at the same time. Camera traps are motion- and heat-sensitive and, depending on the model, can take photos or video. Crossing points and the feeding or mating areas of the focal species are the priority locations for setting camera traps. Population size can be estimated from the images combined with capture-recapture methods, and population density is the population size divided by the effective sampling area. The mating and breeding seasons, habitat choice, group structures and survival rates of the focal species can also be derived from the images. Camera traps are thus an economical and very useful way to obtain the necessary data on particularly elusive species for planning and conservation efforts.
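
    As a hedged illustration of the capture-recapture estimate mentioned above, the sketch below applies the classic Lincoln-Petersen estimator and then divides abundance by the effective sampling area; the counts and area are invented for the example.

      def lincoln_petersen(n1, n2, m2):
          # n1 individuals photographed in session 1, n2 in session 2,
          # m2 of which were already identified in session 1.
          return n1 * n2 / m2

      # Hypothetical camera-trap counts over a 40 km^2 effective area:
      N = lincoln_petersen(18, 15, 6)   # estimated abundance: 45 individuals
      density = N / 40.0                # ~1.1 individuals per km^2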

  17. Video Field Studies with your Cell Phone

    DEFF Research Database (Denmark)

    Buur, Jacob; Fraser, Euan

    2010-01-01

    Pod? Or with the GoPRO sports camera? Our approach has a strong focus on how to use video in design, rather than on the technical side. The goal is to engage design teams in meaningful discussions based on user empathy, rather than to produce beautiful videos. Basically it is a search for a minimalist way...

  18. Virtual displays for 360-degree video

    Science.gov (United States)

    Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.

    2012-03-01

    In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.

  19. A negative association between video game experience and proactive cognitive control.

    Science.gov (United States)

    Bailey, Kira; West, Robert; Anderson, Craig A

    2010-01-01

    Some evidence demonstrates that video game experience has a beneficial effect on visuospatial cognition. In contrast, other evidence indicates that video game experience may be negatively related to cognitive control. In this study we examined the specificity of the influence of video game experience on cognitive control. Participants with high and low video game experience performed the Stroop task while event-related brain potentials were recorded. The behavioral data revealed no difference between high and low gamers for the Stroop interference effect and a reduction in the conflict adaptation effect in high gamers. The amplitude of the medial frontal negativity and a frontal slow wave was attenuated in high gamers, and there was no effect of gaming status on the conflict slow potential. These data lead to the suggestion that video game experience has a negative influence on proactive, but not reactive, cognitive control.

  20. An alternate way for image documentation in gamma camera processing units

    International Nuclear Information System (INIS)

    Schneider, P.

    1980-01-01

    For documentation of images and curves generated by a gamma camera processing system, a film exposure tool from a CT system was linked to the video monitor by use of a resistance bridge. The machine has a stock capacity of 100 plain films. The advantages are that no interface is needed, the complete information on the monitor is transferred to the plain film and, compared with software-controlled data output on a printer or plotter, the device is tremendously time-saving. (orig.) [de]

  1. The modular integrated video system (MIVS)

    International Nuclear Information System (INIS)

    Schneider, S.L.; Sonnier, C.S.

    1987-01-01

    The Modular Integrated Video System (MIVS) is being developed for the International Atomic Energy Agency (IAEA) for use in facilities where mains power is available and the separation of the Camera and Recording Control Unit is desirable. The system is being developed under the US Program for Technical Assistance to the IAEA Safeguards (POTAS). The MIVS is designed to be a user-friendly system, allowing operation with minimal effort and training. The system software, through the use of a Liquid Crystal Display (LCD) and four soft keys, leads the inspector through the setup procedures to accomplish the intended surveillance or maintenance task. Review of surveillance data is accomplished with the use of a Portable Review Station. This Review Station will aid the inspector in the review process and determine the number of missed video scenes during a surveillance period

  2. Spatiotemporal video deinterlacing using control grid interpolation

    Science.gov (United States)

    Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin

    2015-03-01

    With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher-quality but also more computationally intensive deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control-grid-interpolation-based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state of the art in terms of peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
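
    The spatial/temporal blending idea described above can be reduced to a toy form: fill each missing line with a weighted mix of a spatial estimate (the average of the neighbouring lines) and a temporal estimate (the co-located line of the previous frame). The per-pixel weights here stand in for the paper's spectral-residue saliency measure and control grid interpolation, both of which are omitted.

      import numpy as np

      def deinterlace_blend(frame, prev_frame, alpha):
          # frame: current grayscale frame with odd lines missing/stale;
          # prev_frame: previous full frame; alpha: per-pixel weights in
          # [0, 1], 1 = purely spatial, 0 = purely temporal.
          out = frame.astype(np.float32).copy()
          prev = prev_frame.astype(np.float32)
          for y in range(1, out.shape[0] - 1, 2):        # the missing lines
              spatial = 0.5 * (out[y - 1] + out[y + 1])  # line averaging
              temporal = prev[y]                         # previous-frame line
              out[y] = alpha[y] * spatial + (1.0 - alpha[y]) * temporal
          return np.clip(out, 0, 255).astype(np.uint8)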

  3. Plant iodine-131 uptake in relation to root concentration as measured in minirhizotron by video camera

    International Nuclear Information System (INIS)

    Moss, K.J.

    1990-09-01

    Glass viewing tubes (minirhizotrons) were placed in the soil beneath native perennial bunchgrass (Agropyron spicatum). The tubes provided access for observing and quantifying plant roots with a miniature video camera and soil moisture estimates by neutron hydroprobe. The radiotracer I-131 was delivered to the root zone at three depths with differing root concentrations. The plant was subsequently sampled and analyzed for I-131. Plant uptake was greater when I-131 was applied at soil depths with higher root concentrations. When I-131 was applied at soil depths with lower root concentrations, plant uptake was less. However, the relationship between root concentration and plant uptake was not a direct one. When I-131 was delivered to deeper soil depths with low root concentrations, the quantity of roots there appeared to be less effective in uptake than the same quantity of roots at shallow soil depths with high root concentration. 29 refs., 6 figs., 11 tabs

  4. Privacy information management for video surveillance

    Science.gov (United States)

    Luo, Ying; Cheung, Sen-ching S.

    2013-05-01

    The widespread deployment of surveillance cameras has raised serious privacy concerns. Many privacy-enhancing schemes have been proposed to automatically redact images of trusted individuals in the surveillance video. To identify these individuals for protection, the most reliable approach is to use biometric signals such as iris patterns as they are immutable and highly discriminative. In this paper, we propose a privacy data management system to be used in a privacy-aware video surveillance system. The privacy status of a subject is anonymously determined based on her iris pattern. For a trusted subject, the surveillance video is redacted and the original imagery is considered to be the privacy information. Our proposed system allows a subject to access her privacy information via the same biometric signal for privacy status determination. Two secure protocols, one for privacy information encryption and the other for privacy information retrieval are proposed. Error control coding is used to cope with the variability in iris patterns and efficient implementation is achieved using surrogate data records. Experimental results on a public iris biometric database demonstrate the validity of our framework.

  5. From different angles : Exploring and applying the design potential of video

    NARCIS (Netherlands)

    Pasman, G.J.

    2012-01-01

    Recent developments in both hardware and software have brought video within the scope of design students as a new visual design tool. Being more and more equipped with cameras, for example in their smartphones, and with video editing programs on their computers, they are increasingly using video to record

  6. Opinion rating of comparison photographs of television pictures from CCD cameras under irradiation

    International Nuclear Information System (INIS)

    Reading, V.M.; Dumbreck, A.A.

    1991-01-01

    As part of the development of a general method of testing the effects of gamma radiation on CCD television cameras, this is a report of an experimental study on the optimisation of still photographic representation of video pictures recorded before and during camera irradiation. (author)

  7. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
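
    The per-camera tracking that this handover scheme builds on is the well-known CamShift algorithm, which can be sketched with OpenCV as below. The decentralized part (the mobile-agent tracker migrating between smart cameras) is omitted; the window layout and parameters are assumptions.

      import cv2
      import numpy as np

      def track_camshift(video_path, init_window):
          # Seed a hue histogram from the object's initial window, then let
          # CamShift follow the object frame by frame, yielding its window.
          cap = cv2.VideoCapture(video_path)
          ok, frame = cap.read()
          x, y, w, h = init_window
          roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
          hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
          cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
          crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
          window = init_window
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
              prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
              _, window = cv2.CamShift(prob, window, crit)
              # The (x, y, w, h) window would be handed over to the next
              # camera when the object leaves this camera's field of view.
              yield window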

  8. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template, similar to a facial muscle distribution. After associated regularization, the time sequences of the trait changes in space-time under complete expression production are arranged line by line in a matrix. Next, the matrix dimensionality is reduced by neighborhood-preserving embedding, a manifold learning method. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates the hidden conditional random field (HCRF) and support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former captures more structural characteristics of the data to be classified in space-time.

  9. A quality control atlas for scintillation camera systems

    International Nuclear Information System (INIS)

    Busemann Sokole, E.; Graham, L.S.; Todd-Pokropek, A.; Wegst, A.; Robilotta, C.C.

    2002-01-01

    Full text: The accurate interpretation of quality control and clinical nuclear medicine image data is coupled to an understanding of image patterns and quantitative results. Understanding is gained by learning from different examples, and knowledge of underlying principles of image production. An Atlas of examples has been created to assist with interpreting quality control tests and recognizing artifacts in clinical examples. The project was initiated and supported by the International Atomic Energy Agency (IAEA). The Atlas was developed and written by Busemann Sokole from image examples submitted from nuclear medicine users from around the world. The descriptive text was written in a consistent format to accompany each image or image set. Each example in the atlas finally consisted of the images; a brief description of the data acquisition, radionuclide/radiopharmaceutical, specific circumstances under which the image was produced; results describing the images and subsequent conclusions; comments, where appropriate, giving guidelines for follow-up strategies and trouble shooting; and occasional literature references. Hardcopy images required digitizing into JPEG format for inclusion into a digital document. Where possible, an example was contained on one page. The atlas was reviewed by an international group of experts. A total of about 250 examples were compiled into 6 sections: planar, SPECT, whole body, camera/computer interface, environment/radioactivity, and display/hardcopy. Subtle loss of image quality may be difficult to detect. SPECT examples, therefore, include simulations demonstrating effects of deterioration in camera performance (e.g. center-of-rotation offset, non-uniformity) or suboptimal clinical performance. The atlas includes normal results, results from poor adjustment of the camera system, poor results obtained at acceptance testing, artifacts due to system malfunction, and artifacts due to environmental situations. Some image patterns are

  10. Radiation-resistant optical sensors and cameras; Strahlungsresistente optische Sensoren und Kameras

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, G. [Imaging and Sensing Technology, Bonn (Germany)

    2008-02-15

    The introduction of video technology, i.e. 'TV', specifically into the nuclear field was considered at an early stage. Possibilities to view spaces in nuclear facilities by means of radiation-resistant optical sensors or cameras are presented. These systems are to enable operators to monitor and control visually the processes occurring within such spaces. Camera systems are used, e.g., for remote surveillance of critical components in nuclear power plants and nuclear facilities, and thus contribute also to plant safety. A different application of radiation-resistant optical systems is the visual inspection of, e.g., reactor pressure vessels and the tracing of small parts inside a reactor. Camera systems are also employed in the remote disassembly of radioactively contaminated old plants. Unfortunately, the niche market of radiation-resistant camera systems hardly gives rise to the expectation of research funds becoming available for the development of new radiation-resistant optical systems for picture taking and viewing. Current efforts are devoted mainly to improvements of image evaluation and image quality. Other items on the agendas of manufacturers are the reduction in camera size, which is limited by the size of picture tubes, and the increased use of commercial CCD cameras together with adequate shielding or improved lenses. Consideration is also being given to the use of peripheral equipment and to data transmission by LAN, WAN, or Internet links to remote locations. (orig.)

  11. Multi-view video segmentation and tracking for video surveillance

    Science.gov (United States)

    Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj

    2009-05-01

    Tracking moving objects is a critical step for smart video surveillance systems. Despite the complexity increase, multiple-camera systems exhibit the undoubted advantages of covering wide areas and handling the occurrence of occlusions by exploiting the different viewpoints. The technical problems in multiple-camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple uncalibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object-labeling algorithm on all views. In the next step, we verify the objects in each view separately for inconsistencies. Corresponding objects are extracted through a homography transform from one view to the other and vice versa. Having found the corresponding objects of the different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and the mapped frame) a set of descriptors is extracted to find the best match between the two views based on region-descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance and disappearance of objects are resolved using information from all views. This method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.

  12. Real-time vehicle matching for multi-camera tunnel surveillance

    Science.gov (United States)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

    Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras which observe dozens of vehicles each, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm by the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data-link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
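
    A hedged sketch of such projection-profile signatures: row and column sums of the grayscale vehicle image are normalized and concatenated into one compact descriptor, and two signatures are compared by a dot product. The normalization and matching score below are assumptions, not the paper's exact formulation.

      import numpy as np

      def signature(gray_vehicle):
          # Horizontal and vertical projection profiles of the vehicle image,
          # each L2-normalized, concatenated into a single 1-D descriptor.
          img = gray_vehicle.astype(np.float64)
          rows = img.sum(axis=1)
          cols = img.sum(axis=0)
          rows /= np.linalg.norm(rows) + 1e-9
          cols /= np.linalg.norm(cols) + 1e-9
          return np.concatenate([rows, cols])

      def match_score(sig_a, sig_b):
          # Higher is better; assumes both vehicle images were resized to the
          # same dimensions before the signatures were computed.
          return float(np.dot(sig_a, sig_b))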

  13. A highly sensitive underwater video system for use in turbid aquaculture ponds.

    Science.gov (United States)

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C

    2016-08-24

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds' benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system's high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.

  14. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    Science.gov (United States)

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper. PMID:22438753
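
    As a hedged sketch of the tagging idea (not the paper's protocol), each frame can carry a small metadata record built from the phone's sensors; read_sensors and the field names below are hypothetical.

      import json
      import time

      def tag_frame(frame_index, sensors):
          # One metadata record per video frame; the list of records is
          # uploaded with the video so the server can run semantic searches.
          return {
              "frame": frame_index,
              "timestamp": time.time(),
              "gps": sensors.get("gps"),             # e.g. (lat, lon)
              "temperature_c": sensors.get("temp"),  # e.g. 27.5
          }

      # Usage: tags = [tag_frame(i, read_sensors()) for i in range(n_frames)]
      #        payload = json.dumps(tags)   # sent alongside the video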

  15. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    Directory of Open Access Journals (Sweden)

    Alvaro Suarez

    2012-02-01

    Full Text Available Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper.

  17. The Use of Smart Glasses for Surgical Video Streaming.

    Science.gov (United States)

    Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu

    2017-04-01

    Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.

  18. Video Golf

    Science.gov (United States)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  19. Limits on surveillance: frictions, fragilities and failures in the operation of camera surveillance.

    NARCIS (Netherlands)

    Dubbeld, L.

    2004-01-01

    Public video surveillance tends to be discussed in either utopian or dystopian terms: proponents maintain that camera surveillance is the perfect tool in the fight against crime, while critics argue that the use of security cameras is central to the development of a panoptic, Orwellian surveillance

  20. Summarization of Surveillance Video Sequences Using Face Quality Assessment

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.; Rahmati, Mohammad

    2011-01-01

    Constantly working surveillance cameras in public places, such as airports and banks, produce huge amounts of video data. Faces in such videos can be extracted in real time. However, most of these detected faces are either redundant or useless. Redundant information adds computational costs to facial

  1. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera which withstands a total dose of 10{sup 6} - 10{sup 8} rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed on the remote-control concept. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  2. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera which withstands a total dose of 10^6 - 10^8 rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed on the remote-control concept. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  3. Logistical and safeguards aspects related to the introduction and implementation of video surveillance equipment by EURATOM

    International Nuclear Information System (INIS)

    Chare, P.J.; Wagner, H.G.; Otto, P.; Schenkel, R.

    1987-01-01

    With the growing availability of reliable video equipment for surveillance applications in safeguards and the disappearance of the Super 8 mm cameras, there will be a period of transition from film camera to video surveillance, a process which started two years ago. This gradual transition, as the film cameras come to the end of their useful lives, will afford the safeguards authorities the opportunity to examine in detail the logistical and procedural changes necessary. This paper examines, on the basis of existing video equipment in use or under development, the differences and problems to be encountered in the approach to surveillance. These problems include tamper resistance of signal transmission, on site and headquarters review, preventative maintenance, reliability, repair, and overall performance evaluation. In addition the advantages and flexibility offered by the introduction of video, such as on site review and increased memory capacity, are also discussed. The paper also considers the overall costs and manpower required by EURATOM for the implementation of the video systems as compared to the existing twin Minolta film camera system to ensure the most efficient use of manpower and equipment

  4. Video monitoring system for enriched uranium casting furnaces

    International Nuclear Information System (INIS)

    Turner, P.C.

    1978-03-01

    A closed-circuit television (CCTV) system was developed to upgrade the remote-viewing capability on two oralloy (highly enriched uranium) casting furnaces in the Y-12 Plant. A silicon vidicon CCTV camera with a remotely controlled lens and infrared filtering was provided to yield a good-quality video presentation of the furnace crucible as the oralloy material is heated from 25 to 1300°C. Existing tube-type CCTV monochrome monitors were replaced with solid-state monitors to increase the system reliability.

  5. General Attitude Control Algorithm for Spacecraft Equipped with Star Camera and Reaction Wheels

    DEFF Research Database (Denmark)

    Wisniewski, Rafal; Kulczycki, P.

    A configuration consisting of a star camera, four reaction wheels and magnetorquers for momentum unloading has become standard for many spacecraft missions. This popularity has motivated numerous agencies and private companies to initiate work on the design of an embedded attitude control system... realized on an integrated circuit. This paper considers two issues: a slew maneuver with a feature of avoiding direct exposure of the camera's CCD chip to the Sun, three-axis attitude control, and optimal control torque distribution in a reaction wheel assembly. The attitude controller is synthesized...

  6. Object Detection and Tracking-Based Camera Calibration for Normalized Human Height Estimation

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-01-01

    Full Text Available This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration. The proposed method consists of three steps: (i) moving human detection and tracking, (ii) automatic camera calibration, and (iii) human height estimation and error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The proposed method can be applied to object-based video surveillance systems and digital forensics.
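
    A crude sketch of the geometry involved (not the paper's algorithm, and under strong assumptions: person upright, feet on the ground plane, camera roughly horizontal): locate the feet on the ground via a homography, then scale the person's pixel height by depth over focal length.

      import numpy as np

      def estimate_height_m(foot_px, head_px, H_ground, f_px):
          # H_ground: image-to-ground-plane homography giving positions in
          # metres relative to the point below the camera; f_px: focal length
          # in pixels, e.g. from the tracking-based calibration step.
          foot = np.array([foot_px[0], foot_px[1], 1.0])
          X = H_ground @ foot
          X = X[:2] / X[2]                      # ground position (metres)
          distance = np.hypot(X[0], X[1])       # horizontal range to person
          h_px = abs(head_px[1] - foot_px[1])   # person height in pixels
          return h_px * distance / f_px         # pinhole-model approximation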

  7. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Full Text Available Nowadays smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth to run any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system aims at the following: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, with the proper information about the place that is now in the camera view.

  8. Intelligent Model for Video Survillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

    Full Text Available A video surveillance system senses and tracks threatening activity in a real-time environment. It protects against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance systems have become key to addressing problems in public security. They are mostly deployed on IP-based networks, so all the security threats that exist in IP-based applications may also threaten video surveillance applications. As a result, cybercrime, illegal video access, mishandling of videos and so on may increase. Hence, in this paper an intelligent model is proposed to secure the video surveillance system, ensuring safety and providing secured access to video.

  9. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    Science.gov (United States)

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signals and the video are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
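
    As a hedged illustration of the analytic geometry mentioned above (a sketch, not the paper's implementation): with a calibrated pinhole camera at a known height whose optical axis is parallel to the floor, a touched pixel can be back-projected onto the floor plane. The intrinsics, camera height, and axis convention below are assumptions.

```python
import numpy as np

# Minimal back-projection sketch (illustrative values, not from the paper):
# camera frame axes are x right, y down, z forward; the floor is the plane
# y = CAM_HEIGHT below the camera.
K = np.array([[800.0,   0.0, 320.0],   # assumed intrinsics (fx, fy, cx, cy)
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
CAM_HEIGHT = 0.25                      # assumed camera height in metres

def pixel_to_ground(u, v):
    """3D point (camera frame) where the ray through pixel (u, v) meets the floor."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    if ray[1] <= 0:                    # ray points at or above the horizon
        raise ValueError("pixel does not intersect the floor")
    return (CAM_HEIGHT / ray[1]) * ray # scale the ray down to the floor plane

print(pixel_to_ground(400, 300))       # ~[0.33, 0.25, 3.33]: about 3.3 m ahead
```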

  10. An integrated multispectral video and environmental monitoring system for the study of coastal processes and the support of beach management operations

    Science.gov (United States)

    Ghionis, George; Trygonis, Vassilis; Karydis, Antonis; Vousdoukas, Michalis; Alexandrakis, George; Drakopoulos, Panos; Amdreadis, Olympos; Psarros, Fotis; Velegrakis, Antonis; Poulos, Serafim

    2016-04-01

    Effective beach management requires environmental assessments that are based on sound science, are cost-effective and are available to beach users and managers in an accessible, timely and transparent manner. The most common problems are: 1) the available field data are scarce and of sub-optimal spatio-temporal resolution and coverage, 2) our understanding of local beach processes needs to be improved in order to accurately model/forecast beach dynamics under a changing climate, and 3) the information provided by coastal scientists/engineers in the form of data, models and scientific interpretation is often too complicated to be of direct use by coastal managers/decision makers. A multispectral video system has been developed, consisting of one or more video cameras operating in the visible part of the spectrum, a passive near-infrared (NIR) camera, an active NIR camera system, a thermal infrared camera and a spherical video camera, coupled with innovative image processing algorithms and a telemetric system for the monitoring of coastal environmental parameters. The complete system has the capability to record, process and communicate (in quasi-real time) high frequency information on shoreline position, wave breaking zones, wave run-up, erosion hot spots along the shoreline, nearshore wave height, turbidity, underwater visibility, wind speed and direction, air and sea temperature, solar radiation, UV radiation, relative humidity, barometric pressure and rainfall. An innovative, remotely-controlled interactive visual monitoring system, based on the spherical video camera (with 360° field of view), combines the video streams from all cameras and can be used by beach managers to monitor (in real time) beach user numbers, flow activities and safety at beaches of high touristic value. The high resolution near infrared cameras permit 24-hour monitoring of beach processes, while the thermal camera provides information on beach sediment temperature and moisture, can

  11. Video game addiction in emerging adulthood: Cross-sectional evidence of pathology in video game addicts as compared to matched healthy controls.

    Science.gov (United States)

    Stockdale, Laura; Coyne, Sarah M

    2018-01-01

    The Internet Gaming Disorder Scale (IGDS) is a widely used measure of video game addiction, a pathology affecting a small percentage of all people who play video games. Emerging adult males are significantly more likely to be video game addicts. Few researchers have examined how people who qualify as video game addicts based on the IGDS compare to controls matched on age, gender, race, and marital status. The current study compared IGDS video game addicts to matched non-addicts in terms of their mental, physical, and social-emotional health, using self-report survey methods. Addicts had poorer mental health and cognitive functioning, including poorer impulse control and more ADHD symptoms, compared to controls. Additionally, addicts displayed increased emotional difficulties, including increased depression and anxiety, felt more socially isolated, and were more likely to display symptoms of pathological internet pornography use. Female video game addicts were at unique risk for negative outcomes. The sample for this study was undergraduate college students and self-report measures were used. Participants who met the IGDS criteria for video game addiction displayed poorer emotional, physical, mental, and social health, adding to the growing evidence that video game addictions are a valid phenomenon. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Acquisition, compression and rendering of depth and texture for multi-view video

    NARCIS (Netherlands)

    Morvan, Y.

    2009-01-01

    Three-dimensional (3D) video and imaging technologies are an emerging trend in the development of digital video systems, as we presently witness the appearance of 3D displays, coding systems, and 3D camera setups. Three-dimensional multi-view video is typically obtained from a set of synchronized

  13. Prevalence of behavior changing strategies in fitness video games: theory-based content analysis.

    Science.gov (United States)

    Lyons, Elizabeth Jane; Hatkevich, Claire

    2013-05-07

    Fitness video games are popular, but little is known about their content. Because many contain interactive tools that mimic behavioral strategies from weight loss intervention programs, it is possible that differences in content could affect player physical activity and/or weight outcomes. There is a need for a better understanding of what behavioral strategies are currently available in fitness games and how they are implemented. The purpose of this study was to investigate the prevalence of evidence-based behavioral strategies across fitness video games available for home use. Games available for consoles that used camera-based controllers were also contrasted with games available for a console that used handheld motion controllers. Fitness games (N=18) available for three home consoles were systematically identified and play-tested by 2 trained coders for at least 3 hours each. In cases of multiple games from one series, only the most recently released game was included. The Sony PlayStation 3 and Microsoft Xbox360 were the two camera-based consoles, and the Nintendo Wii was the handheld motion controller console. A coding list based on a taxonomy of behavioral strategies was used to begin coding. Codes were refined in an iterative process based on data found during play-testing. The most prevalent behavioral strategies were modeling (17/18), specific performance feedback (17/18), reinforcement (16/18), caloric expenditure feedback (15/18), and guided practice (15/18). All games included some kind of feedback on performance accuracy, exercise frequency, and/or fitness progress. Action planning (scheduling future workouts) was the least prevalent of the included strategies (4/18). Twelve games included some kind of social integration, with nine of them providing options for real-time multiplayer sessions. Only two games did not feature any kind of reward. Games for the camera-based consoles (mean 12.89, SD 2.71) included a greater number of strategies than those

  14. Using Photogrammetry to Estimate Tank Waste Volumes from Video

    Energy Technology Data Exchange (ETDEWEB)

    Field, Jim G. [Washington River Protection Solutions, LLC, Richland, WA (United States)

    2013-03-27

    Washington River Protection Solutions (WRPS) contracted with HiLine Engineering & Fabrication, Inc. to assess the accuracy of photogrammetry tools as compared to video Camera/CAD Modeling System (CCMS) estimates. This test report documents the results of using photogrammetry to estimate the volume of waste in tank 241-C-104 from post-retrieval videos and results using photogrammetry to estimate the volume of waste piles in the CCMS test video.

  15. Using Photogrammetry to Estimate Tank Waste Volumes from Video

    International Nuclear Information System (INIS)

    Field, Jim G.

    2013-01-01

    Washington River Protection Solutions (WRPS) contracted with HiLine Engineering and Fabrication, Inc. to assess the accuracy of photogrammetry tools as compared to video Camera/CAD Modeling System (CCMS) estimates. This test report documents the results of using photogrammetry to estimate the volume of waste in tank 241-C-104 from post-retrieval videos and results using photogrammetry to estimate the volume of waste piles in the CCMS test video

  16. First results on video meteors from Crete, Greece

    Science.gov (United States)

    Maravelias, G.

    2012-01-01

    This work presents the first systematic video meteor observations from a forthcoming permanent station in Crete, Greece, operating as the first official node within the International Meteor Organization's Video Network. It consists of a Watec 902 H2 Ultimate camera equipped with a Panasonic WV-LA1208 (focal length 12 mm, f/0.8) lens running MetRec. The system operated for 42 nights during 2011 (August 19-December 30, 2011), recording 1905 meteors. It significantly outperforms a previous system used by the author during the Perseids 2010 (DMK camera 21AF04.AS by The Imaging Source, CCTV lens of focal length 2.8 mm, UFO Capture v2.22), which operated for 17 nights (August 4-22, 2010), recording 32 meteors. Differences between the two software packages (MetRec, UFO Capture) - according to the author's experience - are discussed, along with a small guide to video meteor hardware.

  17. Non-Cooperative Facial Recognition Video Dataset Collection Plan

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Marcia L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Erikson, Rebecca L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lombardo, Nicholas J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-08-31

    The Pacific Northwest National Laboratory (PNNL) will produce a non-cooperative (i.e. not posing for the camera) facial recognition video data set for research purposes to evaluate and enhance facial recognition systems technology. The aggregate data set consists of 1) videos capturing PNNL role players and public volunteers in three key operational settings, 2) photographs of the role players for enrolling in an evaluation database, and 3) ground truth data that documents when the role player is within various camera fields of view. PNNL will deliver the aggregate data set to DHS, who may then choose to make it available to other government agencies interested in evaluating and enhancing facial recognition systems. The three operational settings that will be the focus of the video collection effort include: 1) unidirectional crowd flow, 2) bidirectional crowd flow, and 3) linear and/or serpentine queues.

  18. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gener...

  19. Slew Maneuver Control for Spacecraft Equipped with Star Camera and Reaction Wheels

    DEFF Research Database (Denmark)

    Wisniewski, Rafal; Kulczycki, P.

    2005-01-01

    A configuration consisting of a star camera, four reaction wheels and magnetorquers for momentum unloading has become standard for many spacecraft missions. This popularity has motivated numerous agencies and private companies to initiate work on the design of an embedded attitude control system...... realized on an integrated circuit. This paper provides an easily implementable control algorithm for this type of configuration. The paper considers two issues: slew maneuver with a feature of avoiding direct exposure of the camera's CCD chip to the Sun, three-axis attitude control and optimal control...... torque distribution in a reaction wheel assembly. The attitude controller is synthesized applying the energy shaping technique, where the desired potential function is carefully designed using a physical insight into the nature of the problem. The system stability is thoroughly analyzed and the control...

  20. Automatic video surveillance of outdoor scenes using track before detect

    DEFF Research Database (Denmark)

    Hansen, Morten; Sørensen, Helge Bjarup Dissing; Birkemark, Christian M.

    2005-01-01

    This paper concerns automatic video surveillance of outdoor scenes using a single camera. The first step in automatic interpretation of the video stream is activity detection based on background subtraction. Usually, this process will generate a large number of false alarms in outdoor scenes due...
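
    The first step named in this abstract, activity detection by background subtraction, can be sketched with off-the-shelf components. The snippet below uses OpenCV's MOG2 background model as a stand-in (the paper's own detector and thresholds may differ, and the input file name is hypothetical).

```python
import cv2

# Minimal background-subtraction activity detector in the spirit of the first
# step described above (OpenCV's MOG2 model; illustrative parameters).
cap = cv2.VideoCapture("outdoor_scene.mp4")   # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                       # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,        # suppress speckle noise
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    activity = [c for c in contours if cv2.contourArea(c) > 200]  # area gate vs. false alarms
    if activity:
        print(f"activity detected: {len(activity)} region(s)")
cap.release()
```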

  1. Acceptability of the use of cellular telephone and computer pictures/video for "pill counts" in buprenorphine maintenance treatment.

    Science.gov (United States)

    Welsh, Christopher

    2016-01-01

    As part of a comprehensive plan to attempt to minimize the diversion of prescribed controlled substances, many professional organizations and licensing boards are recommending the use of "pill counts." This study sought to evaluate the acceptability of the use of cellular phone and computer pictures/video for "pill counts" by patients in buprenorphine maintenance treatment. Patients prescribed buprenorphine/naloxone were asked a series of questions related to the type(s) of electronic communication to which they had access, as well as their willingness to use these for the purpose of performing a "pill/film count." Of the 80 patients, 4 (5 percent) did not have a phone at all. Only 28 (35 percent) had a "smart phone" with some sort of data plan and Internet access. Forty (50 percent) of the patients had a phone with no camera and 10 (12.5 percent) had a phone with a camera but no video capability. All patients said that they would be willing to periodically use the video or camera on their phone or computer to have buprenorphine/naloxone pills or film counted, as long as the communication was protected from electronic tampering. With the advent of applications for smart phones that allow for Health Insurance Portability and Accountability Act of 1996-compliant picture/video communication, a number of things can now be done to enhance patient care as well as reduce the chances of misuse/diversion of prescribed medications. This could be used in settings where a larger proportion of controlled substances are prescribed, including medication-assisted therapy for opioid use disorders and pain management programs.

  2. Gamma camera computer system quality control for conventional and tomographic use

    International Nuclear Information System (INIS)

    Laird, E.E.; Allan, W.; Williams, E.D.

    1983-01-01

    The proposition that some of the proposed measurements of gamma camera performance parameters for routine quality control are redundant and that only the uniformity requires daily monitoring was examined. To test this proposition, measurements of gamma camera performance were carried out under normal operating conditions and also with the introduction of faults (offset window, offset PM tube). Results for the uniform flood field are presented for non-uniformity, intrinsic spatial resolution, linearity and relative system sensitivity. The response to introduced faults revealed that while the non-uniformity response pattern of the gamma camera was clearly affected, both measurements and qualitative indications of the other performance parameters did not necessarily show any deterioration. (U.K.)

  3. DistancePPG: Robust non-contact vital signs monitoring using a camera

    Science.gov (United States)

    Kumar, Mayank; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2015-01-01

    Vital signs such as pulse rate and breathing rate are currently measured using contact probes. However, non-contact methods for measuring vital signs are desirable both in hospital settings (e.g. in the NICU) and for ubiquitous in-situ health tracking (e.g. on mobile phones and computers with webcams). Recently, camera-based non-contact vital sign monitoring has been shown to be feasible. However, camera-based vital sign monitoring is challenging for people with darker skin tone, under low lighting conditions, and/or during movement of an individual in front of the camera. In this paper, we propose distancePPG, a new camera-based vital sign estimation algorithm which addresses these challenges. DistancePPG proposes a new method of combining skin-color change signals from different tracked regions of the face using a weighted average, where the weights depend on the blood perfusion and incident light intensity in the region, to improve the signal-to-noise ratio (SNR) of the camera-based estimate. One of our key contributions is a new automatic method for determining the weights based only on the video recording of the subject. The gains in SNR of camera-based PPG estimated using distancePPG translate into a reduction of the error in vital sign estimation, and thus expand the scope of camera-based vital sign monitoring to potentially challenging scenarios. Further, a dataset will be released, comprising synchronized video recordings of the face and pulse oximeter-based ground truth recordings from the earlobe for people with different skin tones, under different lighting conditions and for various motion scenarios. PMID:26137365
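
    The core combination step of distancePPG, a weighted average of per-region color-change signals, can be sketched as follows. The paper derives the weights from blood perfusion and incident light intensity; the spectral-SNR proxy and synthetic signals below are illustrative stand-ins, not the paper's method.

```python
import numpy as np

# Sketch of the weighted-average combination step: per-region skin-color
# signals are averaged with weights reflecting each region's signal quality.
fs = 30.0                                # assumed camera frame rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic per-region signals: a 1.2 Hz pulse buried in region-dependent noise.
regions = [np.sin(2 * np.pi * 1.2 * t) + sigma * rng.standard_normal(t.size)
           for sigma in (0.5, 1.0, 3.0)]

def band_snr(x, lo=0.7, hi=4.0):
    """Crude SNR proxy: peak pulse-band power over total power."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    f = np.fft.rfftfreq(x.size, 1 / fs)
    band = (f >= lo) & (f <= hi)
    return spec[band].max() / spec.sum()

weights = np.array([band_snr(r) for r in regions])
weights /= weights.sum()
combined = sum(w * r for w, r in zip(weights, regions))  # weighted-average PPG estimate
print("region weights:", np.round(weights, 3))           # noisier regions get less weight
```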

  4. Distant Measurement of Plethysmographic Signal in Various Lighting Conditions Using Configurable Frame-Rate Camera

    Directory of Open Access Journals (Sweden)

    Przybyło Jaromir

    2016-12-01

    Full Text Available Videoplethysmography is currently recognized as a promising noninvasive heart rate measurement method, advantageous for ubiquitous monitoring of humans in natural living conditions. Although the method is considered for application in several areas including telemedicine, sports and assisted living, its dependence on lighting conditions and camera performance is still not investigated enough. In this paper we report on research into various image acquisition aspects, including the lighting spectrum, frame rate and compression. In the experimental part, we recorded five video sequences in various lighting conditions (fluorescent artificial light, dim daylight, infrared light, incandescent light bulb) using a programmable frame-rate camera and a pulse oximeter as the reference. For the video sequence-based heart rate measurement we implemented a pulse detection algorithm based on the power spectral density, estimated using Welch’s technique. The results showed that lighting conditions and selected video camera settings, including compression and the sampling frequency, influence the heart rate detection accuracy. The average heart rate error varies from 0.35 beats per minute (bpm) for fluorescent light to 6.6 bpm for dim daylight.
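
    A hedged sketch of the pulse-detection step described above: the power spectral density of a (here synthetic) skin-intensity signal is estimated with Welch's method, and the dominant peak in the physiological band is taken as the heart rate. The frame rate and band limits are assumptions, not the paper's values.

```python
import numpy as np
from scipy.signal import welch

# Welch-PSD pulse detection on a synthetic ~75 bpm signal buried in noise.
fs = 30.0                                   # assumed camera frame rate
t = np.arange(0, 30, 1 / fs)
signal = np.sin(2 * np.pi * 1.25 * t) + 0.8 * np.random.randn(t.size)

f, psd = welch(signal, fs=fs, nperseg=512)  # averaged periodogram (Welch's method)
band = (f >= 0.7) & (f <= 4.0)              # 42-240 bpm physiological search range
hr_hz = f[band][np.argmax(psd[band])]       # dominant spectral peak in the band
print(f"estimated heart rate: {hr_hz * 60:.1f} bpm")
```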

  5. Performance Analysis of Video PHY Controller Using Unidirection and Bi-directional IO Standard via 7 Series FPGA

    DEFF Research Database (Denmark)

    Das, Bhagwan; Abdullah, M F L; Hussain, Dil muhammed Akbar

    2017-01-01

    graphics consumes more power, which creates a need for a low-power Video PHY controller design. In this paper, the performance of a Video PHY controller is analyzed by comparing the power consumption of the unidirectional and bidirectional IO standards on a 7 series FPGA. It is determined...... that total on-chip power is reduced for the unidirectional IO standard based Video PHY controller compared to the bidirectional IO standard based Video PHY controller. The most significant finding of this work is that the unidirectional IO standard based Video PHY controller consumes the least...... standby power compared to the bidirectional IO standard based Video PHY controller. For a Video PHY controller operating at 6 GHz, total on-chip power is reduced by 32% using the unidirectional IO standard compared to the bidirectional IO standard based Video...

  6. Measuring the Angular Velocity of a Propeller with Video Camera Using Electronic Rolling Shutter

    Directory of Open Access Journals (Sweden)

    Yipeng Zhao

    2018-01-01

    Full Text Available Noncontact measurement of rotational motion has advantages over the traditional method, which measures rotational motion by installing devices on the object, such as a rotary encoder. Cameras can be employed as remote monitoring or inspection sensors to measure the angular velocity of a propeller because of their commonplace availability, simplicity, and potentially low cost. A drawback of measuring with cameras is the massive amount of data they generate, which must be processed. In order to reduce the data collected from the camera, a camera using an ERS (electronic rolling shutter) is applied to measure angular velocities that are higher than the frame rate of the camera. The rolling shutter effect induces geometric distortion in the image when the propeller rotates while an image is being captured. In order to reveal the relationship between the angular velocity and the image distortion, a rotation model has been established. The proposed method was applied to measure the angular velocities of a two-blade propeller and a multiblade propeller. The experimental results showed that this method could detect angular velocities higher than the camera speed, with acceptable accuracy.
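
    A minimal form of such a rotation model (illustrative, not the paper's exact formulation): rows of an ERS sensor are exposed sequentially, so a blade spanning several rows rotates by omega * t_row between consecutive rows, and the apparent angular skew between two rows encodes the angular velocity. The frame period and row count below are assumptions.

```python
import numpy as np

# Rolling-shutter rotation sketch: each sensor row is exposed t_row seconds
# after the previous one, so a rotating blade appears skewed across rows.
T_FRAME = 1 / 30.0            # assumed frame period of the camera
N_ROWS = 480                  # assumed number of sensor rows
t_row = T_FRAME / N_ROWS      # per-row readout time (blanking ignored)

def angular_velocity(theta_top, theta_bottom, row_top, row_bottom):
    """Angular velocity (rad/s) from the blade angle measured at two rows."""
    d_theta = theta_bottom - theta_top       # apparent angular skew of the blade
    d_t = (row_bottom - row_top) * t_row     # time elapsed between the two rows
    return d_theta / d_t

# Example: the blade appears rotated by 0.2 rad between rows 100 and 400.
omega = angular_velocity(0.0, 0.2, 100, 400)
print(f"omega = {omega:.1f} rad/s ({omega / (2 * np.pi):.2f} rev/s)")
```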

  7. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    Science.gov (United States)

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish the intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers, using the GAITRite walkway and a single video camera, between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose-fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.

  8. Reliability of video-based identification of footstrike pattern and video time frame at initial contact in recreational runners

    DEFF Research Database (Denmark)

    Damsted, Camma; Larsen, L H; Nielsen, R.O.

    2015-01-01

    and video time frame at initial contact during treadmill running using two-dimensional (2D) video recordings. METHODS: Thirty-one recreational runners were recorded twice, 1 week apart, with a high-speed video camera. Two blinded raters evaluated each video twice with an interval of at least 14 days....... RESULTS: Kappa values for within-day identification of footstrike pattern revealed intra-rater agreement of 0.83-0.88 and inter-rater agreement of 0.50-0.63. Corresponding figures for between-day identification of footstrike pattern were 0.63-0.69 and 0.41-0.53, respectively. Identification of video time...... in 36% of the identifications (kappa=0.41). The 95% limits of agreement for identification of video time frame at initial contact may, at times, allow for different identification of footstrike pattern. Clinicians should, therefore, be encouraged to continue using clinical 2D video setups for intra...

  9. Game Cinematography: from Camera Control to Player Emotions

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2016-01-01

    Building on the definition of cinematography (Soanes and Stevenson, Oxford dictionary of English. Oxford University Press, Oxford/New York, 2005), game cinematography can be defined as the art of visualizing the content of a computer game. The relationship between game cinematography and its...... traditional counterpart is extremely tight as, in both cases, the aim of cinematography is to control the viewer’s perspective and affect his or her perception of the events represented. However, game events are not necessarily pre-scripted and player interaction has a major role on the quality of a game...... experience; therefore, the role of the camera and the challenges connected to it are different in game cinematography as the virtual camera has to both dynamically react to unexpected events to correctly convey the game story and take into consideration player actions and desires to support her interaction...

  10. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    Science.gov (United States)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to provide help to operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE 1394) and which can basically compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to playback, display, and process video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and IP networking. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology, under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report here some results and show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study, which is indoor surveillance.

  11. Deflection control system for prestressed concrete bridges by CCD camera. CCD camera ni yoru prestressed concrete kyo no tawami kanri system

    Energy Technology Data Exchange (ETDEWEB)

    Noda, Y.; Nakayama, Y.; Arai, T. (Kawada Construction Co. Ltd., Tokyo (Japan))

    1994-03-15

    For long-span prestressed concrete bridges (continuous box girder and cable-stayed bridges), design and construction control become increasingly complicated as construction proceeds because of the cyclic nature of the work. This paper describes the method and operation of an automatic levelling module using a CCD camera, together with experimental results from this system. In this automatic levelling system, the elevation can be measured automatically by locating the center of gravity of a target on the bridge surface with the CCD camera. The deflection control system developed here compares the values measured by the automatic levelling system with the design values obtained from the design calculation system, and manages the difference between them. Long-term, real-time continuous measurement, with the CCD camera set on the bridge surface, showed that stable measurement accuracy can be obtained. Successful application of this system demonstrates that it is an effective and efficient construction aid. 11 refs., 19 figs., 1 tab.

  12. A practical implementation of free viewpoint video system for soccer games

    Science.gov (United States)

    Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki

    2015-03-01

    In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is in high demand; however, a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. A supposed scenario is that soccer games played during the day can be broadcast in 3-D as early as the evening of the same day. Our work is still ongoing; however, we have already developed several techniques to support our goal. First, we captured an actual soccer game at an official stadium, using 20 full-HD professional cameras. Second, we have implemented several tools for free viewpoint video generation, as follows. In order to facilitate free viewpoint video generation, all cameras should be calibrated. We calibrated all cameras using checkerboard images and feature points on the field (cross points of the soccer field lines). We extract each player region from captured images manually. The background region is estimated by observing chrominance changes of each pixel in the temporal domain (automatically). Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercialized TV sets but also for devices such as smartphones. However, the practical system has not yet been completed and our study is still ongoing.
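
    The calibration step mentioned above follows a standard checkerboard pipeline; a minimal sketch with OpenCV is given below. The board geometry and file names are assumptions, and the field-point correspondences used in the paper are omitted.

```python
import glob

import cv2
import numpy as np

# Standard checkerboard calibration sketch (illustrative, per-camera).
CORNERS = (9, 6)                         # inner corners of the assumed board
objp = np.zeros((CORNERS[0] * CORNERS[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:CORNERS[0], 0:CORNERS[1]].T.reshape(-1, 2)  # board frame, unit squares

obj_pts, img_pts = [], []
for path in glob.glob("calib/cam01_*.png"):          # hypothetical capture set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, CORNERS)
    if found:
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                                   (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsics, distortion, and per-view extrinsics from the detected corners.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts,
                                                 gray.shape[::-1], None, None)
print(f"RMS reprojection error: {rms:.3f} px")
```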

  13. FPGA-Based Real-Time Motion Detection for Automated Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2016-03-01

    Full Text Available The design of automated video surveillance systems is one of the exigent missions in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including input camera interface, the designed motion detection VLSI architecture, and output display interface, with real-time relevant motion detection capabilities, has been implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects relevant motion in real time in live PAL (720 × 576) video streams coming directly from the camera.

  14. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and comparison with ISS-LIS and GLM

    Science.gov (United States)

    Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.

    2017-12-01

    Video and still frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, the capabilities of these cameras for contributing to scientific analysis of the Earth and near-space environment have been overlooked. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high-definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), the ISS Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM) and lightning mapping arrays. These cameras provide significant spatial resolution advantages (10 times or better) over ISS-LIS and GLM, but with lower temporal resolution. Therefore, they can serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse-geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features that are below the 4-km and 8-km resolution of ISS-LIS and GLM, which may reduce the light that reaches the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. Characterization of the rate of change in geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS and

  15. Replicas Strategy and Cache Optimization of Video Surveillance Systems Based on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Rongheng Li

    2018-04-01

    Full Text Available With the rapid development of video surveillance technology, especially the popularity of cloud-based video surveillance applications, video data has begun to grow explosively. However, in cloud-based video surveillance systems, replicas occupy a considerable amount of storage space, and the slow response to video playback constrains the performance of the system. In this paper, considering the characteristics of video data comprehensively, we propose a dynamic redundant-replicas mechanism based on security levels that can dynamically adjust the number of replicas. Based on the location correlation between cameras, this paper also proposes a data cache strategy to improve the response speed of data reading. Experiments illustrate that: (1) our dynamic redundant-replicas mechanism can save storage space while ensuring data security; (2) the cache mechanism can predict the playback behaviors of the users in advance and improve the response speed of data reading according to the location and time correlation of the front-end cameras; and (3) in terms of cloud-based video surveillance, our proposed approaches significantly outperform existing methods.

  16. Distributed Sensing and Processing for Multi-Camera Networks

    Science.gov (United States)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  17. Credibility and Authenticity of Digitally Signed Videos in Traffic

    Directory of Open Access Journals (Sweden)

    Ivan Grgurević

    2008-11-01

    Full Text Available The paper presents the possibilities of ensuring the credibility and authenticity of surveillance camera video by digital signing, using the public key infrastructure as part of an interoperable traffic and information system in future intelligent transport systems. The surveillance camera video is a sequence of individual frames, and a unique digital print, i.e. hash value, is calculated for each of these. By encrypting the hash values of the frames using the private encryption key of the surveillance centre, digital signatures are created and stored in the database. The surveillance centre can issue a copy of the video to all interested subjects for scientific and research work and investigation. Regardless of its scope, each subsequent manipulation of the video copy contents will certainly change the hash value of all the frames. The procedure of determining the authenticity and credibility of videos is reduced to the comparison of the hash values of the frames stored in the database of the surveillance centre with the values obtained from the interested subjects such as traffic experts and investigators, surveillance-security services etc.
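
    A minimal sketch of the per-frame scheme described above: hash each frame, sign the digest with the surveillance centre's private key, store both, and later compare recomputed hashes against the stored signed values. The abstract only specifies a public key infrastructure; the Ed25519 primitives and stand-in frame bytes below are assumptions for illustration.

```python
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# One possible instantiation of the signing scheme (illustrative key choice).
centre_key = Ed25519PrivateKey.generate()        # surveillance centre's private key
public_key = centre_key.public_key()

frames = [b"frame-0001", b"frame-0002", b"frame-0003"]   # stand-in raw frame bytes

# Signing: one hash ("digital print") + signature per frame, stored in the database.
database = []
for frame in frames:
    digest = hashlib.sha256(frame).digest()
    database.append((digest, centre_key.sign(digest)))

def is_authentic(frame_copy, stored_digest, stored_sig):
    """Recompute the hash of a disputed copy and check it against the signed record."""
    if hashlib.sha256(frame_copy).digest() != stored_digest:
        return False                             # any manipulation changes the digest
    public_key.verify(stored_sig, stored_digest) # raises if the signature is invalid
    return True

print(is_authentic(frames[0], *database[0]))     # True
print(is_authentic(b"tampered", *database[0]))   # False
```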

  18. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for the EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced wavelet algorithms, and storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high-resolution video images up to 4 Mpixels @ 60 fps or high-frame-rate video images up to about 1000 fps @ 512×512 pixels.

  19. Markerless registration for image guided surgery. Preoperative image, intraoperative video image, and patient

    International Nuclear Information System (INIS)

    Kihara, Tomohiko; Tanaka, Yuko

    1998-01-01

    Real-time and volumetric acquisition in X-ray CT, MR, and SPECT is the latest trend in medical imaging devices. A clinical challenge is to use this multi-modality volumetric information on the patient in a complementary way throughout the diagnostic and surgical processes. Intraoperative image and patient integration intends to establish a common reference frame, by image, across the diagnostic and surgical processes. This provides a quantitative measure during surgery, for which we have relied mostly on doctors' skills and experience. Intraoperative image and patient integration involves various technologies; however, we think one of the most important elements is the development of markerless registration, which should be efficient and applicable to the preoperative multi-modality data sets, the intraoperative image, and the patient. We developed a registration system which integrates preoperative multi-modality images, intraoperative video images, and the patient. It consists of real-time registration of the video camera for intraoperative use, markerless surface-sampling matching of patient and image, our previous work on markerless multi-modality image registration of X-ray CT, MR, and SPECT, and image synthesis on the video image. We think these techniques can be used in many applications involving video-camera-like devices such as video cameras, microscopes, and image intensifiers. (author)

  20. Distributed video data fusion and mining

    Science.gov (United States)

    Chang, Edward Y.; Wang, Yuan-Fang; Rodoplu, Volkan

    2004-09-01

    This paper presents an event sensing paradigm for intelligent event-analysis in a wireless, ad hoc, multi-camera, video surveillance system. In particular, we present statistical methods that we have developed to support three aspects of event sensing: 1) energy-efficient, resource-conserving, and robust sensor data fusion and analysis, 2) intelligent event modeling and recognition, and 3) rapid deployment, dynamic configuration, and continuous operation of the camera networks. We outline our preliminary results, and discuss future directions that research might take.

  1. Video semaphore decoding for free-space optical communication

    Science.gov (United States)

    Last, Matthew; Fisher, Brian; Ezekwe, Chinwuba; Hubert, Sean M.; Patel, Sheetal; Hollar, Seth; Leibowitz, Brian S.; Pister, Kristofer S. J.

    2001-04-01

    Using real-time image processing we have demonstrated a low bit-rate free-space optical communication system at a range of more than 20 km with an average optical transmission power of less than 2 mW. The transmitter is an autonomous one-cubic-inch microprocessor-controlled sensor node with a laser diode output. The receiver is a standard CCD camera with a 1-inch aperture lens, and both hardware and software implementations of the video semaphore decoding algorithm. With this system, sensor data can be reliably transmitted 21 km from San Francisco to Berkeley.

  2. A real-time remote video streaming platform for ultrasound imaging.

    Science.gov (United States)

    Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel

    2016-08-01

    Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill which requires a high degree of training and hands-on experience. However, there is a limited number of skillful sonographers located in remote areas. In this work, we aim to develop a real-time video streaming platform which allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system were evaluated for several ultrasound video resolutions, and the results show that ultrasound videos close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.

  3. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    Full Text Available This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.
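
    The idea behind the first algorithm can be sketched as follows, assuming the motion-vector field has already been extracted from the MPEG-2 bitstream (the parsing itself is omitted): a pan appears as a consistent mean translation across macroblocks, while a zoom appears as vectors diverging from the image centre. The grid size and thresholds below are assumptions.

```python
import numpy as np

# Global camera-motion classification from a per-macroblock vector field.
H, W = 18, 22                                  # assumed macroblock grid
ys, xs = np.mgrid[0:H, 0:W]
centre = np.array([W / 2, H / 2])

def classify_global_motion(mv, pan_thresh=1.0, zoom_thresh=0.05):
    """mv: (H, W, 2) array of per-macroblock motion vectors (dx, dy)."""
    flat = mv.reshape(-1, 2)
    mean = flat.mean(axis=0)                   # translational (pan) component
    # Correlation of vectors with the outward radial direction -> zoom measure.
    radial = np.stack([xs - centre[0], ys - centre[1]], axis=-1).reshape(-1, 2)
    radial = radial / (np.linalg.norm(radial, axis=1, keepdims=True) + 1e-9)
    zoom = (flat * radial).sum(axis=1).mean()
    if abs(zoom) > zoom_thresh * max(W, H):
        return "zoom in" if zoom > 0 else "zoom out"
    if np.linalg.norm(mean) > pan_thresh:
        return f"pan, direction {mean.round(1)}"
    return "static"

pan_field = np.tile(np.array([3.0, 0.0]), (H, W, 1))   # synthetic rightward pan
print(classify_global_motion(pan_field))               # -> pan
```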

  4. An assessment of the effectiveness of high definition cameras as remote monitoring tools for dolphin ecology studies.

    Directory of Open Access Journals (Sweden)

    Estênio Guimarães Paiva

    Full Text Available Research involving marine mammals often requires costly field programs. This paper assessed whether the benefits of using cameras outweigh the implications of having personnel performing marine mammal detection in the field. The efficacy of video and still cameras to detect Indo-Pacific bottlenose dolphins (Tursiops aduncus) in Fremantle Harbour (Western Australia) was evaluated, with consideration of how environmental conditions affect detectability. The cameras were set on a tower in the Fremantle Port channel and videos were perused at 1.75 times the normal speed. Images from the cameras were used to estimate the position of dolphins at the water's surface. Dolphin detections ranged from 5.6 m to 463.3 m for the video camera, and from 10.8 m to 347.8 m for the still camera. Detection range proved satisfactory when compared to distances at which dolphins would be detected by field observers. The relative effect of environmental conditions on detectability was considered by fitting a Generalised Estimating Equations (GEE) model with Beaufort, level of glare and their interactions as predictors, and a temporal auto-correlation structure. The best-fit model indicated that level of glare had an effect, with more intense periods of glare corresponding to lower occurrences of observed dolphins. However, this effect was not large (-0.264) and the parameter estimate was associated with a large standard error (0.113). The limited field of view was the main constraint, in that cameras can only be applied to detections of observed animals rather than counts of individuals. However, the use of cameras was effective for long-term monitoring of the occurrence of dolphins, outweighing the costs and reducing the health and safety risks to field personnel. This study showed that cameras could be effectively implemented onshore for research such as studying changes in habitat use in response to development and construction activities.

  5. Measurement and protocol for evaluating video and still stabilization systems

    Science.gov (United States)

    Cormier, Etienne; Cao, Frédéric; Guichard, Frédéric; Viard, Clément

    2013-01-01

    This article presents a system and a protocol to characterize image stabilization systems, both for still images and for videos. It uses a six-axis platform, three axes being used for camera rotation and three for camera positioning. The platform is programmable and can reproduce complex motions that have typically been recorded by a gyroscope mounted on different types of cameras in different use cases. The measurement uses a single chart for still images and videos: the texture dead leaves chart. Although the proposed implementation of the protocol uses a motion platform, the measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured in different directions and is weighted by a contrast sensitivity function (simulating the accuracy of the human visual system) to obtain an acutance. The sharpness improvement due to the image stabilization system is a good measurement of performance, as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel accuracy to determine a homographic deformation between the current frame and a reference position. This model describes the apparent global motion well: translations, but also rotations along the optical axis and distortion due to the electronic rolling shutter equipping most CMOS sensors. The protocol is applied to all types of cameras, such as DSCs, DSLRs and smartphones.
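
    The video measurement described above reduces to estimating a homography from four marker correspondences. A hedged sketch follows (marker detection itself is omitted, and the point values are illustrative, not from the article):

```python
import cv2
import numpy as np

# Homography between the reference position and the current frame from four
# chart markers; its parameters capture apparent translation, rotation about
# the optical axis, and rolling-shutter shear.
ref_pts = np.array([[100, 100], [500, 100], [500, 400], [100, 400]], np.float32)
cur_pts = np.array([[104,  98], [504, 102], [502, 403], [102, 399]], np.float32)  # assumed detections

H, _ = cv2.findHomography(ref_pts, cur_pts)       # 4 correspondences are exactly enough

tx, ty = H[0, 2], H[1, 2]                         # apparent global translation
theta = np.degrees(np.arctan2(H[1, 0], H[0, 0]))  # rotation about the optical axis
print(f"translation ({tx:.2f}, {ty:.2f}) px, rotation {theta:.2f} deg")
```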

  6. Cellphones in Classrooms Land Teachers on Online Video Sites

    Science.gov (United States)

    Honawar, Vaishali

    2007-01-01

    Videos of teachers that students taped in secrecy are all over online sites like YouTube and MySpace. Angry teachers, enthusiastic teachers, teachers clowning around, singing, and even dancing are captured, usually with camera phones, for the whole world to see. Some students go so far as to create elaborately edited videos, shot over several…

  7. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    Energy Technology Data Exchange (ETDEWEB)

    WERRY, S.M.

    2000-03-23

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  8. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    International Nuclear Information System (INIS)

    WERRY, S.M.

    2000-01-01

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151

  9. vid113_0401r -- Video groundtruthing collected from RV Tatoosh during August 2005.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  10. Towards real-time remote processing of laparoscopic video

    Science.gov (United States)

    Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.

    2015-03-01

    Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and small tools to perform surgical procedures. However, the benefit of small incisions has a drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivering of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). The video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, it is required that each 11.9 MB video frame be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real-time is essential for performance of complex tasks as well as minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software defined network that is capable of connecting to multiple remote medical facilities and HPC servers.

  11. Assessment of Machine Learning Algorithms for Automatic Benthic Cover Monitoring and Mapping Using Towed Underwater Video Camera and High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Hassan Mohamed

    2018-05-01

    Full Text Available Benthic habitat monitoring is essential for many applications involving biodiversity, marine resource management, and the estimation of variations over temporal and spatial scales. Nevertheless, both automatic and semi-automatic analytical methods for deriving ecologically significant information from towed camera images are still limited. This study proposes a methodology that enables a high-resolution towed camera with a Global Navigation Satellite System (GNSS) to adaptively monitor and map benthic habitats. First, the towed camera completes a pre-programmed initial survey to collect benthic habitat videos, which can then be converted to geo-located benthic habitat images. Second, an expert labels a number of benthic habitat images to classify habitats manually. Third, attributes for categorizing these images are extracted automatically using the Bag of Features (BOF) algorithm. Fourth, benthic cover categories are detected automatically using Weighted Majority Voting (WMV) ensembles of Support Vector Machine (SVM), K-Nearest Neighbor (K-NN), and Bagging (BAG) classifiers. Fifth, the trained WMV ensembles can be used to categorize further benthic cover images automatically. Finally, correctly categorized geo-located images can provide ground truth samples for benthic cover mapping using high-resolution satellite imagery. The proposed methodology was tested over Shiraho, Ishigaki Island, Japan, a heterogeneous coastal area. The WMV ensemble exhibited 89% overall accuracy for categorizing corals, sediments, seagrass, and algae species. Furthermore, the same WMV ensemble produced a benthic cover map using a Quickbird satellite image with 92.7% overall accuracy.
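
    The ensemble step can be sketched with scikit-learn stand-ins, where X would hold the Bag-of-Features vectors of the labelled habitat images. The synthetic four-class data and the voting weights below are assumptions for illustration, not the paper's values.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic 4-class problem standing in for corals/sediments/seagrass/algae.
X, y = make_classification(n_samples=600, n_features=50, n_informative=20,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Weighted majority voting over SVM, K-NN, and Bagging base classifiers.
ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("bag", BaggingClassifier(n_estimators=50, random_state=0))],
    voting="soft",
    weights=[2, 1, 1],   # e.g. weight votes by each classifier's validation accuracy
)
ensemble.fit(X_tr, y_tr)
print(f"overall accuracy: {ensemble.score(X_te, y_te):.3f}")
```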

  12. APPLYING CCD CAMERAS IN STEREO PANORAMA SYSTEMS FOR 3D ENVIRONMENT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    A. Sh. Amini

    2012-07-01

    Full Text Available Proper reconstruction of 3D environments is nowadays needed by many organizations and applications. In addition to conventional methods, the use of stereo panoramas is an appropriate technique due to its simplicity, low cost, and the ability to view an environment the way it is in reality. This paper investigates the applicability of stereo CCD cameras for 3D reconstruction and presentation of the environment, and for geometric measurement within it. For this purpose, a rotating stereo panorama was established using two CCDs with a base-length of 350 mm and a DVR (digital video recorder) box. The stereo system was first calibrated using a 3D test-field and then used to perform accurate measurements. The results of investigating the system in a real environment showed that although this kind of camera produces noisy images and does not have appropriate geometric stability, the cameras can be easily synchronized and well controlled, and reasonable accuracy (about 40 mm for objects at 12 m distance from the camera) can be achieved.
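
    A rough sketch of the depth precision such a rig can reach, using the standard stereo error-propagation formula; the focal length and matching precision below are assumptions chosen to show consistency with the ~40 mm figure above, not values from the paper.

```python
# Depth uncertainty of a stereo rig: sigma_Z = Z^2 * sigma_d / (f * B).
B = 0.350        # baseline between the two CCDs, m (from the abstract)
Z = 12.0         # object distance, m (from the abstract)
f_px = 1500.0    # focal length in pixels (assumed)
sigma_d = 0.15   # sub-pixel disparity matching precision, pixels (assumed)

# Depth error grows with the square of the distance Z.
sigma_z = (Z ** 2) * sigma_d / (f_px * B)
print(f"expected depth uncertainty at {Z} m: {sigma_z * 1000:.0f} mm")  # ~41 mm
```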

  13. Playing with the Camera - Creating with Each Other

    DEFF Research Database (Denmark)

    Vestergaard, Vitus

    2015-01-01

    Many contemporary museums try to involve users as active participants in a range of new ways. One way to engage young people with visual culture is through exhibits where users produce their own videos. Since museum experiences are largely social in nature and based on the group as a social unit, it is imperative to investigate how museum users in a group create videos and engage with each other and the exhibits. Based on research on young users creating videos in the Media Mixer, this article explores what happens during the creative process in front of a camera. Drawing upon theories of museology, media, learning and creativity, the article discusses how to operationalize and make sense of seemingly chaotic or banal production processes in a museum.

  14. Vehicular camera pedestrian detection research

    Science.gov (United States)

    Liu, Jiahui

    2018-03-01

    With the rapid development of science and technology, highway traffic and transportation have become far more convenient; at the same time, however, traffic safety accidents occur more and more frequently in China. Protecting people's personal safety and property while facilitating travel has therefore become a top priority. Real-time, accurate information about pedestrians and the driving environment is obtained through a vehicular camera, which is used to detect and track moving targets ahead of the vehicle. This topic is popular in the domains of intelligent vehicle safety, autonomous navigation and traffic system research. Based on pedestrian video obtained by a vehicular camera, this paper studies pedestrian detection and tracking and their algorithms.
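
    The record does not name the detection algorithm, so as an illustration, here is a classic HOG plus linear-SVM pedestrian detector (OpenCV's built-in people model) applied to frames from a vehicle-mounted camera; the input file name is hypothetical.

```python
import cv2

# OpenCV's pretrained HOG + linear-SVM people detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("vehicular_camera.mp4")   # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Multi-scale sliding-window detection; returns pedestrian bounding boxes.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) == 27:                     # Esc to quit
        break
cap.release()
```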

  15. Distributed embedded smart cameras architectures, design and applications

    CERN Document Server

    Velipasalar, Senem

    2014-01-01

    This publication addresses distributed embedded smart cameras: cameras that perform onboard analysis and collaborate with other cameras. This book provides the material required to better understand the architectural design challenges of embedded smart camera systems, the hardware/software ecosystem, the design approach for, and applications of distributed smart cameras together with the state-of-the-art algorithms. The authors concentrate on the architecture, hardware/software design, and realization of smart camera networks from applications to architectures, in particular in the embedded and mobile domains.
    • Examines energy issues related to wireless communication, such as decreasing energy consumption to increase battery life
    • Discusses processing large volumes of video data in an embedded environment in real time
    • Covers design of realistic applications of distributed and embedded smart...

  16. Cross-Layer Design of Source Rate Control and Congestion Control for Wireless Video Streaming

    Directory of Open Access Journals (Sweden)

    Peng Zhu

    2007-01-01

    Full Text Available Cross-layer design has been used in streaming video over wireless channels to optimize the overall system performance. In this paper, we extend our previous work on the joint design of source rate control and congestion control for video streaming over the wired channel, and propose a cross-layer design approach for wireless video streaming. First, we extend the QoS-aware congestion control mechanism (TFRCC) proposed in our previous work to the wireless scenario, and provide a detailed discussion about how to enhance the overall performance in terms of rate smoothness and responsiveness of the transport protocol. Then, we extend our previous joint design work to the wireless scenario, and a thorough performance evaluation is conducted to investigate its performance. Simulation results show that by cross-layer design of source rate control at the application layer and congestion control at the transport layer, and by taking advantage of MAC layer information, our approach can avoid the throughput degradation caused by wireless link error, and better support the QoS requirements of the application. Thus, the playback quality is significantly improved, while good performance of the transport protocol is still preserved.
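
    The TFRCC mechanism above extends TCP-friendly rate control; as background, a minimal sketch of the standard TFRC throughput equation (RFC 5348) on which such schemes are built. The paper's wireless extensions are not reproduced here.

```python
from math import sqrt

def tfrc_rate(s, rtt, p, t_rto=None, b=1):
    """Allowed TFRC sending rate in bytes/s (RFC 5348 throughput equation).
    s: packet size (bytes), rtt: round-trip time (s),
    p: loss event rate, b: packets acknowledged per ACK."""
    t_rto = t_rto if t_rto is not None else 4 * rtt   # usual simplification
    denom = rtt * sqrt(2 * b * p / 3) \
        + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2)
    return s / denom

# Example: 1200-byte packets, 100 ms RTT, 1% loss -> allowed rate in kB/s.
print(tfrc_rate(1200, 0.1, 0.01) / 1000)   # ~135 kB/s
```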

  17. Teasing Apart Complex Motions using VideoPoint

    Science.gov (United States)

    Fischer, Mark

    2002-10-01

    Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the 2-dimensional motion of an object as filmed by a camera that is moving and rotating in the same plane will be discussed. Methods for extracting the desired object motion will be given as well as suggestions for shooting more easily analyzable video clips.

  18. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    Science.gov (United States)

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
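
    As an illustration of descriptor-based alignment on exposure-normalized images, a sketch under stated assumptions: a linear sensor response, ORB descriptors standing in for the paper's unspecified descriptor choice, and a homography motion model.

```python
import cv2
import numpy as np

def to_log_radiance(img_gray, exposure_s):
    # Assumes a linear (calibrated) sensor response for simplicity.
    return np.log1p(img_gray.astype(np.float32) / exposure_s)

def align(img_a, exp_a, img_b, exp_b):
    # Normalize both exposures to comparable log-radiance images first.
    a = cv2.normalize(to_log_radiance(img_a, exp_a), None, 0, 255,
                      cv2.NORM_MINMAX).astype(np.uint8)
    b = cv2.normalize(to_log_radiance(img_b, exp_b), None, 0, 255,
                      cv2.NORM_MINMAX).astype(np.uint8)
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(a, None)
    kp_b, des_b = orb.detectAndCompute(b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H   # warp img_a onto img_b with cv2.warpPerspective(img_a, H, ...)
```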

  19. Content Area Vocabulary Videos in Multiple Contexts: A Pedagogical Tool

    Science.gov (United States)

    Webb, C. Lorraine; Kapavik, Robin Robinson

    2015-01-01

    The authors challenged pre-service teachers to digitally define a social studies or mathematical vocabulary term in multiple contexts using a digital video camera. The researchers sought to answer the following questions: 1. How will creating a video for instruction affect pre-service teachers' attitudes about teaching with technology, if at all?…

  20. The design of red-blue 3D video fusion system based on DM642

    Science.gov (United States)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    To address the uncertainty of traditional 3D video capture, including camera focal lengths and the distance and angle parameters between two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. To counter the brightness reduction typical of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, along with a luminance component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The video processing circuit built around the DM642 enhances the brightness of the images, converts the video signals from YCbCr to RGB, and extracts the R component from one camera while the G and B components are extracted synchronously from the other, finally outputting the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding red and blue components, the system reduces the loss of chrominance and keeps the picture's color saturation above 95% of the original. The enhancement algorithm is optimized to reduce the amount of data processed during fusion, shortening the fusion time and improving the viewing effect. Experimental results show that the system can capture images at near distance, output red-blue 3D video, and provide a pleasant experience for audiences wearing red-blue glasses.
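
    A minimal NumPy sketch of the red-blue fusion step (R from the left camera, G and B from the right); the brightness-enhancement stage and the DM642 implementation details are omitted.

```python
import numpy as np

def fuse_red_blue(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """left_rgb, right_rgb: HxWx3 uint8 frames from the two parallel cameras."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]    # R component from the left view
    anaglyph[..., 1] = right_rgb[..., 1]   # G component from the right view
    anaglyph[..., 2] = right_rgb[..., 2]   # B component from the right view
    return anaglyph
```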

  1. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2016-03-01

    Full Text Available In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera that makes it easy to control the robot's actions and estimate the 3D position of a target. In this proposal, the mobile robot employs an Arduino Yun as the core processor and is remote-controlled by a tablet running the Android operating system. In addition, the robot is fitted with a three-axis robotic arm for grasping. Both the real-time control signals and the video are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
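
    A sketch of the kind of analytic-geometry estimate described, assuming a calibrated camera at a known height viewing the floor plane; all numeric values are illustrative, not from the paper.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],     # assumed intrinsics (fx, fy, cx, cy)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
CAM_HEIGHT = 0.20                      # camera height above the floor, m (assumed)

def pixel_to_ground(u, v):
    """3D position (camera frame, x right / y down / z forward) of a clicked
    pixel, as the intersection of its viewing ray with the ground plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    if ray[1] <= 0:
        raise ValueError("pixel does not look below the horizon")
    t = CAM_HEIGHT / ray[1]            # scale the ray until it reaches the floor
    return t * ray                     # (X, Y, Z) of the target

print(pixel_to_ground(400, 300))       # e.g. [0.267, 0.2, 2.667] metres
```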

  2. Evaluation of stereoscopic video cameras synchronized with the movement of an operator's head on the teleoperation of the actual backhoe shovel

    Science.gov (United States)

    Minamoto, Masahiko; Matsunaga, Katsuya

    1999-05-01

    Operator performance while using a remote-controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet-mounted display (HMD), and a rotating stereo camera slaved to the head orientation of a freely moving stereo HMD. Results showed that the head-slaved system provided the best performance.

  3. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Science.gov (United States)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

    Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video of exposed human skin and extracts the heart-rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart-rate, respiratory rate, and even heart rate variability, with applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart-rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size and averaging techniques applied to regions-of-interest, as well as the number of video frames used for data processing.
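
    A minimal sketch of the core VPPG computation, assuming a precomputed region-of-interest trace; this illustrates the general technique, not either of the two published algorithms evaluated in the study.

```python
import numpy as np

def estimate_hr(green_means: np.ndarray, fps: float) -> float:
    """HR in bpm from the mean green value of a skin ROI per frame.
    Face tracking, detrending and filtering are omitted for brevity."""
    signal = green_means - green_means.mean()
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)      # plausible HR band: 42-240 bpm
    return 60.0 * freqs[band][np.argmax(power[band])]

# e.g. 30 s of video at 30 fps -> estimate_hr(roi_green_trace, fps=30.0)
```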

  4. Analysis of Camera Arrays Applicable to the Internet of Things.

    Science.gov (United States)

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-03-22

    The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used and analyzed in various applications and research work, there are few comparisons between them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used as either a parallel or a converged camera array, and take images and videos with it to verify the threshold.

  5. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yasaman Samei

    2008-08-01

    Full Text Available Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, together with the availability of CMOS cameras, microphones and small-scale array sensors that can ubiquitously capture multimedia content from the field, has fostered the development of low-cost, limited-resource Wireless Video-based Sensor Networks (WVSN). Given the constraints of video-based sensor nodes and wireless sensor networks, supporting a video stream is not easy with present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and accounts for the constraints of wireless video sensor nodes, such as limited processing and energy resources, while video quality is preserved on the receiver side. The compression protocol, transport protocol, and routing protocol are proposed in the application, transport, and network layers respectively; a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  6. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks.

    Science.gov (United States)

    Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman

    2008-08-04

    Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, together with the availability of CMOS cameras, microphones and small-scale array sensors that can ubiquitously capture multimedia content from the field, has fostered the development of low-cost, limited-resource Wireless Video-based Sensor Networks (WVSN). Given the constraints of video-based sensor nodes and wireless sensor networks, supporting a video stream is not easy with present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and accounts for the constraints of wireless video sensor nodes, such as limited processing and energy resources, while video quality is preserved on the receiver side. The compression protocol, transport protocol, and routing protocol are proposed in the application, transport, and network layers respectively; a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  7. Opto-mechanical design of the G-CLEF flexure control camera system

    Science.gov (United States)

    Oh, Jae Sok; Park, Chan; Kim, Jihun; Kim, Kang-Min; Chun, Moo-Young; Yu, Young Sam; Lee, Sungho; Nah, Jakyoung; Park, Sung-Joon; Szentgyorgyi, Andrew; McMuldroch, Stuart; Norton, Timothy; Podgorski, William; Evans, Ian; Mueller, Mark; Uomoto, Alan; Crane, Jeffrey; Hare, Tyson

    2016-08-01

    The GMT-Consortium Large Earth Finder (G-CLEF) is the first-light instrument of the Giant Magellan Telescope (GMT). G-CLEF is a fiber-fed, optical-band echelle spectrograph capable of extremely precise radial velocity measurement. KASI (Korea Astronomy and Space Science Institute) is responsible for the Flexure Control Camera (FCC) included in the G-CLEF Front End Assembly (GCFEA). The FCC is a kind of guide camera, which monitors the field images focused on a fiber mirror to control the flexure and focus errors within the GCFEA. The FCC consists of five optical components: a collimator, including triple lenses, for producing a pupil; neutral density filters that allow a much brighter star to be used as a target or a guide; a tent prism as a focus analyzer for measuring the focus offset at the fiber mirror; a reimaging camera with three pairs of lenses for focusing the beam on a CCD focal plane; and a CCD detector for capturing the image on the fiber mirror. In this article, we present the optical and mechanical FCC designs, which have been modified since the PDR in April 2015.

  8. A video Hartmann wavefront diagnostic that incorporates a monolithic microlens array

    International Nuclear Information System (INIS)

    Toeppen, J.S.; Bliss, E.S.; Long, T.W.; Salmon, J.T.

    1991-07-01

    We have developed a video Hartmann wavefront sensor that incorporates a monolithic array of photofabricated microlenses as the focusing elements. Combined with a video processor, this system reveals local gradients of the wavefront at a video frame rate of 30 Hz. Higher bandwidth is easily attainable with a camera and video processor that have faster frame rates. When used with a temporal filter, the reconstructed wavefront error is less than 1/10th wave.
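
    A sketch of the slope computation behind such a sensor, assuming spot centroids have already been extracted from the camera frames; the pixel pitch and lenslet focal length are illustrative values, not from the report.

```python
import numpy as np

def wavefront_slopes(centroids, references, focal_length, pixel_pitch):
    """Local wavefront gradients per lenslet.
    centroids, references: (N, 2) spot positions in pixels.
    Returns (N, 2) slopes in radians (small-angle approximation)."""
    displacement_m = (np.asarray(centroids) - np.asarray(references)) * pixel_pitch
    return displacement_m / focal_length     # slope = spot shift / lenslet focal length

# Illustrative values: 5 um pixels, 5 mm lenslet focal length;
# a spot shifted by 2 pixels gives a local slope of 2e-3 rad.
slopes = wavefront_slopes([[102.0, 50.0]], [[100.0, 50.0]], 5e-3, 5e-6)
print(slopes)   # [[0.002, 0.0]]
```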

  9. Real-time multi-camera video acquisition and processing platform for ADAS

    Science.gov (United States)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
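
    A host-side sketch of the fish-eye correction step, using OpenCV's fisheye model; the intrinsics and distortion coefficients are assumed calibration results, and the paper's FPGA pre-processor implements an equivalent mapping in hardware.

```python
import cv2
import numpy as np

K = np.array([[300.0, 0.0, 320.0],     # assumed fisheye intrinsics, VGA sensor
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.05, 0.01, 0.0, 0.0])  # assumed distortion coefficients

# Precompute the undistortion maps once; remap each frame in the video loop.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (640, 480), cv2.CV_16SC2)

def undistort(frame):
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```

    Precomputing the maps is the standard trick for real-time pipelines: the per-frame cost reduces to a single remap, which is also what a hardware pre-processor effectively bakes into its lookup tables.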

  10. Prevalence of Behavior Changing Strategies in Fitness Video Games: Theory-Based Content Analysis

    Science.gov (United States)

    Hatkevich, Claire

    2013-01-01

    Background Fitness video games are popular, but little is known about their content. Because many contain interactive tools that mimic behavioral strategies from weight loss intervention programs, it is possible that differences in content could affect player physical activity and/or weight outcomes. There is a need for a better understanding of what behavioral strategies are currently available in fitness games and how they are implemented. Objective The purpose of this study was to investigate the prevalence of evidence-based behavioral strategies across fitness video games available for home use. Games available for consoles that used camera-based controllers were also contrasted with games available for a console that used handheld motion controllers. Methods Fitness games (N=18) available for three home consoles were systematically identified and play-tested by 2 trained coders for at least 3 hours each. In cases of multiple games from one series, only the most recently released game was included. The Sony PlayStation 3 and Microsoft Xbox360 were the two camera-based consoles, and the Nintendo Wii was the handheld motion controller console. A coding list based on a taxonomy of behavioral strategies was used to begin coding. Codes were refined in an iterative process based on data found during play-testing. Results The most prevalent behavioral strategies were modeling (17/18), specific performance feedback (17/18), reinforcement (16/18), caloric expenditure feedback (15/18), and guided practice (15/18). All games included some kind of feedback on performance accuracy, exercise frequency, and/or fitness progress. Action planning (scheduling future workouts) was the least prevalent of the included strategies (4/18). Twelve games included some kind of social integration, with nine of them providing options for real-time multiplayer sessions. Only two games did not feature any kind of reward. Games for the camera-based consoles (mean 12.89, SD 2.71) included a

  11. Advanced methods for image registration applied to JET videos

    Energy Technology Data Exchange (ETDEWEB)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)

    2015-10-15

    Graphical abstract: - Highlights: • Development of an image registration method for JET IR and fast visible cameras. • Method based on SIFT descriptors and the coherent point drift point-set registration technique. • Method able to deal with extremely noisy and very low luminosity images. • Computation time compatible with inter-shot analysis. - Abstract: The last years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera-based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses, while large ones may arise during disruptions. Some cameras show a correlation of image movement with changes in magnetic field strength. For deriving unaltered information from the videos and for allowing correct interpretation, an image registration method based on highly distinctive scale invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) point-set registration technique has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibration correction to videos collected by the JET wide angle infrared camera and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved able to deal with the images provided by this camera, which are frequently characterized by low contrast and a high level of blurring and noise.
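
    A simplified sketch of the feature-matching stage, with plain RANSAC standing in for the paper's CPD registration and its outlier-rejection procedure.

```python
import cv2
import numpy as np

def estimate_motion(ref_gray, frame_gray):
    """Rigid-ish motion (rotation + translation + scale) between two frames."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_gray, None)
    kp2, des2 = sift.detectAndCompute(frame_gray, None)
    # Lowe's ratio test discards ambiguous correspondences.
    knn = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Robust partial-affine fit; inliers mask flags surviving matches.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M   # register the frame with cv2.warpAffine(frame, M, ...)
```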

  12. Intelligent control for scalable video processing

    NARCIS (Netherlands)

    Wüst, C.C.

    2006-01-01

    In this thesis we study a problem related to cost-effective video processing in software by consumer electronics devices, such as digital TVs. Video processing is the task of transforming an input video signal into an output video signal, for example to improve the quality of the signal. This

  13. Principal axis-based correspondence between multiple cameras for people tracking.

    Science.gov (United States)

    Hu, Weiming; Hu, Min; Zhou, Xue; Tan, Tieniu; Lou, Jianguang; Maybank, Steve

    2006-04-01

    Visual surveillance using multiple cameras has attracted increasing interest in recent years. Correspondence between multiple cameras is one of the most important and basic problems which visual surveillance using multiple cameras brings. In this paper, we propose a simple and robust method, based on principal axes of people, to match people across multiple cameras. The correspondence likelihood reflecting the similarity of pairs of principal axes of people is constructed according to the relationship between "ground-points" of people detected in each camera view and the intersections of principal axes detected in different camera views and transformed to the same view. Our method has the following desirable properties: 1) Camera calibration is not needed. 2) Accurate motion detection and segmentation are less critical due to the robustness of the principal axis-based feature to noise. 3) Based on the fused data derived from correspondence results, positions of people in each camera view can be accurately located even when the people are partially occluded in all views. The experimental results on several real video sequences from outdoor environments have demonstrated the effectiveness, efficiency, and robustness of our method.

  14. Using the OOI Cabled Array HD Camera to Explore Geophysical and Oceanographic Problems at Axial Seamount

    Science.gov (United States)

    Crone, T. J.; Knuth, F.; Marburg, A.

    2016-12-01

    A broad array of Earth science problems can be investigated using high-definition video imagery from the seafloor, ranging from those that are geological and geophysical in nature, to those that are biological and water-column related. A high-definition video camera was installed as part of the Ocean Observatory Initiative's core instrument suite on the Cabled Array, a real-time fiber optic data and power system that stretches from the Oregon Coast to Axial Seamount on the Juan de Fuca Ridge. This camera runs a 14-minute pan-tilt-zoom routine 8 times per day, focusing on locations of scientific interest on and near the Mushroom vent in the ASHES hydrothermal field inside the Axial caldera. The system produces 13 GB of lossless HD video every 3 hours, and at the time of this writing it has generated 2100 recordings totaling 28.5 TB since it began streaming data into the OOI archive in August of 2015. Because of the large size of this dataset, downloading the entirety of the video for long timescale investigations is not practical. We are developing a set of user-side tools for downloading single frames and frame ranges from the OOI HD camera raw data archive to aid users interested in using these data for their research. We use these tools to download about one year's worth of partial frame sets to investigate several questions regarding the hydrothermal system at ASHES, including the variability of bacterial "floc" in the water-column, and changes in high temperature fluid fluxes using optical flow techniques. We show that while these user-side tools can facilitate rudimentary scientific investigations using the HD camera data, a server-side computing environment that allows users to explore this dataset without downloading any raw video will be required for more advanced investigations to flourish.

  15. Measurement of the Dynamic Displacements of Railway Bridges Using Video Technology

    Directory of Open Access Journals (Sweden)

    Ribeiro Diogo

    2015-01-01

    Full Text Available This article describes the development of a non-contact dynamic displacement measurement system for railway bridges based on video technology. The system, consisting of a high-speed video camera, an optical lens, lighting lamps and a precision target, can perform measurements with high precision for distances from the camera to the target of up to 25 m, with acquisition frame rates ranging from 64 fps to 500 fps, and can be articulated with other measurement systems, which promotes its integration in structural health monitoring systems. The system’s performance was evaluated based on two tests, one in the laboratory and the other in the field. The laboratory test evaluated the performance of the system in measuring the displacement of a steel beam subjected to a dynamically applied point load, for distances from the camera to the target between 3 m and 15 m. The field test evaluated the system’s performance in the dynamic measurement of the displacement of a point on the deck of a railway bridge, induced by passing trains at speeds between 160 km/h and 180 km/h, for distances from the camera to the target of up to 25 m. The results of both tests show a very good agreement between the displacement measurements obtained with the video system and with an LVDT.
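
    A sketch of the core measurement in such a system: track the precision target by normalized cross-correlation template matching and convert the pixel shift to millimetres using a scale factor from the known target size; all parameters here are illustrative assumptions.

```python
import cv2

MM_PER_PIXEL = 0.1   # scale factor from the known physical target size (assumed)

def target_displacement(template_gray, frame_gray, origin_xy):
    """Returns the (dx, dy) displacement of the target in millimetres.
    template_gray: reference image of the target; origin_xy: its position
    (top-left corner, pixels) in the undeformed reference frame."""
    # Normalized cross-correlation of the target template against the frame.
    scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCORR_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)
    dx_px = max_loc[0] - origin_xy[0]
    dy_px = max_loc[1] - origin_xy[1]
    # Sub-pixel refinement (e.g. paraboloid fit around the peak) is omitted.
    return dx_px * MM_PER_PIXEL, dy_px * MM_PER_PIXEL
```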

  16. Creating personalized memories from social events: Community-based support for multi-camera recordings of school concerts

    OpenAIRE

    Guimaraes R.L.; Cesar P.; Bulterman D.C.A.; Zsombori V.; Kegel I.

    2011-01-01

    The wide availability of relatively high-quality cameras makes it easy for many users to capture video fragments of social events such as concerts, sports events or community gatherings. The wide availability of simple sharing tools makes it nearly as easy to upload individual fragments to on-line video sites. Current work on video mashups focuses on the creation of a video summary based on the characteristics of individual media fragments, but it fails to address the interpersona...

  17. The 3D Human Motion Control Through Refined Video Gesture Annotation

    Science.gov (United States)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces such as the motion-sensing remote controllers of the Nintendo Wii [1]. Video-based human computer interaction (HCI) techniques in particular have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from unwieldy game controllers. Moreover, video-based HCI is crucial for communication between humans and computers, since it is intuitive, easy to use, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge: the level of accuracy depends strongly on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions for specific performance (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which a column corresponds to a sub-body part and a row represents a time frame of the capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike the low-level feature values of video human motion, 3D motion-capture data are not pixel values, but are closer to the human level of semantics.

  18. Bring your own camera to the trap: An inexpensive, versatile, and portable triggering system tested on wild hummingbirds.

    Science.gov (United States)

    Rico-Guevara, Alejandro; Mickley, James

    2017-07-01

    The study of animals in the wild offers opportunities to collect relevant information on their natural behavior and abilities to perform ecologically relevant tasks. However, it also poses challenges such as accounting for observer effects, human sensory limitations, and the time intensiveness of this type of research. To meet these challenges, field biologists have deployed camera traps to remotely record animal behavior in the wild. Despite their ubiquity in research, many commercial camera traps have limitations, and the species and behavior of interest may present unique challenges. For example, no camera traps support high-speed video recording. We present a new and inexpensive camera trap system that increases versatility by separating the camera from the triggering mechanism. Our system design can pair with virtually any camera and allows for independent positioning of a variety of sensors, all while being low-cost, lightweight, weatherproof, and energy efficient. By using our specialized trigger and customized sensor configurations, many limitations of commercial camera traps can be overcome. We use this system to study hummingbird feeding behavior using high-speed video cameras to capture fast movements and multiple sensors placed away from the camera to detect small body sizes. While designed for hummingbirds, our application can be extended to any system where specialized camera or sensor features are required, or commercial camera traps are cost-prohibitive, allowing camera trap use in more research avenues and by more researchers.

  19. Heterogeneous CPU-GPU moving targets detection for UAV video

    Science.gov (United States)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras carried by UAVs. The pixels belonging to moving targets in HD video taken by a UAV are always a small minority, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of detection algorithms prevents running them at the full resolution of the frame. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough to solve the problem.
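
    A CPU-only sketch of the background-registration plus frame-difference pipeline, without the paper's CPU-GPU partitioning; thresholds and feature counts are illustrative.

```python
import cv2
import numpy as np

def moving_target_mask(prev_gray, curr_gray):
    # Track sparse corners to estimate the global (background) motion.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, 400, 0.01, 10)
    pts_curr, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                               pts_prev, None)
    good = st.ravel() == 1
    H, _ = cv2.findHomography(pts_prev[good], pts_curr[good], cv2.RANSAC, 3.0)
    # Warp the previous frame into the current frame's coordinates, cancelling
    # the UAV's ego-motion (background registration).
    h, w = curr_gray.shape
    warped = cv2.warpPerspective(prev_gray, H, (w, h))
    # Residual differences are candidate moving targets.
    diff = cv2.absdiff(curr_gray, warped)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    return mask
```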

  20. Detecting fire in video stream using statistical analysis

    Directory of Open Access Journals (Sweden)

    Koplík Karel

    2017-01-01

    Full Text Available Real-time fire detection in video streams is one of the most interesting problems in computer vision. In many cases it would be desirable to have a fire detection algorithm implemented in ordinary industrial cameras, or to be able to replace standard industrial cameras with ones implementing such an algorithm. In this paper, we present a new algorithm for detecting fire in video. The algorithm is based on tracking suspicious regions in time and statistically analyzing their trajectories. False alarms are minimized by combining multiple detection criteria: pixel brightness, the trajectories of suspicious regions for evaluating the characteristic fire flickering, and the persistence of the alarm state across a sequence of frames. The resulting implementation is fast and can therefore run on a wide range of affordable hardware.

  1. Video game practice optimizes executive control skills in dual-task and task switching situations.

    Science.gov (United States)

    Strobach, Tilo; Frensch, Peter A; Schubert, Torsten

    2012-05-01

    We examined the relation between action video game practice and the optimization of executive control skills that are needed to coordinate two different tasks. As action video games resemble real-life situations, are complex in nature, and include numerous concurrent actions, they may generate an ideal environment for practicing these skills (Green & Bavelier, 2008). In two types of experimental paradigms, dual-task and task switching, we obtained performance advantages for experienced video gamers compared to non-gamers in situations in which two different tasks were processed simultaneously or sequentially. This advantage was absent in single-task situations. These findings indicate optimized executive control skills in video gamers. Similar findings in non-gamers after 15 h of action video game practice, when compared to non-gamers who practiced a puzzle game, clarified the causal relation between video game practice and the optimization of executive control skills. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Automatic Quadcopter Control Avoiding Obstacle Using Camera with Integrated Ultrasonic Sensor

    Science.gov (United States)

    Anis, Hanafi; Haris Indra Fadhillah, Ahmad; Darma, Surya; Soekirno, Santoso

    2018-04-01

    Automatic navigation for drones is being actively developed these days, across a wide variety of drone types and automatic functions. The drone used in this study was an aircraft with four propellers, i.e. a quadcopter. In this experiment, image processing was used to recognize the position of an object and an ultrasonic sensor was used to measure the distance to an obstacle. The method used to trace an obstacle in image processing was the Lucas-Kanade-Tomasi tracker, which is widely used due to its high accuracy. The ultrasonic sensor complements the image processing so that objects are fully detected. The obstacle avoidance system was evaluated by observing the program's decisions under various obstacle conditions read by the camera and ultrasonic sensor. PID controllers based on visual feedback are used to control the drone's movement.
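
    A sketch of the Lucas-Kanade-Tomasi tracking step with OpenCV; the camera index and tracking parameters are assumptions, and the ultrasonic fusion and PID control loop are not shown.

```python
import cv2

cap = cv2.VideoCapture(0)                    # onboard camera (assumed index)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# Pick strong (Shi-Tomasi) corners on the obstacle once.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.3, minDistance=7)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Follow the corners from frame to frame with pyramidal Lucas-Kanade flow.
    points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    points = points[status.ravel() == 1].reshape(-1, 1, 2)
    # The tracked points' positions feed the avoidance logic; here we draw them.
    for p in points.reshape(-1, 2):
        cv2.circle(frame, tuple(p.astype(int)), 3, (0, 0, 255), -1)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:                 # Esc to quit
        break
    prev_gray = gray
```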

  3. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    Science.gov (United States)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  4. Precise X-ray and video overlay for augmented reality fluoroscopy.

    Science.gov (United States)

    Chen, Xin; Wang, Lejing; Fallavollita, Pascal; Navab, Nassir

    2013-01-01

    The camera-augmented mobile C-arm (CamC) augments any mobile C-arm by a video camera and mirror construction and provides a co-registration of X-ray with video images. The accurate overlay between these images is crucial to high-quality surgical outcomes. In this work, we propose a practical solution that improves the overlay accuracy for any C-arm orientation by: (i) improving the existing CamC calibration, (ii) removing distortion effects, and (iii) accounting for the mechanical sagging of the C-arm gantry due to gravity. A planar phantom is constructed and placed at different distances to the image intensifier in order to obtain the optimal homography that co-registers X-ray and video with a minimum error. To alleviate distortion, both X-ray calibration based on equidistant grid model and Zhang's camera calibration method are implemented for distortion correction. Lastly, the virtual detector plane (VDP) method is adapted and integrated to reduce errors due to the mechanical sagging of the C-arm gantry. The overlay errors are 0.38±0.06 mm when not correcting for distortion, 0.27±0.06 mm when applying Zhang's camera calibration, and 0.27±0.05 mm when applying X-ray calibration. Lastly, when taking into account all angular and orbital rotations of the C-arm, as well as correcting for distortion, the overlay errors are 0.53±0.24 mm using VDP and 1.67±1.25 mm excluding VDP. The augmented reality fluoroscope achieves an accurate video and X-ray overlay when applying the optimal homography calculated from distortion correction using X-ray calibration together with the VDP.

  5. A new video studio for CERN

    CERN Multimedia

    Anaïs Vernede

    2011-01-01

    On Monday, 14 February 2011 CERN's new video studio was inaugurated with a recording of "Spotlight on CERN", featuring an interview with the DG, Rolf Heuer.   CERN's new video studio. Almost all international organisations have a studio for their audiovisual communications, and now it's CERN’s turn to acquire such a facility. “In the past, we've made videos using the Globe audiovisual facilities and sometimes using the small photographic studio, which is equipped with simple temporary sets that aren’t really suitable for video,” explains Jacques Fichet, head of CERN‘s audiovisual service. Once the decision had been taken to create the new 100 square-metre video studio, the work took only five months to complete. The studio, located in Building 510, is equipped with a cyclorama (a continuous smooth white wall used as a background) measuring 3 m in height and 16 m in length, as well as a teleprompter, a rail-mounted camera dolly fo...

  6. Energy minimization of mobile video devices with a hardware H.264/AVC encoder based on energy-rate-distortion optimization

    Science.gov (United States)

    Kang, Donghun; Lee, Jungeon; Jung, Jongpil; Lee, Chul-Hee; Kyung, Chong-Min

    2014-09-01

    In mobile video systems powered by battery, reducing the encoder's compression energy consumption is critical to prolonging battery lifetime. Previous energy-rate-distortion (E-R-D) optimization methods based on a software codec are not suitable for practical mobile camera systems because the energy consumption is too large and the encoding rate is too low. In this paper, we propose an E-R-D model for a hardware codec, based on a gate-level simulation framework, to measure the switching activity and the energy consumption. From the proposed E-R-D model, an energy-minimizing algorithm for mobile video camera sensors has been developed with the GOP (Group of Pictures) size and QP (Quantization Parameter) as run-time control variables. Our experimental results show that the proposed algorithm provides up to 31.76% energy savings while satisfying the rate and distortion constraints.

  7. Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region

    Directory of Open Access Journals (Sweden)

    Matko Šarić

    2008-06-01

    Full Text Available The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We propose a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant-colored pixels to the total number of pixels. With this approach, the detection of gradual transitions is improved by decreasing the number of false positives caused by certain camera operations. We also compare the performance of our algorithm with that of the standard twin-comparison method.
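
    The twin-comparison logic lends itself to a short sketch; a minimal version over a per-frame difference signal, with both thresholds left as tunable assumptions.

```python
def twin_comparison(diffs, t_low, t_high):
    """diffs: per-frame difference metric (e.g. dominant-color-ratio change).
    Returns a list of (start_frame, end_frame) shot boundaries."""
    boundaries, start, accum = [], None, 0.0
    for i, d in enumerate(diffs):
        if d >= t_high:                      # high threshold: abrupt cut
            boundaries.append((i, i))
            start, accum = None, 0.0
        elif d >= t_low:                     # low threshold: candidate gradual
            if start is None:
                start, accum = i, 0.0
            accum += d                       # accumulate the difference
        else:
            # Accept the gradual transition only if the accumulated
            # difference reaches the high threshold.
            if start is not None and accum >= t_high:
                boundaries.append((start, i - 1))
            start, accum = None, 0.0
    return boundaries
```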

  8. Citizen Camera-Witnessing: A Case Study of the Umbrella Movement

    Directory of Open Access Journals (Sweden)

    Wai Han Lo

    2016-08-01

    Full Text Available Citizen camera-witnessing is a new concept that describes the use of mobile camera phones to engage in civic expression. I argue that the meaning of this concept should not be limited to painful testimony; instead, it is a mode of civic, camera-mediated mass self-testimony to brutality. The use of mobile phone recordings in Hong Kong’s Umbrella Movement is examined to understand how mobile cameras are employed as personal witnessing devices that provide recordings to indict unjust events and engage others in the civic movement. This study examined Facebook posts and YouTube videos of the Umbrella Movement between September 22, 2014 and December 22, 2014. The results suggest that the camera phone not only contributes to witnessing the brutal repression of the state, but also witnesses the beauty of the movement, and provides a testimony that allows rituals to develop and semi-codes to be transformed.

  9. NEMA NU-1 2007 based and independent quality control software for gamma cameras and SPECT

    International Nuclear Information System (INIS)

    Vickery, A; Joergensen, T; De Nijs, R

    2011-01-01

    Thorough quality assurance of gamma and SPECT cameras requires careful handling of the measured quality control (QC) data. Most gamma camera manufacturers provide users with camera-specific QC software, which is indeed a useful tool for following the day-to-day performance of a single camera. However, when it comes to objective performance comparisons of different gamma cameras and a deeper understanding of the calculated numbers, the use of camera-specific QC software without access to the source code is best avoided: calculations and definitions might differ, and manufacturer-independent, standardized results are preferred. Based upon the NEMA Standards Publication NU 1-2007, we have developed a suite of easy-to-use data handling software for processing acquired QC data, providing the user with instructive images and text files with the results.
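
    As an illustration of the kind of NU 1-2007 figure of merit such software computes, a sketch of integral uniformity on a flood image; the 9-point smoothing kernel is the one conventionally used in NEMA uniformity analysis, and the mask handling is simplified.

```python
import numpy as np
from scipy.ndimage import convolve

# Conventional NEMA 9-point smoothing kernel applied before uniformity analysis.
NEMA_KERNEL = np.array([[1, 2, 1],
                        [2, 4, 2],
                        [1, 2, 1]]) / 16.0

def integral_uniformity(flood, ufov_mask):
    """Integral uniformity (%) over the useful field of view.
    flood: 2-D counts image; ufov_mask: boolean UFOV mask."""
    smoothed = convolve(flood.astype(float), NEMA_KERNEL, mode="nearest")
    vals = smoothed[ufov_mask]
    return 100.0 * (vals.max() - vals.min()) / (vals.max() + vals.min())
```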

  10. THE SYSTEM OF TECHNICAL VISION IN THE ARCHITECTURE OF THE REMOTE CONTROL SYSTEM

    Directory of Open Access Journals (Sweden)

    S. V. Shavetov

    2014-03-01

    Full Text Available The paper deals with the development of a video broadcasting system for controlling mobile robots over the Internet. A brief overview of the issues encountered in broadcasting a video stream in real time, and their solutions, is given. Affordable and versatile technical vision solutions are considered. An approach for frame-accurate video rebroadcasting to an unlimited number of end-users is proposed. The optimal performance parameters of the network equipment for a given number of cameras are defined. The system was tested on five IP cameras from different manufacturers. The average time delay for broadcasting in MJPEG format was 200 ms over the local network and 500 ms over the Internet.

  11. Use of video taping during simulator training

    International Nuclear Information System (INIS)

    Helton, M.; Young, P.

    1987-01-01

    The use of a video camera for training is not a new idea and is used throughout the country for training in such areas as computers, car repair, music and even in such non-technical areas as fishing. Reviewing a taped simulator training session will aid the student in his job performance regardless of the position he holds in his organization. If the student is to be examined on simulator performance, video will aid in this training in many different ways

  12. Using Stereo Vision to Support the Automated Analysis of Surveillance Videos

    Science.gov (United States)

    Menze, M.; Muhle, D.

    2012-07-01

    Video surveillance systems are no longer a collection of independent cameras, manually controlled by human operators. Instead, smart sensor networks are developed, able to fulfil certain tasks on their own and thus supporting security personnel by automated analyses. One well-known task is the derivation of people's positions on a given ground plane from monocular video footage. An improved accuracy for the ground position as well as a more detailed representation of single salient people can be expected from a stereoscopic processing of overlapping views. Related work mostly relies on dedicated stereo devices or camera pairs with a small baseline. While this set-up is helpful for the essential step of image matching, the high accuracy potential of a wide baseline and the according good intersection geometry is not utilised. In this paper we present a stereoscopic approach, working on overlapping views of standard pan-tilt-zoom cameras which can easily be generated for arbitrary points of interest by an appropriate reconfiguration of parts of a sensor network. Experiments are conducted on realistic surveillance footage to show the potential of the suggested approach and to investigate the influence of different baselines on the quality of the derived surface model. Promising estimations of people's position and height are retrieved. Although standard matching approaches show helpful results, future work will incorporate temporal dependencies available from image sequences in order to reduce computational effort and improve the derived level of detail.

  13. USING STEREO VISION TO SUPPORT THE AUTOMATED ANALYSIS OF SURVEILLANCE VIDEOS

    Directory of Open Access Journals (Sweden)

    M. Menze

    2012-07-01

    Full Text Available Video surveillance systems are no longer a collection of independent cameras, manually controlled by human operators. Instead, smart sensor networks are developed, able to fulfil certain tasks on their own and thus supporting security personnel by automated analyses. One well-known task is the derivation of people’s positions on a given ground plane from monocular video footage. An improved accuracy for the ground position as well as a more detailed representation of single salient people can be expected from a stereoscopic processing of overlapping views. Related work mostly relies on dedicated stereo devices or camera pairs with a small baseline. While this set-up is helpful for the essential step of image matching, the high accuracy potential of a wide baseline and the according good intersection geometry is not utilised. In this paper we present a stereoscopic approach, working on overlapping views of standard pan-tilt-zoom cameras which can easily be generated for arbitrary points of interest by an appropriate reconfiguration of parts of a sensor network. Experiments are conducted on realistic surveillance footage to show the potential of the suggested approach and to investigate the influence of different baselines on the quality of the derived surface model. Promising estimations of people’s position and height are retrieved. Although standard matching approaches show helpful results, future work will incorporate temporal dependencies available from image sequences in order to reduce computational effort and improve the derived level of detail.

  14. Video Surveillance in Mental Health Facilities: Is it Ethical?

    Science.gov (United States)

    Stolovy, Tali; Melamed, Yuval; Afek, Arnon

    2015-05-01

    Video surveillance is a tool for managing safety and security within public spaces. In mental health facilities, the major benefit of video surveillance is that it enables 24 hour monitoring of patients, which has the potential to reduce violent and aggressive behavior. The major disadvantage is that such observation is by nature intrusive. It diminishes privacy, a factor of huge importance for psychiatric inpatients. Thus, an ongoing debate has developed following the increasing use of cameras in this setting. This article presents the experience of a medium-large academic state hospital that uses video surveillance, and explores the various ethical and administrative aspects of video surveillance in mental health facilities.

  15. Implementation of smart phone video plethysmography and dependence on lighting parameters.

    Science.gov (United States)

    Fletcher, Richard Ribón; Chamberlain, Daniel; Paggi, Nicholas; Deng, Xinyue

    2015-08-01

    The remote measurement of heart rate (HR) and heart rate variability (HRV) via a digital camera (video plethysmography) has emerged as an area of great interest for biomedical and health applications. While a few implementations of video plethysmography have been demonstrated on smart phones under controlled lighting conditions, it has been challenging to create a general, scalable solution due to the large variability in smart phone hardware performance, software architecture, and the variable response to lighting parameters. In this context, we present a self-contained smart phone implementation of video plethysmography for Android OS, which employs both stochastic and deterministic algorithms, and we use it to study the effect of lighting parameters (illuminance, color spectrum) on the accuracy of remote HR measurement. Using two different phone models, we present the median HR error for five different video plethysmography algorithms under three different types of lighting (natural sunlight, compact fluorescent, and halogen incandescent) and variations in brightness. For most algorithms, we found the optimum light brightness to be in the range 1000-4000 lux and the optimum lighting types to be compact fluorescent and natural light. Moderate errors were found for most algorithms with some devices under conditions of low brightness (4000 lux). Our analysis also identified camera frame rate jitter as a major source of variability and error across different phone models, but this can be largely corrected through non-linear resampling. Based on testing with six human subjects, our real-time Android implementation successfully predicted the measured HR with a median error of -0.31 bpm and an inter-quartile range of 2.1 bpm.
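
    A minimal sketch of the jitter correction mentioned above: resampling the unevenly timestamped per-frame signal onto a uniform time base by interpolation before any spectral HR analysis.

```python
import numpy as np

def resample_uniform(timestamps, samples, fps):
    """timestamps: actual frame capture times (s), possibly jittery;
    samples: per-frame signal value (e.g. mean ROI green level).
    Returns (uniform_times, resampled_signal) at the nominal frame rate."""
    t_uniform = np.arange(timestamps[0], timestamps[-1], 1.0 / fps)
    return t_uniform, np.interp(t_uniform, timestamps, samples)
```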

  16. Automatic Traffic Data Collection under Varying Lighting and Temperature Conditions in Multimodal Environments: Thermal versus Visible Spectrum Video-Based Systems

    Directory of Open Access Journals (Sweden)

    Ting Fu

    2017-01-01

    Full Text Available Vision-based monitoring systems using visible spectrum (regular video) cameras can complement or substitute conventional sensors and provide rich positional and classification data. Although new camera technologies, including thermal video sensors, may improve the performance of digital video-based sensors, their performance under various conditions has rarely been evaluated at multimodal facilities. The purpose of this research is to integrate existing computer vision methods for automated data collection and evaluate the detection, classification, and speed measurement performance of thermal video sensors under varying lighting and temperature conditions. Thermal and regular video data was collected simultaneously under different conditions across multiple sites. Although the regular video sensor narrowly outperformed the thermal sensor during daytime, the performance of the thermal sensor is significantly better for low visibility and shadow conditions, particularly for pedestrians and cyclists. Retraining the algorithm on thermal data yielded an improvement in the global accuracy of 48%. Thermal speed measurements were consistently more accurate than for the regular video at daytime and nighttime. Thermal video is insensitive to lighting interference and pavement temperature, solves issues associated with visible light cameras for traffic data collection, and offers other benefits such as privacy, insensitivity to glare, storage space, and lower processing requirements.

  17. An unsupervised method for summarizing egocentric sport videos

    Science.gov (United States)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    People are getting more interested in recording their sport activities using head-worn or hand-held cameras. This type of video, called egocentric sport video, has different motion and appearance patterns compared with life-logging videos. While a life-logging video can be described in terms of well-defined human-object interactions, it is not trivial to describe egocentric sport videos using well-defined activities. For this reason, summarizing egocentric sport videos based on human-object interaction might fail to produce meaningful results. In this paper, we propose an unsupervised method for summarizing egocentric videos by identifying the key-frames of the video. Our method utilizes both appearance and motion information and it automatically finds the number of key-frames. Our blind user study on the new dataset collected from YouTube shows that in 93.5% of cases, the users chose the proposed method as their first video summary choice. In addition, our method is within the top 2 choices of the users in 99% of the studies.
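
    A minimal sketch of unsupervised key-frame selection in this spirit, assuming only appearance (colour histograms) and an illustrative drift threshold; the paper's method also uses motion information, so this is a simplified stand-in in which the number of key-frames likewise emerges automatically.

        import numpy as np

        def select_keyframes(frames, threshold=0.25):
            """Pick key-frames when appearance drifts past a threshold.

            frames: iterable of HxWx3 uint8 arrays. A new key-frame is
            declared whenever the colour histogram moves far enough from
            the last key-frame, so the key-frame count is automatic.
            """
            keyframes, last_hist = [], None
            for i, frame in enumerate(frames):
                hist, _ = np.histogramdd(
                    frame.reshape(-1, 3), bins=(8, 8, 8), range=((0, 256),) * 3)
                hist = hist.ravel() / hist.sum()
                # Total variation distance between histograms, in [0, 1].
                if last_hist is None or 0.5 * np.abs(hist - last_hist).sum() > threshold:
                    keyframes.append(i)
                    last_hist = hist
            return keyframes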

  18. Vibration control of a camera mount system for an unmanned aerial vehicle using piezostack actuators

    International Nuclear Information System (INIS)

    Oh, Jong-Seok; Choi, Seung-Bok; Han, Young-Min

    2011-01-01

    This work proposes an active mount for the camera systems of unmanned aerial vehicles (UAVs) in order to control unwanted vibrations. An active actuator of the proposed mount is devised as an inertial type, in which a piezostack actuator is directly connected to the inertial mass. After evaluating the actuating force of the actuator, it is combined with the rubber element of the mount, whose natural frequency is determined based on the measured vibration characteristics of the UAV. Based on the governing equations of motion of the active camera mount, a robust sliding mode controller (SMC) is then formulated with consideration of parameter uncertainties and the hysteresis behavior of the actuator. Subsequently, the vibration control performance of the proposed active mount is experimentally evaluated in the time and frequency domains. In addition, a full camera mount system of a UAV that is supported by four active mounts is considered and its vibration control performance is evaluated in the frequency domain using a hardware-in-the-loop simulation (HILS) method.
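
    A hedged sketch of the sliding mode control structure on a one-degree-of-freedom mount model: the sliding surface combines the displacement error and its rate, and a saturated switching term rejects the base excitation. All numerical parameters are illustrative, and the hysteresis compensation of the actual controller is omitted.

        import numpy as np

        # 1-DOF active mount model: m*x'' + c*x' + k*x = u + d(t)
        m, c, k = 0.5, 8.0, 2.0e4          # illustrative mount parameters
        lam, eta, phi = 60.0, 40.0, 0.01   # surface slope, gain, boundary layer

        def smc_force(x, xdot, x_ref=0.0):
            """Sliding mode control: s = e' + lam*e, u = u_eq - eta*sat(s/phi)."""
            e, edot = x - x_ref, xdot
            s = edot + lam * e
            u_eq = c * xdot + k * x - m * lam * edot   # cancels nominal dynamics
            return u_eq - m * eta * np.clip(s / phi, -1.0, 1.0)

        # Simulate vibration rejection under a 30 Hz sinusoidal disturbance.
        dt, x, xdot = 1e-4, 1e-3, 0.0
        for n in range(20000):
            d = 5.0 * np.sin(2 * np.pi * 30.0 * n * dt)
            u = smc_force(x, xdot)
            xddot = (u + d - c * xdot - k * x) / m
            xdot += xddot * dt
            x += xdot * dt
        print(f"residual displacement: {abs(x):.2e} m")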

  19. Active Video Game Exercise Training Improves the Clinical Control of Asthma in Children: Randomized Controlled Trial

    Science.gov (United States)

    Gomes, Evelim L. F. D.; Carvalho, Celso R. F.; Peixoto-Souza, Fabiana Sobral; Teixeira-Carvalho, Etiene Farah; Mendonça, Juliana Fernandes Barreto; Stirbulov, Roberto; Sampaio, Luciana Maria Malosá; Costa, Dirceu

    2015-01-01

    Objective The aim of the present study was to determine whether aerobic exercise involving an active video game system improved asthma control, airway inflammation and exercise capacity in children with moderate to severe asthma. Design A randomized, controlled, single-blinded clinical trial was carried out. Thirty-six children with moderate to severe asthma were randomly allocated to either a video game group (VGG; n = 20) or a treadmill group (TG; n = 16). Both groups completed an eight-week supervised program with two weekly 40-minute sessions. Pre-training and post-training evaluations involved the Asthma Control Questionnaire, exhaled nitric oxide levels (FeNO), maximum exercise testing (Bruce protocol) and lung function. Results No differences between the VGG and TG were found at baseline. Improvements occurred in both groups with regard to asthma control and exercise capacity. Moreover, a significant reduction in FeNO was found in the VGG. Conclusions Aerobic exercise involving an active video game had a positive impact on children with asthma in terms of clinical control, improvement in their exercise capacity and a reduction in pulmonary inflammation. Trial Registration Clinicaltrials.gov NCT01438294 PMID:26301706

  20. Active Video Game Exercise Training Improves the Clinical Control of Asthma in Children: Randomized Controlled Trial.

    Science.gov (United States)

    Gomes, Evelim L F D; Carvalho, Celso R F; Peixoto-Souza, Fabiana Sobral; Teixeira-Carvalho, Etiene Farah; Mendonça, Juliana Fernandes Barreto; Stirbulov, Roberto; Sampaio, Luciana Maria Malosá; Costa, Dirceu

    2015-01-01

    The aim of the present study was to determine whether aerobic exercise involving an active video game system improved asthma control, airway inflammation and exercise capacity in children with moderate to severe asthma. A randomized, controlled, single-blinded clinical trial was carried out. Thirty-six children with moderate to severe asthma were randomly allocated to either a video game group (VGG; n = 20) or a treadmill group (TG; n = 16). Both groups completed an eight-week supervised program with two weekly 40-minute sessions. Pre-training and post-training evaluations involved the Asthma Control Questionnaire, exhaled nitric oxide levels (FeNO), maximum exercise testing (Bruce protocol) and lung function. No differences between the VGG and TG were found at baseline. Improvements occurred in both groups with regard to asthma control and exercise capacity. Moreover, a significant reduction in FeNO was found in the VGG. Aerobic exercise involving an active video game had a positive impact on children with asthma in terms of clinical control, improvement in their exercise capacity and a reduction in pulmonary inflammation. Clinicaltrials.gov NCT01438294.

  1. Effects of active video games on body composition: a randomized controlled trial.

    Science.gov (United States)

    Maddison, Ralph; Foley, Louise; Ni Mhurchu, Cliona; Jiang, Yannan; Jull, Andrew; Prapavessis, Harry; Hohepa, Maea; Rodgers, Anthony

    2011-07-01

    Sedentary activities such as video gaming are independently associated with obesity. Active video games, in which players physically interact with images on screen, may help increase physical activity and improve body composition. The aim of this study was to evaluate the effect of active video games over a 6-mo period on weight, body composition, physical activity, and physical fitness. We conducted a 2-arm, parallel, randomized controlled trial in Auckland, New Zealand. A total of 322 overweight and obese children aged 10-14 y, who were current users of sedentary video games, were randomly assigned at a 1:1 ratio to receive either an active video game upgrade package (intervention, n = 160) or to have no change (control group, n = 162). The primary outcome was the change from baseline in body mass index (BMI; in kg/m(2)). Secondary outcomes were changes in percentage body fat, physical activity, cardiorespiratory fitness, video game play, and food snacking. At 24 wk, the treatment effect on BMI (-0.24; 95% CI: -0.44, -0.05; P = 0.02) favored the intervention group. The change (±SE) in BMI from baseline increased in the control group (0.34 ± 0.08) but remained the same in the intervention group (0.09 ± 0.08). There was also evidence of a reduction in body fat in the intervention group (-0.83%; 95% CI: -1.54%, -0.12%; P = 0.02). Daily time spent playing active video games at 24 wk increased in the intervention group (10.03 min; 95% CI: 6.26, 13.81 min), accompanied by a reduction in time spent playing nonactive video games (-9.39 min; 95% CI: -19.38, 0.59 min; P = 0.06). An active video game intervention has a small but definite effect on BMI and body composition in overweight and obese children. This trial was registered in the Australian New Zealand Clinical Trials Registry at http://www.anzctr.org.au/ as ACTRN12607000632493.

  2. Neutral-beam performance analysis using a CCD camera

    International Nuclear Information System (INIS)

    Hill, D.N.; Allen, S.L.; Pincosy, P.A.

    1986-01-01

    We have developed an optical diagnostic system suitable for characterizing the performance of energetic neutral beams. An absolutely calibrated CCD video camera is used to view the neutral beam as it passes through a relatively high pressure (10⁻⁵ Torr) region outside the neutralizer: collisional excitation of the fast deuterium atoms produces Hα emission (λ = 6561 Å) that is proportional to the local atomic current density, independent of the species mix of accelerated ions over the energy range 5 to 20 keV. Digital processing of the video signal provides profile and aiming information for beam optimization. 6 refs., 3 figs.
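
    Since the Hα signal is proportional to the local atomic current density, a beam profile can be sketched directly from a calibrated image by integrating along the beam axis. The calibration constant and function names below are illustrative assumptions, not values from the paper.

        import numpy as np

        def beam_profile(image, cal_A_per_count=1.0e-6):
            """Horizontal current-density profile from a calibrated CCD image.

            Because the H-alpha signal is proportional to the local atomic
            current density, integrating the image along the beam axis gives
            a relative profile; cal_A_per_count converts counts to amperes.
            """
            profile = image.astype(float).sum(axis=0) * cal_A_per_count
            centroid = np.average(np.arange(profile.size), weights=profile)
            fwhm_mask = profile >= 0.5 * profile.max()
            # profile, aim point (pixel), and full width at half maximum (px)
            return profile, centroid, int(fwhm_mask.sum())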

  3. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    Science.gov (United States)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system are presented, where Lyapunov stability is shown. Simulation and experimental results demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, showing the capabilities of the visual-servo control system for localization (i.e., motion capturing) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulates above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.

  4. HDR 192Ir source speed measurements using a high speed video camera

    International Nuclear Information System (INIS)

    Fonseca, Gabriel P.; Viana, Rodrigo S. S.; Yoriyaz, Hélio; Podesta, Mark; Rubo, Rodrigo A.; Sales, Camila P. de; Reniers, Brigitte; Verhaegen, Frank

    2015-01-01

    Purpose: The dose delivered with an HDR 192Ir afterloader can be separated into a dwell component, and a transit component resulting from the source movement. The transit component is directly dependent on the source speed profile and it is the goal of this study to measure accurate source speed profiles. Methods: A high speed video camera was used to record the movement of a 192Ir source (Nucletron, an Elekta company, Stockholm, Sweden) for interdwell distances of 0.25–5 cm with dwell times of 0.1, 1, and 2 s. Transit dose distributions were calculated using a Monte Carlo code simulating the source movement. Results: The source stops at each dwell position, oscillating around the desired position for a duration of up to (0.026 ± 0.005) s. The source speed profile shows variations between 0 and 81 cm/s with an average speed of ∼33 cm/s for most of the interdwell distances. The source stops for up to (0.005 ± 0.001) s at nonprogrammed positions in between two programmed dwell positions. The dwell time correction applied by the manufacturer compensates the transit dose between the dwell positions, leading to a maximum overdose of 41 mGy for the considered cases and assuming an air-kerma strength of 48 000 U. The transit dose component is not uniformly distributed, leading to over- and underdoses, which is within 1.4% for commonly prescribed doses (3–10 Gy). Conclusions: The source maintains its speed even for the short interdwell distances. Dose variations due to the transit dose component are much lower than the prescribed treatment doses for brachytherapy, although the transit dose component should be evaluated individually for clinical cases.
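
    The speed-profile measurement reduces to differentiating the per-frame source positions with respect to the frame times. A minimal sketch, with an assumed 1 kHz frame rate and an illustrative dwell-detection threshold:

        import numpy as np

        def speed_profile(positions_cm, fps=1000.0):
            """Source speed between consecutive frames of a high speed camera.

            positions_cm: source position along the catheter per frame (cm),
            e.g. obtained by locating the source marker in each image.
            """
            x = np.asarray(positions_cm, dtype=float)
            v = np.diff(x) * fps          # finite-difference speed, cm/s
            dwell = np.abs(v) < 0.5       # near-zero speed -> dwell or stop
            return v, dwell

        # Example: 1 cm interdwell travel sampled at 1 kHz.
        t = np.arange(0.0, 0.05, 1e-3)
        x = np.clip(33.0 * t, 0.0, 1.0)   # ~33 cm/s average transit speed
        v, dwell = speed_profile(x)
        print(v.max(), dwell.sum())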

  5. Joystick-controlled video console game practice for developing power wheelchairs users' indoor driving skills.

    Science.gov (United States)

    Huang, Wei Pin; Wang, Chia Cheng; Hung, Jo Hua; Chien, Kai Chun; Liu, Wen-Yu; Cheng, Chih-Hsiu; Ng, How-Hing; Lin, Yang-Hua

    2015-02-01

    [Purpose] This study aimed to determine the effectiveness of joystick-controlled video console games in enhancing subjects' ability to control power wheelchairs. [Subjects and Methods] Twenty healthy young adults without prior experience of driving power wheelchairs were recruited. Four commercially available video games were used as training programs to practice joystick control in catching falling objects, crossing a river, tracing the route while floating on a river, and navigating through a garden maze. An indoor power wheelchair driving test, including straight lines, and right and left turns, was completed before and after the video game practice, during which electromyographic signals of the upper limbs were recorded. The paired t-test was used to compare the differences in driving performance and muscle activities before and after the intervention. [Results] Following the video game intervention, participants took significantly less time to complete the course, with less lateral deviation when turning the indoor power wheelchair. However, muscle activation in the upper limbs was not significantly affected. [Conclusion] This study demonstrates the feasibility of using joystick-controlled commercial video games to train individuals in the control of indoor power wheelchairs.

  6. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    Science.gov (United States)

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  7. Piloting the feasibility of head-mounted video technology to augment student feedback during simulated clinical decision-making: An observational design pilot study.

    Science.gov (United States)

    Forbes, Helen; Bucknall, Tracey K; Hutchinson, Alison M

    2016-04-01

    Clinical decision-making is a complex activity that is critical to patient safety. Simulation, augmented by feedback, affords learners the opportunity to learn critical clinical decision-making skills. More detailed feedback following simulation exercises has the potential to further enhance student learning, particularly in relation to developing improved clinical decision-making skills. To investigate the feasibility of head-mounted video camera recordings, to augment feedback, following acute patient deterioration simulations. Pilot study using an observational design. Ten final-year nursing students participated in three simulation exercises, each focussed on detection and management of patient deterioration. Two observers collected behavioural data using an adapted version of Gaba's Clinical Simulation Tool, to provide verbal feedback to each participant, following each simulation exercise. Participants wore a head-mounted video camera during the second simulation exercise only. Video recordings were replayed to participants to augment feedback, following the second simulation exercise. Data were collected on: participant performance (observed and perceived); participant perceptions of feedback methods; and head-mounted video camera recording feasibility and capability for detailed audio-visual feedback. Management of patient deterioration improved for six participants (60%). Increased perceptions of confidence (70%) and competence (80%) were reported by the majority of participants. Few participants (20%) agreed that the video recording specifically enhanced their learning. The visual field of the head-mounted video camera was not always synchronised with the participant's field of vision, thus affecting the usefulness of some recordings. The usefulness of the video recordings, to enhance verbal feedback to participants on detection and management of simulated patient deterioration, was inconclusive. Modification of the video camera glasses, to improve alignment between the camera's visual field and the participant's field of vision, may enhance the usefulness of future recordings.

  8. Retail video analytics: an overview and survey

    Science.gov (United States)

    Connell, Jonathan; Fan, Quanfu; Gabbur, Prasad; Haas, Norman; Pankanti, Sharath; Trinh, Hoang

    2013-03-01

    Today retail video analytics has gone beyond the traditional domain of security and loss prevention by providing retailers insightful business intelligence such as store traffic statistics and queue data. Such information allows for enhanced customer experience, optimized store performance, reduced operational costs, and ultimately higher profitability. This paper gives an overview of various camera-based applications in retail as well as the state-of-the-art computer vision techniques behind them. It also presents some of the promising technical directions for exploration in retail video analytics.

  9. Quality control of the gamma camera/computer interface

    International Nuclear Information System (INIS)

    Busemann-Sokole, E.

    1983-01-01

    Reporting on the conference mentioned, the author indicates that separate technical inspection of the gamma camera and the attached computer is not sufficient: the parts of the interface, and the hardware or software, can themselves contain sources of error. In order to obtain the best diagnostic image, a number of control measurements are recommended, dealing with image intensification, intensifier offset, linearity of transformation, exclusion of 'data drop' or 'bit drop', two-pulse timing, correct response at different counting rates, and response to triggers (electrocardiogram). The last and most important recommendation is to record in writing the particulars of each inspection and control measurement, the particulars and solutions of problems, and any modifications in hardware and software. (Auth.)

  10. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the technical solutions have only just passed the prototyping phase and they vary a lot. Presence capture cameras still face the same quality issues as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the cooperation of several cameras brings a new dimension to these quality factors, and new quality features must also be validated: for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? The work describes the quality factors which remain valid in presence capture cameras and assesses their importance. Moreover, the new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.

  11. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    International Nuclear Information System (INIS)

    Pei, Chengquan; Wu, Shengli; Tian, Jinshou; Liu, Zhen; Fang, Yuman; Gao, Guilong; Liang, Lingliang; Wen, Wenlong

    2015-01-01

    An intelligent control system for an X-ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented, which can control time delay, electric focusing, image gain adjustment, the switching of the sweep voltage, the acquisition of environment parameters, etc. The system consists of 16 A/D converters and 16 D/A converters, a 32-channel general purpose input/output (GPIO) and two sensors. An isolated DC/DC converter with multiple outputs and a single mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A and I/O. The software was designed using a graphical programming language and can remotely access the corresponding instrument from a website. The entire intelligent control system can acquire the desirable data at a speed of 30 Mb/s and store it for later analysis. The intelligent system was implemented on a streak camera in a DIM and it shows a temporal resolution of 11.25 ps, spatial distortion of less than 10% and a dynamic range of 279:1. The intelligent control system has been successfully used in a streak camera to verify the synchronization of a multi-channel laser on the Inertial Confinement Fusion Facility.

  12. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Chengquan [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Wu, Shengli, E-mail: slwu@mail.xjtu.edu.cn [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Tian, Jinshou [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); Liu, Zhen [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Fang, Yuman [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Gao, Guilong; Liang, Lingliang [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Wen, Wenlong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China)

    2015-11-01

    An intelligent control system for an X-ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented, which can control time delay, electric focusing, image gain adjustment, the switching of the sweep voltage, the acquisition of environment parameters, etc. The system consists of 16 A/D converters and 16 D/A converters, a 32-channel general purpose input/output (GPIO) and two sensors. An isolated DC/DC converter with multiple outputs and a single mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A and I/O. The software was designed using a graphical programming language and can remotely access the corresponding instrument from a website. The entire intelligent control system can acquire the desirable data at a speed of 30 Mb/s and store it for later analysis. The intelligent system was implemented on a streak camera in a DIM and it shows a temporal resolution of 11.25 ps, spatial distortion of less than 10% and a dynamic range of 279:1. The intelligent control system has been successfully used in a streak camera to verify the synchronization of a multi-channel laser on the Inertial Confinement Fusion Facility.

  13. An electronic pan/tilt/magnify and rotate camera system

    International Nuclear Information System (INIS)

    Zimmermann, S.; Martin, H.L.

    1992-01-01

    A new camera system has been developed for omnidirectional image-viewing applications that provides pan, tilt, magnify, and rotational orientation within a hemispherical field of view (FOV) without any moving parts. The imaging device is based on the fact that the image from a fish-eye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. More specifically, an incoming fish-eye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment. As a result, this device can accomplish the functions of pan, tilt, rotation, and magnification throughout a hemispherical FOV without the need for any mechanical devices. Multiple images, each with different image magnifications and pan-tilt-rotate parameters, can be obtained from a single camera.
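
    The mathematical correction can be sketched for the common equidistant fisheye model (r = f * theta): a virtual pan/tilt/zoom view is generated by casting rays through a virtual image plane, rotating them by the pan and tilt angles, and mapping each ray back to fisheye pixel coordinates. The projection model and all parameters are assumptions; the device's actual transform is not specified in the abstract.

        import numpy as np

        def fisheye_lookup(pan, tilt, out_size=256, fov=40.0, f_pix=500.0,
                           cx=960.0, cy=960.0):
            """Per-pixel source coordinates for an electronic pan/tilt view.

            Assumes an equidistant fisheye (r = f * theta) looking along +Z.
            Returns (u, v) arrays indexing the circular fisheye image.
            """
            half = np.tan(np.radians(fov / 2.0))
            g = np.linspace(-half, half, out_size)
            xs, ys = np.meshgrid(g, -g)                  # virtual image plane
            rays = np.stack([xs, ys, np.ones_like(xs)], -1)

            # Rotate rays by pan (about y) then tilt (about x).
            cp, sp = np.cos(pan), np.sin(pan)
            ct, st = np.cos(tilt), np.sin(tilt)
            Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
            d = rays @ (Rx @ Ry).T
            d /= np.linalg.norm(d, axis=-1, keepdims=True)

            theta = np.arccos(np.clip(d[..., 2], -1.0, 1.0))  # off-axis angle
            phi = np.arctan2(d[..., 1], d[..., 0])
            r = f_pix * theta                                 # equidistant model
            return cx + r * np.cos(phi), cy + r * np.sin(phi)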

  14. Improved embedded non-linear processing of video for camera surveillance

    NARCIS (Netherlands)

    Cvetkovic, S.D.; With, de P.H.N.

    2009-01-01

    For real-time imaging in surveillance applications, image fidelity is of primary importance to ensure customer confidence. The fidelity is obtained, among others, via dynamic range expansion and video signal enhancement. The dynamic range of the signal needs adaptation because the range of the sensor signal often exceeds that of the display.

  15. Video repairing under variable illumination using cyclic motions.

    Science.gov (United States)

    Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung

    2006-05-01

    This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We experimented on our system with some difficult examples with variable illumination, where the capturing camera can be stationary or in motion.

  16. Risk analysis of a video-surveillance system

    NARCIS (Netherlands)

    Rothkrantz, L.; Lefter, I.

    2011-01-01

    The paper describes a surveillance system of cameras installed on lampposts of a military area. The surveillance system has been designed to detect unwanted visitors or suspicious behaviours. The area is composed of streets and building blocks, and is surrounded by gates and water. The video recordings are analysed to detect such events.

  17. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frames independently, performing relatively simple operations. Therefore, with DVC the complexity is shifted from encoder to decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application of DVC to multi-camera video. A key element of the DVC decoder is the Side Information (SI), an estimation of the to-be-decoded frame. Another key element is the residual estimation, indicating the reliability of the SI, which is used to calculate the parameters of the correlation noise model between the SI and the original frame. In this thesis new methods for inter-camera SI generation are analyzed in the stereo and multiview settings.
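
    A minimal sketch of the simplest inter-frame SI generator under these definitions: the frame is estimated as the midpoint of the two adjacent decoded key frames, and the residual is approximated from their disagreement. This is an illustrative stand-in, not the thesis's method.

        import numpy as np

        def side_information(prev_frame, next_frame):
            """Naive temporal side information: midpoint of adjacent key frames."""
            return ((prev_frame.astype(np.float32)
                     + next_frame.astype(np.float32)) / 2.0)

        def residual_estimate(prev_frame, next_frame):
            """Cheap correlation-noise proxy: where the key frames disagree,
            the side information is likely unreliable."""
            return np.abs(prev_frame.astype(np.float32)
                          - next_frame.astype(np.float32)) / 2.0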

  18. Cross-Layer QoS Control for Video Communications over Wireless Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Pei Yong

    2005-01-01

    Assuming a wireless ad hoc network consisting of homogeneous video users with each of them also serving as a possible relay node for other users, we propose a cross-layer rate-control scheme based on an analytical study of how the effective video transmission rate is affected by the prevailing operating parameters, such as the interference environment, the number of transmission hops to a destination, and the packet loss rate. Furthermore, in order to provide error-resilient video delivery over such wireless ad hoc networks, a cross-layer joint source-channel coding (JSCC) approach, to be used in conjunction with rate-control, is proposed and investigated. This approach attempts to optimally apply the appropriate channel coding rate given the constraints imposed by the effective transmission rate obtained from the proposed rate-control scheme, the allowable real-time video play-out delay, and the prevailing channel conditions. Simulation results are provided which demonstrate the effectiveness of the proposed cross-layer combined rate-control and JSCC approach.
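
    The joint source-channel coding idea above can be illustrated with a hedged sketch: given a packet-loss probability from the rate-control model, choose the least-redundant (n, k) erasure code whose residual loss stays under a target. The binomial loss model and all parameter values are illustrative assumptions, not taken from the paper.

        import math

        def residual_loss(n, k, p):
            """P(more than n-k of n packets lost) for an (n, k) erasure code,
            assuming independent packet losses with probability p."""
            return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                       for i in range(n - k + 1, n + 1))

        def pick_code_rate(n, p, target=1e-3):
            """Highest k (least redundancy) meeting the residual-loss target."""
            for k in range(n, 0, -1):
                if residual_loss(n, k, p) <= target:
                    return k
            return 1

        # Example: 10% packet loss, blocks of 20 packets.
        k = pick_code_rate(20, 0.10, 1e-3)
        print(k, k / 20)   # payload packets per block and source-code rate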

  19. Quality control of radiosurgery: dosimetry with a micro-chamber in a spherical phantom

    International Nuclear Information System (INIS)

    Casado Villalon, F. J.; Navarro Guirado, F.; Garci Pareja, S.; Benitez Villegas, E. M.; Galan Montenegro, P.; Moreno Saiz, C.

    2013-01-01

    Small-field dosimetry is part of the quality control of cranial radiosurgery treatments. In this work, the absorbed dose at the isocenter calculated by the treatment planning system is compared with values measured experimentally with a micro-chamber inserted in a spherical phantom. (Author)

  20. Inexpensive remote video surveillance system with microcomputer and solar cells

    International Nuclear Information System (INIS)

    Guevara Betancourt, Edder

    2013-01-01

    A low-cost prototype for remote video surveillance is developed around a Raspberry Pi board. Additionally, the theoretical basis for making it energy independent through solar cells and a battery bank is presented. Some existing commercial monitoring systems are studied and analyzed, covering components such as cameras, communication devices (WiFi and 3G), free software packages for video surveillance, control mechanisms, and the theory of remote photovoltaic systems. A series of steps is laid out to implement the module and to install, configure and test each of the hardware and software elements that make it up, exploring the feasibility of adding intelligence to the system using the chosen software. Events generated by motion detection can be viewed, archived and extracted in a simple, intuitive way. Implementing the module with a microcomputer and video surveillance and motion detection software (ZoneMinder) proved to be an option with great potential, as the platform provided all the tools needed for robust and secure monitoring and recording. (author)

  1. Online Tracking of Outdoor Lighting Variations for Augmented Reality with Moving Cameras

    OpenAIRE

    Liu, Yanli; Granier, Xavier

    2012-01-01

    In augmented reality, one of the key tasks in achieving a convincing visual appearance consistency between virtual objects and video scenes is to maintain coherent illumination along the whole sequence. As outdoor illumination is largely dependent on the weather, the lighting condition may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations from videos captured with moving cameras.

  2. The Camera-Based Assessment Survey System (C-BASS): A towed camera platform for reef fish abundance surveys and benthic habitat characterization in the Gulf of Mexico

    Science.gov (United States)

    Lembke, Chad; Grasty, Sarah; Silverman, Alex; Broadbent, Heather; Butcher, Steven; Murawski, Steven

    2017-12-01

    An ongoing challenge for fisheries management is to provide cost-effective and timely estimates of habitat-stratified fish densities. Traditional approaches use modified commercial fishing gear (such as trawls and baited hooks) that have biases in species selectivity and may also be inappropriate for deployment in some habitat types. Underwater visual and optical approaches offer the promise of more precise and less biased assessments of relative fish abundance, as well as direct estimates of absolute fish abundance. A number of video-based approaches have been developed and the technology for data acquisition, calibration, and synthesis has been developing rapidly. Beginning in 2012, our group of engineers and researchers at the University of South Florida has been working towards the goal of completing large scale, video-based surveys in the eastern Gulf of Mexico. This paper discusses design considerations and the development of a towed camera system for the collection of video-based data on commercially and recreationally important reef fishes and benthic habitat on the West Florida Shelf. Factors considered during development included potential habitat types to be assessed, sea-floor bathymetry, vessel support requirements, personnel requirements, and cost-effectiveness of system components. This region-specific effort has resulted in a towed platform called the Camera-Based Assessment Survey System, or C-BASS, which has proven capable of surveying tens of kilometers of video transects per day and can produce cost-effective population estimates of reef fishes together with coincident benthic habitat classification.

  3. CameraHRV: robust measurement of heart rate variability using a camera

    Science.gov (United States)

    Pai, Amruta; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2018-02-01

    The inter-beat-interval (time period of the cardiac cycle) changes slightly for every heartbeat; this variation is measured as Heart Rate Variability (HRV). HRV is presumed to occur due to interactions between the parasympathetic and sympathetic nervous systems. Therefore, it is sometimes used as an indicator of the stress level of an individual. HRV also reveals some clinical information about cardiac health. Currently, HRV is accurately measured using contact devices such as a pulse oximeter. However, recent research in the field of non-contact imaging photoplethysmography (iPPG) has made vital sign measurements using just the video recording of any exposed skin (such as a person's face) possible. The current signal processing methods for extracting HRV using peak detection perform well for contact-based systems but have poor performance for iPPG signals. The main reason for this poor performance is the fact that current methods are sensitive to the large noise sources which are often present in iPPG data. Further, current methods are not robust to the motion artifacts that are common in iPPG systems. We developed a new algorithm, CameraHRV, for robustly extracting HRV even at the low SNR common in iPPG recordings. CameraHRV combines spatial combination and frequency demodulation to obtain HRV from the instantaneous frequency of the iPPG signal. CameraHRV outperforms other current methods of HRV estimation. Ground truth data were obtained from an FDA-approved pulse oximeter for validation purposes. CameraHRV on iPPG data showed an error of 6 milliseconds for low-motion and varying skin tone scenarios. The improvement in error was 14%. In the case of high-motion scenarios like reading, watching and talking, the error was 10 milliseconds.
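
    The frequency-demodulation idea can be sketched with standard tools: band-pass the iPPG trace to the cardiac band, take the analytic signal, and read the instantaneous frequency from the phase derivative; the spread of the implied inter-beat intervals then summarizes HRV. This assumes scipy and differs in detail from CameraHRV itself.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def instantaneous_hr(ippg, fs=30.0):
            """Heart rate track from the instantaneous frequency of an iPPG trace."""
            b, a = butter(3, [0.7 / (fs / 2), 3.0 / (fs / 2)], btype="band")
            x = filtfilt(b, a, ippg)                 # keep only the cardiac band
            phase = np.unwrap(np.angle(hilbert(x)))
            inst_freq = np.diff(phase) * fs / (2 * np.pi)   # Hz, one per sample
            return 60.0 * inst_freq                          # bpm

        def hrv_ms(hr_bpm):
            """Spread of the inter-beat interval implied by the HR track (ms)."""
            ibi_ms = 60000.0 / np.asarray(hr_bpm)
            return float(np.std(ibi_ms))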

  4. Human recognition at a distance in video

    CERN Document Server

    Bhanu, Bir

    2010-01-01

    Most biometric systems employed for human recognition require physical contact with, or close proximity to, a cooperative subject. Far more challenging is the ability to reliably recognize individuals at a distance, when viewed from an arbitrary angle under real-world environmental conditions. Gait and face data are the two biometrics that can be most easily captured from a distance using a video camera. This comprehensive and logically organized text/reference addresses the fundamental problems associated with gait and face-based human recognition, from color and infrared video data that are acquired at a distance.

  5. Motion video analysis using planar parallax

    Science.gov (United States)

    Sawhney, Harpreet S.

    1994-04-01

    Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis--for instance independent object motion when the camera itself is moving, figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane, and the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard general 3D motion analysis problem. In addition, a natural coordinate system in the environment is used to describe the scene which can simplify motion based segmentation. This work is a part of an ongoing effort in our group towards video annotation and analysis for indexing and retrieval. Results from a demonstration system being developed are presented.
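
    A minimal sketch of the plane-as-reference idea, assuming OpenCV and a set of point matches on the reference plane: the dominant plane is registered between frames with a homography, and whatever image content still moves after warping is parallax from off-plane structure or independent object motion. This is a simplified illustration, not the paper's full 3D analysis.

        import cv2
        import numpy as np

        def parallax_residual(frame_a, frame_b, pts_a, pts_b):
            """Warp frame_a onto frame_b with the dominant-plane homography;
            remaining intensity differences flag parallax or independent motion.

            pts_a, pts_b: Nx2 matched points on the reference plane.
            """
            H, _ = cv2.findHomography(np.asarray(pts_a, np.float32),
                                      np.asarray(pts_b, np.float32),
                                      cv2.RANSAC, 3.0)
            h, w = frame_b.shape[:2]
            stabilized = cv2.warpPerspective(frame_a, H, (w, h))
            return cv2.absdiff(stabilized, frame_b)   # ~0 on the plane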

  6. Comparison of cardiopulmonary resuscitation techniques using video camera recordings.

    OpenAIRE

    Mann, C J; Heyworth, J

    1996-01-01

    OBJECTIVE--To use video recordings to compare the performance of resuscitation teams in relation to their previous training in cardiac resuscitation. METHODS--Over a 10 month period all cardiopulmonary resuscitations carried out in an accident and emergency (A&E) resuscitation room were videotaped. The following variables were monitored: (1) time to perform three defibrillatory shocks; (2) time to give intravenous adrenaline (centrally or peripherally); (3) the numbers and grade of medical and nursing staff involved.

  7. The photothermal camera - a new non destructive inspection tool

    Energy Technology Data Exchange (ETDEWEB)

    Piriou, M. [AREVA NP Centre Technique SFE - Zone Industrielle et Portuaire Sud - BP13 - 71380 Saint Marcel (France)

    2007-07-01

    The Photothermal Camera, developed by the Non-Destructive Inspection Department at AREVA NP's Technical Center, is a device created to replace penetrant testing, a method whose drawbacks include environmental pollutants, industrial complexity and potential operator exposure. We have already seen how the Photothermal Camera can work alongside or instead of conventional surface inspection techniques such as penetrant, magnetic particle or eddy current testing. With it, users can detect, without any surface contact, ligament defects or openings measuring just a few microns on rough oxidized, machined or welded metal parts. It also enables them to work on geometrically varied surfaces, hot parts or insulating (dielectric) materials without interference from the magnetic properties of the inspected part. The Photothermal Camera method has already been used for in situ inspections of tube/plate welds on an intermediate heat exchanger of the Phenix fast reactor. It also replaced the penetrant method for weld inspections on the ITER vacuum chamber, for weld crack detection on vessel head adapter J-welds, and for detecting cracks brought on by heat crazing. What sets this innovative method apart from others is its ability to operate at distances of up to two meters from the inspected part, its remote control functionality at distances of up to 15 meters (or more via Ethernet), and its emissions-free environmental cleanliness. These make it a true alternative to penetrant testing, to the benefit of operator and environmental protection. (author)

  8. MovieRemix: Having Fun Playing with Videos

    Directory of Open Access Journals (Sweden)

    Nicola Dusi

    2011-01-01

    Known as remix or video remix, a video produced by re-editing and recombining existing footage may have new and different meanings with respect to the source material. Unfortunately, when managing audiovisual objects, the technological aspect can be a burden for many creative users. Motivated by the large success of the gaming market, we propose a novel game and an architecture to make the remix process a pleasant and stimulating gaming experience. MovieRemix allows people to act like a movie director, but instead of dealing with cast and cameras, the player has to create a remixed video starting from a given screenplay and from video shots retrieved from the provided catalog. MovieRemix is not a simple video editing tool nor a simple game: it is a challenging environment that stimulates creativity. To tempt people to play the game, players can access different levels of screenplay (original, outline, derived) and can also challenge other players. Computational and storage issues are kept at the server side, whereas the client device just needs to be capable of playing streaming videos.

  9. Satellite markers: a simple method for ground truth car pose on stereo video

    Science.gov (United States)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-04-01

    Artificial prediction of the future location of other cars in the context of advanced safety systems is a must. The remote estimation of car pose, and particularly its heading angle, is key to predicting its future location. Stereo vision systems allow us to get the 3D information of a scene. Ground truth in this specific context is associated with referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task associated with the combination of different kinds of sensors. The novelty of this paper is a method to generate ground truth car pose only from video data. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system when it is moving, because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for the 3D ground truth generation. In our case study, we focus on accurate car heading angle estimation of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, and the instantaneous spatial orientation for each camera at frame level.
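
    As an illustration of the pose-recovery step, the sketch below assumes the 3D positions of the satellite markers in the car frame and their detected image coordinates, and recovers the heading angle with a standard PnP solve (OpenCV). The marker layout, frame conventions and function names are assumptions, not the paper's exact pipeline.

        import cv2
        import numpy as np

        def car_heading_deg(marker_xyz, marker_uv, K, dist=None):
            """Heading (yaw) of a car from roof-marker correspondences.

            marker_xyz: Nx3 marker positions in the car frame (metres).
            marker_uv : Nx2 detected marker centres in the image (pixels).
            K         : 3x3 camera intrinsic matrix.
            """
            ok, rvec, _ = cv2.solvePnP(
                np.asarray(marker_xyz, np.float32),
                np.asarray(marker_uv, np.float32),
                K, dist)
            if not ok:
                raise RuntimeError("PnP failed")
            R, _ = cv2.Rodrigues(rvec)
            # Yaw about the vertical axis of the car frame (ZYX convention).
            return float(np.degrees(np.arctan2(R[1, 0], R[0, 0])))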

  10. A simple and accurate method for the quality control of the I.I.-DR apparatus using the CCD camera

    International Nuclear Information System (INIS)

    Igarashi, Hitoshi; Shiraishi, Akihisa; Kuraishi, Masahiko

    2000-01-01

    With the advancing development of CCD cameras, the I.I.-DR apparatus has been introduced into the x-ray fluoroscopy television system. Consequently, quality control of the system has become a complicated task. We developed a simple, accurate method for quality control of the I.I.-DR apparatus using the CCD camera. Experiments were performed separately for the imager system [laser imager, DDX (dynamic digital x-ray system)] and the imaging system (I.I., ND-filter, IRIS, CCD camera). Quality control of the imager system was done by simply examining both input and output characteristics with a sliding pattern. Quality control of the imaging system was conducted by estimating the AVE (average volume element), which was obtained using a phantom under constant conditions. The results indicated that this simplified method is useful as a weekly quality control check of the I.I.-DR apparatus using the CCD camera. (author)

  11. A framework for multi-object tracking over distributed wireless camera networks

    Science.gov (United States)

    Gau, Victor; Hwang, Jenq-Neng

    2010-07-01

    In this paper, we propose a unified framework targeting two important issues in a distributed wireless camera network, i.e., object tracking and network communication, to achieve reliable multi-object tracking over distributed wireless camera networks. In the object tracking part, we propose a fully automated approach for tracking multiple objects across multiple cameras with overlapping and non-overlapping fields of view without initial training. To effectively exchange the tracking information among the distributed cameras, we propose an idle-probability-based broadcasting method, iPro, which adaptively adjusts the broadcast probability to improve the broadcast effectiveness in a dense saturated camera network. Experimental results for the multi-object tracking demonstrate the promising performance of our approach on real video sequences for cameras with overlapping and non-overlapping views. The modeling and ns-2 simulation results show that iPro almost approaches the theoretical performance upper bound if cameras are within each other's transmission range. In more general scenarios, e.g., in case of hidden node problems, the simulation results show that iPro significantly outperforms standard IEEE 802.11, especially when the number of competing nodes increases.
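
    A toy sketch of an idle-probability-based broadcast policy in the spirit of iPro: each node smooths its observation of channel idleness and uses it as its broadcast probability. The exact update rule of iPro is not given in the abstract; this is an assumed illustrative rule.

        import random

        class IProNode:
            """Toy idle-probability-based broadcaster for tracking updates."""

            def __init__(self, p_init=0.5, alpha=0.1):
                self.p = p_init          # current broadcast probability
                self.alpha = alpha       # smoothing factor
                self.idle_est = 1.0      # running estimate of channel idleness

            def observe_slot(self, channel_was_idle):
                # Exponential smoothing of the observed idle probability.
                self.idle_est += self.alpha * (float(channel_was_idle) - self.idle_est)
                # Broadcast more aggressively when the channel is mostly idle.
                self.p = max(0.05, min(1.0, self.idle_est))

            def should_broadcast(self):
                return random.random() < self.p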

  12. A Novel High Efficiency Fractal Multiview Video Codec

    Directory of Open Access Journals (Sweden)

    Shiping Zhu

    2015-01-01

    Multiview video, one of the main types of three-dimensional (3D) video signals, captured by a set of video cameras from various viewpoints, has attracted much interest recently. Data compression for multiview video has become a major issue. In this paper, a novel high efficiency fractal multiview video codec is proposed. Firstly, an intraframe algorithm based on the H.264/AVC intraprediction modes, and a combined fractal and motion compensation (CFMC) algorithm in which range blocks are predicted by domain blocks in the previously decoded frame using translational motion with a gray value transformation, are proposed for compressing the anchor viewpoint video. Then a temporal-spatial prediction structure and a fast disparity estimation algorithm exploiting parallax distribution constraints are designed to compress the multiview video data. The proposed fractal multiview video codec can exploit temporal and spatial correlations adequately. Experimental results show that it can obtain about a 0.36 dB increase in decoding quality and a 36.21% decrease in encoding bitrate compared with JMVC8.5, while the encoding time is reduced by 95.71%. Rate-distortion comparisons with other multiview video coding methods also demonstrate the superiority of the proposed scheme.
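
    The range-from-domain prediction with a gray value transformation described above reduces, per block, to a least-squares fit of a scale and offset over candidate translational matches. A minimal sketch follows; block size, search range, and function names are illustrative, not the codec's actual parameters.

        import numpy as np

        def fit_gray_transform(domain, rng):
            """Least-squares scale/offset so that s*domain + o approximates rng."""
            d, r = domain.ravel().astype(float), rng.ravel().astype(float)
            var = d.var()
            s = 0.0 if var == 0 else ((d - d.mean()) * (r - r.mean())).mean() / var
            o = r.mean() - s * d.mean()
            err = ((s * d + o - r) ** 2).mean()
            return s, o, err

        def predict_range_block(prev_frame, rng, y, x, search=4):
            """Best translational domain block (with gray transform) for one
            range block located at (y, x) in the current frame."""
            b = rng.shape[0]
            best = (np.inf, None)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= prev_frame.shape[0] - b and \
                       0 <= xx <= prev_frame.shape[1] - b:
                        dom = prev_frame[yy:yy + b, xx:xx + b]
                        s, o, err = fit_gray_transform(dom, rng)
                        if err < best[0]:
                            best = (err, (dy, dx, s, o))
            return best[1]   # motion vector plus gray-value transform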

  13. Parent-child relationships and self-control in male university students' desire to play video games.

    Science.gov (United States)

    Karbasizadeh, Sina; Jani, Masih; Keshvari, Mahtab

    2018-06-12

    To determine the relationship between the parent-child relationship, self-control and demographic characteristics and the desire to play video games among male university students at one university in Iran. This was a correlational, descriptive, applied study. A total of 103 male students were selected randomly as a study sample from the population of male students at Isfahan University in Iran. Data collection tools used were the Video Games Questionnaire, Tanji's Self-Control Scale, the Parent-Child Relationship Questionnaire, and a Demographic Questionnaire. Data were analysed using stepwise regression analysis. This study found several factors that increased male students' desire to play video games. Demographic characteristics associated with an increased tendency to play video games among male students in Iran are older age, larger number of family members, lower parental level of education and higher socio-economic class, while other significant factors are a lower level of self-control and a poorer parent-child relationship. Participants' higher socio-economic class, lower level of self-control and older age explained 8.2%, 5.2% and 5.9% of their desire to play video games, respectively. These three variables together accounted for a significant 16.9% of a male student's desire to play video games in this study. Moreover, lower levels of self-control and a poorer parent-child relationship were found to be accompanied by a greater desire to play video games among male university students.

  14. DV or Not DV: That is the Question When Producing Video for the Internet

    Directory of Open Access Journals (Sweden)

    Gregory Gutenko

    2002-06-01

    Pervasive advertising and marketing efforts promote consumer market digital video (DV) format camcorders as the ideal acquisition technology for Internet video production. A shared digital nature and simple interconnections between cameras and computers using IEEE1394/FireWire/i.Link® format digital cabling do suggest a natural affinity between DV and Internet production. However, experience with Web delivery reveals numerous obstacles associated with consumer-friendly camcorder design and feature sets that can severely compromise Internet streaming video quality. This paper describes the various features in consumer and industrial grade DV camcorder designs that lead to unnecessary image quality loss, and what must be done to avoid such loss. Conventional video production techniques that can lead to quality loss regardless of the camera technology used are also identified. A number of recommendations are offered that will help the videographer in an educational or production environment adapt to Internet limitations.

  15. Collaborative web-based annotation of video footage of deep-sea life, ecosystems and geological processes

    Science.gov (United States)

    Kottmann, R.; Ratmeyer, V.; Pop Ristov, A.; Boetius, A.

    2012-04-01

    More and more seagoing scientific expeditions use video-controlled research platforms such as Remotely Operated Vehicles (ROVs), Autonomous Underwater Vehicles (AUVs), and towed camera systems. These produce many hours of video material which contains detailed and scientifically highly valuable footage of the biological, chemical, geological, and physical aspects of the oceans. Many of the videos contain unique observations of unknown life-forms which are rare and which cannot be sampled and studied otherwise. To make such video material accessible online and to create a collaborative annotation environment, the "Video Annotation and processing platform" (V-App) was developed. A first, solely web-based installation for ROV videos is set up at the German Center for Marine Environmental Sciences (available at http://videolib.marum.de). It allows users to search and watch videos with a standard web browser based on the HTML5 standard. Moreover, V-App implements social web technologies allowing a distributed worldwide scientific community to collaboratively annotate videos anywhere at any time. It has several features fully implemented, among which are: • User login system for fine-grained permission and access controlVideo watching • Video search using keywords, geographic position, depth and time range and any combination thereof • Video annotation organised in themes (tracks) such as biology and geology, among others, in standard or full-screen mode • Annotation keyword management: administrative users can add, delete, and update single keywords for annotation or upload sets of keywords from Excel sheets • Download of products for scientific use. This unique web application system helps make costly ROV videos available online (estimated costs range between 5,000 and 10,000 Euros per hour, depending on the combination of ship and ROV). Moreover, with this system each expert annotation adds instantly available and valuable knowledge to the otherwise uncharted deep sea.

  16. Storage, access, and retrieval of endoscopic and laparoscopic video

    Science.gov (United States)

    Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.

    1999-05-01

    The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video into DICOM 3.0. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery (laparoscopy, microsurgery, surgical microscopy, second opinion, virtual reality). Therefore DSVS are also integrated into the DICOM video concept. We present an implementation for a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital (stereoscopic) video sequences relevant for surgery should be examined regarding the clip length necessary for diagnosis and documentation and the clip size manageable with today's hardware. Methods for DSVS compression are described, implemented, and tested. Image sources relevant for this paper include, among others, a stereoscopic laparoscope and a monoscopic endoscope. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video-cutting.

  17. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    OpenAIRE

    Steven Nicholas Graves, MA; Deana Saleh Shenaq, MD; Alexander J. Langerman, MD; David H. Song, MD, MBA

    2015-01-01

    Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeon's point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record a series of plastic and reconstructive surgical procedures.

  18. The Future of Video

    OpenAIRE

    Li, F.

    2016-01-01

    Executive Summary: A range of technological innovations (e.g. smart phones and digital cameras), infrastructural advances (e.g. broadband and 3G/4G wireless networks) and platform developments (e.g. YouTube, Facebook, Snapchat, Instagram, Amazon, and Netflix) are collectively transforming the way video is produced, distributed, consumed, archived – and importantly, monetised. Changes have been observed well beyond the mainstream TV and film industries, and these changes are increasingl...

  19. Person and gesture tracking with smart stereo cameras

    Science.gov (United States)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting, and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap, and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform and describe the person tracking and gesture tracking systems

  20. POINT CLOUD DERIVED FROM VIDEO FRAMES: ACCURACY ASSESSMENT IN RELATION TO TERRESTRIAL LASER SCANNING AND DIGITAL CAMERA DATA

    Directory of Open Access Journals (Sweden)

    P. Delis

    2017-02-01

    Full Text Available The use of image sequences in the form of video frames recorded on data storage is very useful, especially when working with large and complex structures. Two cameras were used in this study: a Sony NEX-5N (for the test object) and a Sony NEX-VG10E (for the historic building). In both cases, a Sony α f = 16 mm fixed-focus wide-angle lens was used. Single frames with sufficient overlap were selected from the video sequence using an equation for automatic frame selection. In order to improve the quality of the generated point clouds, each video frame underwent histogram equalization and image sharpening. Point clouds were generated from the video frames using an SGM-like image matching algorithm. The accuracy assessment was based on two reference point clouds: the first from terrestrial laser scanning and the second generated from images acquired using a high-resolution camera, the Nikon D800. The research showed that the highest accuracies are obtained for point clouds generated from video frames for which high-pass filtration and histogram equalization had been performed. The studies also showed that, to obtain a point cloud density comparable to TLS, the overlap between subsequent video frames must be 85 % or more. Based on the point cloud generated from video data, a parametric 3D model can be generated. This type of 3D model can be used in HBIM construction.
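
    The abstract names two preprocessing steps, histogram equalization and image sharpening, without their parameters. A minimal OpenCV sketch of that preprocessing, assuming 8-bit BGR frames (the blur sigma and unsharp weights are illustrative, not taken from the paper):

        import cv2

        def preprocess_frame(frame_bgr):
            """Equalize and sharpen a video frame before dense image matching."""
            # Equalize the luminance channel only, to avoid color shifts.
            ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
            ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
            equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
            # Unsharp masking: subtract a blurred copy to boost high frequencies.
            blurred = cv2.GaussianBlur(equalized, (0, 0), sigmaX=3)
            return cv2.addWeighted(equalized, 1.5, blurred, -0.5, 0)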

  1. Efficient Use of Video for 3D Modelling of Cultural Heritage Objects

    Science.gov (United States)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is a rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply than still image shooting in IBM techniques, because the latter needs thorough planning and proficiency. However, one faces mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images decreases the processing time and yields a reliable textured 3D model comparable with models produced by still imaging. Two experiments, modelling a building and a monument, are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find the final predicted accuracy and the model level of detail. Related to the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums, and low-detail documentation.

  2. EFFICIENT USE OF VIDEO FOR 3D MODELLING OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    B. Alsadik

    2015-03-01

    Full Text Available Currently, there is a rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply than still image shooting in IBM techniques, because the latter needs thorough planning and proficiency. However, one faces mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images decreases the processing time and yields a reliable textured 3D model comparable with models produced by still imaging. Two experiments, modelling a building and a monument, are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find the final predicted accuracy and the model level of detail. Related to the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums, and low-detail documentation.
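
    Both records above describe selecting a minimal set of video frames by object coverage and blur, without reproducing the selection equation. A common blur proxy is the variance of the Laplacian; a sketch of frame selection on that basis (the fixed step and threshold are illustrative stand-ins for the paper's coverage criterion):

        import cv2

        def sharpness(gray):
            """Variance of the Laplacian: a standard focus/blur measure."""
            return cv2.Laplacian(gray, cv2.CV_64F).var()

        def select_frames(video_path, step=15, blur_threshold=100.0):
            """Keep the index of every `step`-th frame that is sharp enough."""
            cap = cv2.VideoCapture(video_path)
            kept, index = [], 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                if index % step == 0:
                    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                    if sharpness(gray) >= blur_threshold:
                        kept.append(index)
                index += 1
            cap.release()
            return kept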

  3. Optimum color filters for CCD digital cameras

    Science.gov (United States)

    Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl

    1993-12-01

    As part of the ESPRIT II project No. 2103 (MASCOT) a high-performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k × 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors, as measured in the 1976 CIELUV uniform color space, for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle, with imperceptible color noise and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in the redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems to be feasible, implying that such an optimized color camera can achieve colorimetric performance so high that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.

  4. The development of a reliable multi-camera multiplexed CCTV system for safeguards surveillance

    International Nuclear Information System (INIS)

    Gilbert, R.S.; Chiang, K.S.

    1986-01-01

    The background, requirements, and system details for a simple, reliable Closed Circuit Television (CCTV) system are described. The design of the system presented allows up to 8 CCTV cameras of different makes to be multiplexed and their output recorded by three Time Lapse Recorders (TLRs) operating in parallel. This multiplexed or MUX-CCTV system is intended to be used by the IAEA for surveillance at several nuclear facilities. The system is unique in that it allows all of the cameras to be operated asynchronously and it provides high-quality video during replay. It also incorporates video event counting logic which enables IAEA inspectors to take a very quick inventory of the events recorded during unattended operation. This paper discusses the other phases of the system's development and presents some speculation about future changes which may enhance performance

  5. Distributed Autonomous Control of Multiple Spacecraft During Close Proximity Operations

    Science.gov (United States)

    2007-12-01

    Neubauer [54][55]. ... VII. LQR/APF CONTROL ALGORITHM APPROACH. The LQR approach can be recursively applied to the multiple spacecraft close... Neubauer and Swartwout's research [55]. It is generally possible to select a closed map over which the algorithm is stable and robust. For these... can be easily edited and transferred into video format for presentations. Modifications of camera key frames (camera position and angle) and

  6. Final Report for LDRD Project 02-FS-009 Gigapixel Surveillance Camera

    Energy Technology Data Exchange (ETDEWEB)

    Marrs, R E; Bennett, C L

    2010-04-20

    The threats of terrorism and proliferation of weapons of mass destruction add urgency to the development of new techniques for surveillance and intelligence collection. For example, the United States faces a serious and growing threat from adversaries who locate key facilities underground, hide them within other facilities, or otherwise conceal their location and function. Reconnaissance photographs are one of the most important tools for uncovering the capabilities of adversaries. However, current imaging technology provides only infrequent static images of a large area, or occasional video of a small area. We are attempting to add a new dimension to reconnaissance by introducing a capability for large area video surveillance. This capability would enable tracking of all vehicle movements within a very large area. The goal of our project is the development of a gigapixel video surveillance camera for high altitude aircraft or balloon platforms. From very high altitude platforms (20-40 km altitude) it would be possible to track every moving vehicle within an area of roughly 100 km x 100 km, about the size of the San Francisco Bay region, with a gigapixel camera. Reliable tracking of vehicles requires a ground sampling distance (GSD) of 0.5 to 1 m and a framing rate of approximately two frames per second (fps). For a 100 km x 100 km area the corresponding pixel count is 10 gigapixels for a 1-m GSD and 40 gigapixels for a 0.5-m GSD. This is an order of magnitude beyond the 1 gigapixel camera envisioned in our LDRD proposal. We have determined that an instrument of this capacity is feasible.
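
    The quoted pixel counts follow directly from the covered area and the ground sampling distance (GSD); for the 100 km × 100 km area, a worked check:

        N = \left( \frac{100\,\mathrm{km}}{\mathrm{GSD}} \right)^{2}
          = \left( \frac{10^{5}\,\mathrm{m}}{1\,\mathrm{m}} \right)^{2}
          = 10^{10}\ \mathrm{pixels} = 10\ \mathrm{gigapixels}

    Halving the GSD to 0.5 m doubles the pixel count per side, giving 4 × 10^10 pixels, i.e., the 40 gigapixels quoted above.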

  7. Value Added: the Case for Point-of-View Camera use in Orthopedic Surgical Education.

    Science.gov (United States)

    Karam, Matthew D; Thomas, Geb W; Taylor, Leah; Liu, Xiaoxing; Anthony, Chris A; Anderson, Donald D

    2016-01-01

    Orthopedic surgical education is evolving as educators search for new ways to enhance surgical skills training. Orthopedic educators should seek new methods and technologies to augment and add value to real-time orthopedic surgical experience. This paper describes a protocol whereby we have started to capture and evaluate specific orthopedic milestone procedures with a GoPro® point-of-view video camera and a dedicated video reviewing website as a way of supplementing the current paradigm in surgical skills training. We report our experience regarding the details and feasibility of this protocol. Upon identification of a patient undergoing surgical fixation of a hip or ankle fracture, an orthopedic resident places a GoPro® point-of-view camera on his or her forehead. All fluoroscopic images acquired during the case are saved and later incorporated into a video on the reviewing website. Surgical videos are uploaded to a secure server and are accessible for later review and assessment via a custom-built website. An electronic survey of resident participants was performed utilizing Qualtrics software. Results are reported using descriptive statistics. A total of 51 surgical videos involving 23 different residents have been captured to date. This includes 20 intertrochanteric hip fracture cases and 31 ankle fracture cases. The average duration of each surgical video was 1 hour and 16 minutes (range 40 minutes to 2 hours and 19 minutes). Of 24 orthopedic resident surgeons surveyed, 88% thought capturing a video portfolio of orthopedic milestones would benefit their education. There is a growing demand in orthopedic surgical education to extract more value from each surgical experience. While further work in development and refinement of such assessments is necessary, we feel that intraoperative video, particularly when captured and presented in a non-threatening, user friendly manner, can add significant value to the present and future paradigm of orthopedic surgical

  8. A Projector-Camera System for Augmented Card Playing and a Case Study with the Pelmanism Game

    Directory of Open Access Journals (Sweden)

    Nozomu Tanaka

    2017-05-01

    Full Text Available In this article, we propose a system for augmented card playing with a projector and a camera to add playfulness and increase communication among players of a traditional card game. The functionalities were derived on the basis of a user survey session with actual players. Playing cards are recognized using a video camera on the basis of template matching, without any artificial marker, with an accuracy of > 0.96. Players are also tracked, to provide person-dependent services, using a video camera from the direction of their hands appearing over the table. These functions are provided as an API; therefore, the user of our system, i.e., a developer, can easily augment playing card games. The Pelmanism game was augmented on top of the system to validate the concept of augmentation. The results showed the feasibility of the system's performance in an actual environment and its potential for enhancing playfulness and communication among players.
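
    The abstract reports markerless card recognition by template matching with accuracy > 0.96 but does not detail the pipeline. A minimal OpenCV sketch of the general technique, assuming rectified grayscale card images no larger than the scene (function name and threshold are illustrative):

        import cv2

        def identify_card(scene_gray, templates, threshold=0.8):
            """Return the label of the best-matching card template,
            or None if no template clears the threshold.
            `templates` maps card labels to grayscale template images."""
            best_label, best_score = None, threshold
            for label, template in templates.items():
                scores = cv2.matchTemplate(scene_gray, template,
                                           cv2.TM_CCOEFF_NORMED)
                _, max_val, _, _ = cv2.minMaxLoc(scores)
                if max_val > best_score:
                    best_label, best_score = label, max_val
            return best_label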

  9. Super Normal Vector for Human Activity Recognition with Depth Cameras.

    Science.gov (United States)

    Yang, Xiaodong; Tian, YingLi

    2017-05-01

    The advent of cost-effective and easy-to-operate depth cameras has facilitated a variety of visual recognition tasks, including human activity recognition. This paper presents a novel framework for recognizing human activities from video sequences captured by depth cameras. We extend the surface normal to the polynormal by assembling local neighboring hypersurface normals from a depth sequence to jointly characterize local motion and shape information. We then propose a general scheme of super normal vector (SNV) to aggregate the low-level polynormals into a discriminative representation, which can be viewed as a simplified version of the Fisher kernel representation. In order to globally capture the spatial layout and temporal order, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time cells. In extensive experiments, the proposed approach achieves superior performance to the state-of-the-art methods on four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D.

  10. Real-Time Range Sensing Video Camera for Human/Robot Interfacing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In comparison to stereovision, it is well known that structured-light illumination has distinct advantages including the use of only one camera, being significantly...

  11. Applied learning-based color tone mapping for face recognition in video surveillance system

    Science.gov (United States)

    Yew, Chuu Tian; Suandi, Shahrel Azmin

    2012-04-01

    In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its color or intensity statistics match those of the training dataset. It is well known that differences in commercial surveillance camera models and in the signal processing chipsets used by different manufacturers cause the color and intensity of images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using multi-class support vector machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.
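
    The abstract describes remapping an input image so its color or intensity statistics match those learned from a photorealistic training set. One simple realization of this idea (not necessarily the authors' exact mapping) is per-channel mean/standard-deviation transfer:

        import numpy as np

        def match_statistics(image, target_mean, target_std):
            """Remap each channel so its mean/std match training statistics.
            `target_mean` and `target_std` are per-channel values learned
            from the photorealistic training dataset."""
            image = image.astype(np.float64)
            out = np.empty_like(image)
            for c in range(image.shape[2]):
                mu, sigma = image[..., c].mean(), image[..., c].std()
                normalized = (image[..., c] - mu) / max(sigma, 1e-6)
                out[..., c] = normalized * target_std[c] + target_mean[c]
            return np.clip(out, 0, 255).astype(np.uint8)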

  12. Diagnostics and camera strobe timers for hydrogen pellet injectors

    International Nuclear Information System (INIS)

    Bauer, M.L.; Fisher, P.W.; Qualls, A.L.

    1993-01-01

    Hydrogen pellet injectors have been used to fuel fusion experimental devices for the last decade. As part of developments to improve pellet production and velocity, various diagnostic devices were implemented, ranging from witness plates to microwave mass meters to high-speed photography. This paper discusses details of the various implementations of light sources, cameras, synchronizing electronics, and other diagnostic systems developed at Oak Ridge for the Tritium Proof-of-Principle (TPOP) experiment at the Los Alamos National Laboratory's Tritium System Test Assembly (TSTA), a system built for the Oak Ridge Advanced Toroidal Facility (ATF), and the Tritium Pellet Injector (TPI) built for the Princeton Tokamak Fusion Test Reactor (TFTR). Although a number of diagnostic systems were implemented on each pellet injector, the emphasis here is on the development of a synchronization system for high-speed photography using pulsed light sources, standard video cameras, and video recorders. This system enabled near real-time visualization of the pellet shape, size, and flight trajectory over a wide range of pellet speeds and at one or two positions along the flight path. Additionally, the system provides synchronization pulses to the data system for pseudo points along the flight path, such as the estimated plasma edge. This was accomplished using an electronic system that took the time measured between sets of light gates and generated proportionally delayed triggers for light source strobes and pseudo points. Systems were built with two camera stations: one located after the end of the barrel and a second located closer to the main reactor vessel wall. Two or three light gates were used to sense pellet velocity, and various spacings were implemented on the three experiments. Both analog and digital schemes were examined for implementing the delay system. A digital technique was chosen
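
    The synchronization scheme described above measures the transit time between light gates a known distance apart and generates proportionally delayed triggers. A small sketch of that proportionality (the numbers are illustrative, not taken from the paper):

        def strobe_delays(gate_interval_s, gate_spacing_m, target_positions_m):
            """Given the transit time between two light gates a known
            distance apart, return trigger delays (in seconds) for strobes
            or pseudo points located past the second gate."""
            velocity = gate_spacing_m / gate_interval_s        # pellet speed, m/s
            return [x / velocity for x in target_positions_m]  # t = x / v

        # Example: gates 0.5 m apart, 0.4 ms transit time -> 1250 m/s pellet;
        # a strobe 2.0 m downstream fires 1.6 ms after the second gate.
        print(strobe_delays(0.4e-3, 0.5, [2.0]))  # [0.0016]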

  13. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    Science.gov (United States)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

    Embedded systems have been applied to many fields, including households and industrial sites. User interface technology with simple on-screen displays has been implemented more and more, user demands are increasing, and the systems have ever more applicable fields due to the high penetration rate of the Internet. Therefore, the demand for embedded systems tends to rise. An embedded system for image tracking was implemented. This system uses a fixed IP for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and each frame from the web camera is compared to measure the displacement vector, using a block matching algorithm and an edge detection algorithm for fast speed. The displacement vector is then used for pan/tilt motor control through an RS-232 serial cable. The embedded board utilizes the S3C2410 MPU, which uses the ARM920T core from Samsung. The operating system was ported to an embedded Linux kernel and the root file system was mounted. The stored images are sent to the client PC through the web browser, using the network functions of Linux, and a program based on the TCP/IP protocol was developed.
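
    The displacement-vector step described above is a block matching comparison between consecutive frames. A minimal sum-of-absolute-differences (SAD) sketch, assuming frames larger than block + 2 × search (block and search sizes are illustrative):

        import numpy as np

        def block_match(prev_gray, curr_gray, block=16, search=8):
            """Estimate the displacement of the central block between two
            grayscale frames by exhaustive SAD block matching; the result
            would drive the pan/tilt motors over the serial link."""
            h, w = prev_gray.shape
            y0, x0 = (h - block) // 2, (w - block) // 2
            ref = prev_gray[y0:y0 + block, x0:x0 + block].astype(np.int32)
            best_sad, best_dxdy = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = curr_gray[y0 + dy:y0 + dy + block,
                                     x0 + dx:x0 + dx + block].astype(np.int32)
                    sad = np.abs(cand - ref).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_dxdy = sad, (dx, dy)
            return best_dxdy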

  14. Monocular Camera/IMU/GNSS Integration for Ground Vehicle Navigation in Challenging GNSS Environments

    Directory of Open Access Journals (Sweden)

    Dennis Akos

    2012-03-01

    Full Text Available Low-cost MEMS-based IMUs, video cameras and portable GNSS devices are commercially available for automotive applications and some manufacturers have already integrated such facilities into their vehicle systems. GNSS provides positioning, navigation and timing solutions to users worldwide. However, signal attenuation, reflections or blockages may give rise to positioning difficulties. As opposed to GNSS, a generic IMU, which is independent of electromagnetic wave reception, can calculate a high-bandwidth navigation solution, however the output from a self-contained IMU accumulates errors over time. In addition, video cameras also possess great potential as alternate sensors in the navigation community, particularly in challenging GNSS environments, and are becoming more common as options in vehicles. Aiming at taking advantage of these existing onboard technologies for ground vehicle navigation in challenging environments, this paper develops an integrated camera/IMU/GNSS system based on the extended Kalman filter (EKF). Our proposed integration architecture is examined using a live dataset collected in an operational traffic environment. The experimental results demonstrate that the proposed integrated system provides accurate estimations and potentially outperforms the tightly coupled GNSS/IMU integration in challenging environments with sparse GNSS observations.

  15. Monocular camera/IMU/GNSS integration for ground vehicle navigation in challenging GNSS environments.

    Science.gov (United States)

    Chu, Tianxing; Guo, Ningyan; Backén, Staffan; Akos, Dennis

    2012-01-01

    Low-cost MEMS-based IMUs, video cameras and portable GNSS devices are commercially available for automotive applications and some manufacturers have already integrated such facilities into their vehicle systems. GNSS provides positioning, navigation and timing solutions to users worldwide. However, signal attenuation, reflections or blockages may give rise to positioning difficulties. As opposed to GNSS, a generic IMU, which is independent of electromagnetic wave reception, can calculate a high-bandwidth navigation solution, however the output from a self-contained IMU accumulates errors over time. In addition, video cameras also possess great potential as alternate sensors in the navigation community, particularly in challenging GNSS environments and are becoming more common as options in vehicles. Aiming at taking advantage of these existing onboard technologies for ground vehicle navigation in challenging environments, this paper develops an integrated camera/IMU/GNSS system based on the extended Kalman filter (EKF). Our proposed integration architecture is examined using a live dataset collected in an operational traffic environment. The experimental results demonstrate that the proposed integrated system provides accurate estimations and potentially outperforms the tightly coupled GNSS/IMU integration in challenging environments with sparse GNSS observations.

  16. Monocular Camera/IMU/GNSS Integration for Ground Vehicle Navigation in Challenging GNSS Environments

    Science.gov (United States)

    Chu, Tianxing; Guo, Ningyan; Backén, Staffan; Akos, Dennis

    2012-01-01

    Low-cost MEMS-based IMUs, video cameras and portable GNSS devices are commercially available for automotive applications and some manufacturers have already integrated such facilities into their vehicle systems. GNSS provides positioning, navigation and timing solutions to users worldwide. However, signal attenuation, reflections or blockages may give rise to positioning difficulties. As opposed to GNSS, a generic IMU, which is independent of electromagnetic wave reception, can calculate a high-bandwidth navigation solution, however the output from a self-contained IMU accumulates errors over time. In addition, video cameras also possess great potential as alternate sensors in the navigation community, particularly in challenging GNSS environments and are becoming more common as options in vehicles. Aiming at taking advantage of these existing onboard technologies for ground vehicle navigation in challenging environments, this paper develops an integrated camera/IMU/GNSS system based on the extended Kalman filter (EKF). Our proposed integration architecture is examined using a live dataset collected in an operational traffic environment. The experimental results demonstrate that the proposed integrated system provides accurate estimations and potentially outperforms the tightly coupled GNSS/IMU integration in challenging environments with sparse GNSS observations. PMID:22736999
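
    The three records above describe the same EKF-based camera/IMU/GNSS integration; the abstracts do not give the state vector or measurement models, so the following is only a generic EKF predict/update skeleton of the kind such integrations build on (all names illustrative):

        import numpy as np

        class ExtendedKalmanFilter:
            """Minimal EKF: IMU mechanization drives the prediction; GNSS
            and camera-derived measurements drive the updates."""

            def __init__(self, x0, P0):
                self.x, self.P = x0, P0

            def predict(self, f, F, Q):
                # f: nonlinear state propagation, F: its Jacobian, Q: process noise
                self.x = f(self.x)
                self.P = F @ self.P @ F.T + Q

            def update(self, z, h, H, R):
                # z: measurement (e.g., GNSS position or visual odometry)
                y = z - h(self.x)                    # innovation
                S = H @ self.P @ H.T + R             # innovation covariance
                K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
                self.x = self.x + K @ y
                self.P = (np.eye(len(self.x)) - K @ H) @ self.P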

  17. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through a CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  18. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through a CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  19. Disembodied perspective: third-person images in GoPro videos

    OpenAIRE

    Bédard, Philippe

    2015-01-01

    Used as much in extreme-sports videos and professional productions as in amateur and home videos, GoPro wearable cameras have become ubiquitous in contemporary moving image culture. During its swift and ongoing rise in popularity, GoPro has also enabled the creation of new and unusual points of view, among which are “third-person images”. This article introduces and defines this particular phenomenon through an approach that deals with both the aesthetic and technical characteristics of the i...

  20. A video imaging system and related control hardware for nuclear safeguards surveillance applications

    International Nuclear Information System (INIS)

    Whichello, J.V.

    1987-03-01

    A novel video surveillance system has been developed for safeguards applications in nuclear installations. The hardware was tested at a small experimental enrichment facility located at the Lucas Heights Research Laboratories. The system uses digital video techniques to store, encode, and transmit still television pictures over the public telephone network to a receiver located in the Australian Safeguards Office at Kings Cross, Sydney. A decoded, reconstructed picture is then obtained using a second video frame store. A computer-controlled video cassette recorder is used to automatically archive the surveillance pictures. The design of the surveillance system is described, with examples of its operation

  1. Camtracker: a new camera controlled high precision solar tracker system for FTIR-spectrometers

    Directory of Open Access Journals (Sweden)

    M. Gisi

    2011-01-01

    Full Text Available A new system to very precisely couple the radiation of a moving source into a Fourier Transform Infrared (FTIR) Spectrometer is presented. The Camtracker consists of a homemade altazimuthal solar tracker, a digital camera, and a homemade program to process the camera data and to control the motion of the tracker. The key idea is to evaluate the image of the radiation source on the entrance field stop of the spectrometer. We prove that the system reaches tracking accuracies of about 10 arc seconds for a ground-based solar absorption FTIR spectrometer, which is significantly better than current solar trackers. Moreover, due to the incorporation of a camera, the new system allows residual pointing errors to be documented and the solar disk center to be tracked even in the case of variable intensity distributions across the source due to cirrus or haze.
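
    The key idea above is to evaluate the image of the sun on the entrance field stop and steer the tracker accordingly. The Camtracker's actual image processing and control law are not given in the abstract, so this is only a generic centroid-servo sketch (gain and names illustrative):

        import numpy as np

        def centroid_offset(image, target_xy):
            """Intensity-weighted centroid of the solar image minus the
            desired point on the field stop; assumes a visible solar disk."""
            img = image.astype(np.float64)
            ys, xs = np.indices(img.shape)
            total = img.sum()
            cx, cy = (xs * img).sum() / total, (ys * img).sum() / total
            return cx - target_xy[0], cy - target_xy[1]

        # A simple proportional correction per control cycle:
        # az_step, el_step = -gain * dx, -gain * dy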

  2. NOAA Point Shapefile - Drop Camera transects, US Caribbean - Western Puerto Rico - Project NF-07-06-USVI-HAB - (2007), UTM 19N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a drop camera system. Photos and/or video were...

  3. NOAA Polyline Shapefile - Drop Camera transects, US Caribbean - Western Puerto Rico - Project NF-07-06-USVI-HAB - (2007), UTM 19N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a drop camera system. Photos and/or video were...

  4. Effects of Peer-Facilitated, Video-Based and Combined Peer-and-Video Education on Anxiety Among Patients Undergoing Coronary Angiography: Randomised controlled trial.

    Science.gov (United States)

    Habibzadeh, Hosein; Milan, Zahra D; Radfar, Moloud; Alilu, Leyla; Cund, Audrey

    2018-02-01

    Coronary angiography can be stressful for patients, and anxiety-caused physiological responses during the procedure increase the risk of dysrhythmia, coronary artery spasms, and rupture. This study therefore aimed to investigate the effects of peer, video, and combined peer-and-video training on anxiety among patients undergoing coronary angiography. This single-blinded randomised controlled clinical trial was conducted at two large educational hospitals in Iran between April and July 2016. A total of 120 adult patients undergoing coronary angiography were recruited. Using a block randomisation method, participants were assigned to one of four groups, with those in the control group receiving no training and those in the three intervention groups receiving either peer-facilitated training, video-based training or a combination of both. A Persian-language validated version of the State-Trait Anxiety Inventory was used to measure pre- and post-intervention anxiety. There were no statistically significant differences in mean pre-intervention anxiety scores between the four groups (F = 0.31; P = 0.81). In contrast, there was a significant reduction in post-intervention anxiety among all three intervention groups compared to the control group (F = 27.71; P < 0.01); however, there was no significant difference in anxiety level in terms of the type of intervention used. Peer, video and combined peer-and-video education were equally effective in reducing angiography-related patient anxiety. Such techniques are recommended to reduce anxiety amongst patients undergoing coronary angiography in hospitals in Iran.

  5. Remote Video Auditing in the Surgical Setting.

    Science.gov (United States)

    Pedersen, Anne; Getty Ritter, Elizabeth; Beaton, Megan; Gibbons, David

    2017-02-01

    Remote video auditing, a method first adopted by the food preparation industry, was later introduced to the health care industry as a novel approach to improving hand hygiene practices. This strategy yielded tremendous and sustained improvement, causing leaders to consider the potential effects of such technology on the complex surgical environment. This article outlines the implementation of remote video auditing and the first year of activity, outcomes, and measurable successes in a busy surgery department in the eastern United States. A team of anesthesia care providers, surgeons, and OR personnel used low-resolution cameras, large-screen displays, and cell phone alerts to make significant progress in three domains: application of the Universal Protocol for preventing wrong site, wrong procedure, wrong person surgery; efficiency metrics; and cleaning compliance. The use of cameras with real-time auditing and results-sharing created an environment of continuous learning, compliance, and synergy, which has resulted in a safer, cleaner, and more efficient OR. Copyright © 2017 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  6. Physics Girl: Where Education meets Cat Videos

    Science.gov (United States)

    Cowern, Dianna

    YouTube is usually considered an entertainment medium to watch cats, gaming, and music videos. But educational channels have been gaining momentum on the platform, some garnering millions of subscribers and billions of views. The Physics Girl YouTube channel is an educational series with PBS Digital Studios created by Dianna Cowern. Using Physics Girl as an example, this talk will examine what it takes to start a short-form educational video series, including logistics and resources. One benefit of video is that every failure is documented on camera and can, and will, be used in this talk as a learning tool. We will look at the channel's demographic reach, discuss best practices for effective physics outreach, and survey how online media and technology can facilitate good and bad learning. The aim of this talk is to show how videos are a unique way to share science and enrich the learning experience, in and out of a classroom.

  7. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    Directory of Open Access Journals (Sweden)

    Heegwang Kim

    2017-12-01

    Full Text Available Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.

  8. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    Science.gov (United States)

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
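
    Both defogging records invert the standard haze imaging model I = J·t + A·(1 − t) once the transmission map t and the atmospheric light A have been estimated. A minimal sketch of that final recovery step (the paper's disparity-based transmission estimation and color-line atmospheric light are not reproduced here):

        import numpy as np

        def recover_radiance(foggy_bgr, transmission, atmosphere, t_min=0.1):
            """Recover the defogged image J from I = J*t + A*(1 - t).
            `transmission` is an (H, W) map; `atmosphere` a length-3 vector."""
            t = np.clip(transmission, t_min, 1.0)[..., np.newaxis]
            J = (foggy_bgr.astype(np.float64) - atmosphere) / t + atmosphere
            return np.clip(J, 0, 255).astype(np.uint8)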

  9. Active Video Game Exercise Training Improves the Clinical Control of Asthma in Children: Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Evelim L F D Gomes

    Full Text Available The aim of the present study was to determine whether aerobic exercise involving an active video game system improved asthma control, airway inflammation, and exercise capacity in children with moderate to severe asthma. A randomized, controlled, single-blinded clinical trial was carried out. Thirty-six children with moderate to severe asthma were randomly allocated to either a video game group (VGG; n = 20) or a treadmill group (TG; n = 16). Both groups completed an eight-week supervised program with two weekly 40-minute sessions. Pre-training and post-training evaluations involved the Asthma Control Questionnaire, exhaled nitric oxide levels (FeNO), maximum exercise testing (Bruce protocol), and lung function. No differences between the VGG and TG were found at baseline. Improvements occurred in both groups with regard to asthma control and exercise capacity. Moreover, a significant reduction in FeNO was found in the VGG (p < 0.05). Although the mean energy expenditure at rest and during exercise training was similar for both groups, the maximum energy expenditure was higher in the VGG. The present findings strongly suggest that aerobic training promoted by an active video game had a positive impact on children with asthma in terms of clinical control, improvement in their exercise capacity, and a reduction in pulmonary inflammation. Clinicaltrials.gov NCT01438294.

  10. Experimental video signals distribution MMF network based on IEEE 802.11 standard

    Science.gov (United States)

    Kowalczyk, Marcin; Maksymiuk, Lukasz; Siuzdak, Jerzy

    2014-11-01

    The article focuses on presenting the achievements of experimental research on the transmission of digital video streams over a purpose-built ROF (Radio over Fiber) network. Its construction was based on merging a wireless IEEE 802.11 network, popularly referred to as Wi-Fi, with a passive optical network (PON) based on multimode fibers (MMF). The proposed approach can constitute an interesting proposal for monitoring systems covering extensive areas, which require a large coverage area while ensuring a relatively high degree of immunity to interference for the signals transmitted from video IP cameras to the monitoring center, as well as high configuration flexibility (the deployment of cameras can be easily changed).

  11. The benefits of being a video gamer in laparoscopic surgery.

    Science.gov (United States)

    Sammut, Matthew; Sammut, Mark; Andrejevic, Predrag

    2017-09-01

    Video games are mainly considered to be of entertainment value in our society. Laparoscopic surgery and video games are activities similarly requiring eye-hand and visual-spatial skills. Previous studies have not conclusively shown a positive correlation between video game experience and improved ability to accomplish visual-spatial tasks in laparoscopic surgery. This study was an attempt to investigate this relationship. The aim of the study was to investigate whether previous video gaming experience affects the baseline performance on a laparoscopic simulator trainer. Newly qualified medical officers with minimal experience in laparoscopic surgery were invited to participate in the study and assigned to the following groups: gamers (n = 20) and non-gamers (n = 20). Analysis included participants' demographic data and baseline video gaming experience. Laparoscopic skills were assessed using a laparoscopic simulator trainer. There were no significant demographic differences between the two groups. Each participant performed three laparoscopic tasks and mean scores between the two groups were compared. The gamer group had statistically significant better results in maintaining the laparoscopic camera horizon ± 15° (p value = 0.009), in the complex ball manipulation accuracy rates (p value = 0.024) and completed the complex laparoscopic simulator task in a significantly shorter time period (p value = 0.001). Although prior video gaming experience correlated with better results, there were no significant differences for camera accuracy rates (p value = 0.074) and in a two-handed laparoscopic exercise task accuracy rates (p value = 0.092). The results show that previous video-gaming experience improved the baseline performance in laparoscopic simulator skills. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  12. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    OpenAIRE

    Orts-Escolano, Sergio; Garcia-Rodriguez, Jose; Morell, Vicente; Cazorla, Miguel; Azorin-Lopez, Jorge; García-Chamizo, Juan Manuel

    2014-01-01

    In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphic processor units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, analysis and monitoring of the movement. These features allow the construction of a robust representation of the environment and interpret the behavior of mob...

  13. Evaluation of intrusion sensors and video assessment in areas of restricted passage

    International Nuclear Information System (INIS)

    Hoover, C.E.; Ringler, C.E.

    1996-04-01

    This report discusses an evaluation of intrusion sensors and video assessment in areas of restricted passage. The discussion focuses on applications of sensors and video assessment in suspended ceilings and air ducts. It also includes current and proposed requirements for intrusion detection and assessment. Detection and nuisance alarm characteristics of selected sensors as well as assessment capabilities of low-cost board cameras were included in the evaluation

  14. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    and evaluated. On-board there are six video cameras, each capturing images of 1024×1024 pixels at 12 bpp and a frame rate of 15 fps, totalling 1080 Mbit/s. In comparison, the average downlink data rate for these images is projected to be 50 kbit/s. This calls for efficient on-board processing to select
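
    The quoted aggregate rate follows from the camera parameters, taking 1 Mbit = 2^20 bits:

        6 \times 1024^{2}\,\mathrm{px} \times 12\,\mathrm{bit/px} \times 15\,\mathrm{s^{-1}}
          = 1\,132\,462\,080\ \mathrm{bit/s} = 1080\ \mathrm{Mbit/s}

    against a projected 50 kbit/s downlink, a reduction factor of more than 20,000, which is what motivates the on-board selection and compression.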

  15. stil113_0401r -- Point coverage of locations of still frames extracted from video imagery which depict sediment types

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  16. Accurate measurement of imaging photoplethysmographic signals based camera using weighted average

    Science.gov (United States)

    Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji

    2018-01-01

    Imaging photoplethysmography (IPPG) is an emerging technique for extracting the vital signs of human beings from video recordings. With its advantages of non-contact measurement, low cost, and easy operation, IPPG technology has become a research hot spot in the field of biomedicine. However, the noise disturbance caused by non-microarterial areas cannot be removed because of the uneven distribution of micro-arteries and the different signal strengths of each region, which results in a low signal-to-noise ratio of IPPG signals and low accuracy of heart rate. In this paper, we propose a method of improving the signal-to-noise ratio of camera-based IPPG signals from each sub-region of the face using a weighted average. Firstly, we obtain the regions of interest (ROI) of a subject's face from the camera. Secondly, each region of interest is tracked and feature-matched in each frame of the video, and each tracked region of the face is divided into 60×60 pixel blocks. Thirdly, the weights of the PPG signal of each sub-region are calculated based on the signal-to-noise ratio of each sub-region. Finally, we combine the IPPG signals from all the tracked ROIs using a weighted average. Compared with existing approaches, the results show that the proposed method has a modest but significant effect on improving the signal-to-noise ratio of the camera-based PPG estimate and the accuracy of heart rate measurement.
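
    The final combination step above is a weighted average of the per-region PPG traces, with weights derived from each region's signal-to-noise ratio. A minimal sketch of that step, assuming non-negative SNR estimates are already available:

        import numpy as np

        def combine_ippg(signals, snrs):
            """Weighted average of per-region PPG traces.
            `signals`: array of shape (regions, samples); `snrs`: per-region SNRs."""
            signals = np.asarray(signals, dtype=np.float64)
            weights = np.clip(np.asarray(snrs, dtype=np.float64), 0.0, None)
            weights /= weights.sum()
            return weights @ signals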

  17. Multiple Camera Person Tracking in Multiple Layers Combining 2D and 3D Information

    OpenAIRE

    Arsié, Dejan; Schuller, Björn; Rigoll, Gerhard

    2008-01-01

    CCTV systems have been introduced in most public spaces in order to increase security. Video outputs are observed by human operators where possible, but are mostly used as a forensic tool. It therefore seems desirable to automate video surveillance systems in order to detect potentially dangerous situations as soon as possible. Multi-camera systems seem to be a prerequisite for large spaces, where occlusions frequently appear. In this treatise we will present ...

  18. On-line video image processing system for real-time neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Fujine, S; Yoneda, K; Kanda, K [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.]

    1983-09-15

    The neutron radiography system installed at the E-2 experimental hole of the KUR (Kyoto University Reactor) has been used for some NDT applications in the nuclear field. The on-line video image processing system of this facility is introduced in this paper. A 0.5 mm resolution in images was obtained by using a super-high-quality TV camera, developed for X-radiography, viewing an NE-426 neutron-sensitive scintillator. The image of the NE-426 on a CRT can be observed directly and visually, so many test samples can be observed sequentially when necessary for industrial purposes. The video image signals from the TV camera are digitized, with a 33 ms delay, through a video A/D converter (ADC) and can be stored in the image buffer (32 KB DRAM) of a microcomputer (Z-80) system. The digitized pictures are taken with 16 levels of gray scale and resolved to 240 × 256 picture elements (pixels) on a monochrome CRT, with the capability also to display 16 distinct colors on an RGB video display. The direct image of this system could be satisfactory for penetrating the side plates to test MTR-type reactor fuels and for the investigation of moving objects.

  19. The effects of video game playing on attention, memory, and executive control.

    Science.gov (United States)

    Boot, Walter R; Kramer, Arthur F; Simons, Daniel J; Fabiani, Monica; Gratton, Gabriele

    2008-11-01

    Expert video game players often outperform non-players on measures of basic attention and performance. Such differences might result from exposure to video games or they might reflect other group differences between those people who do or do not play video games. Recent research has suggested a causal relationship between playing action video games and improvements in a variety of visual and attentional skills (e.g., [Green, C. S., & Bavelier, D. (2003). Action video game modifies visual selective attention. Nature, 423, 534-537]). The current research sought to replicate and extend these results by examining both expert/non-gamer differences and the effects of video game playing on tasks tapping a wider range of cognitive abilities, including attention, memory, and executive control. Non-gamers played 20+ h of an action video game, a puzzle game, or a real-time strategy game. Expert gamers and non-gamers differed on a number of basic cognitive skills: experts could track objects moving at greater speeds, better detected changes to objects stored in visual short-term memory, switched more quickly from one task to another, and mentally rotated objects more efficiently. Strikingly, extensive video game practice did not substantially enhance performance for non-gamers on most cognitive tasks, although they did improve somewhat in mental rotation performance. Our results suggest that at least some differences between video game experts and non-gamers in basic cognitive performance result either from far more extensive video game experience or from pre-existing group differences in abilities that result in a self-selection effect.

  20. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    Science.gov (United States)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or for model correlation and updating of larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing capacity without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation, optical flow), video camera based measurements have been successfully used for vibration measurement and subsequent modal analysis, based on techniques such as digital image correlation (DIC) and point tracking. However, these typically require a speckle pattern or high-contrast markers to be placed on the surface of structures, which poses challenges when the measurement area is large or inaccessible. This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviates the need for structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little