WorldWideScience

Sample records for video cameras installed

  1. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    Energy Technology Data Exchange (ETDEWEB)

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20-inch port of the Multiport Flange riser, which is to be installed on riser 5B of tank 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The details of the supporting engineering calculations are documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  2. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  3. Scanning laser video camera/ microscope

    Science.gov (United States)

    Wang, C. P.; Bow, R. T.

    1984-10-01

    A laser scanning system capable of scanning at standard video rate has been developed. The scanning mirrors, circuit design and system performance, as well as its applications to video cameras and ultra-violet microscopes, are discussed.

  4. Face identification in videos from mobile cameras

    OpenAIRE

    Mu, Meiru; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2014-01-01

    It is still challenging to recognize faces reliably in videos from mobile cameras, although mature automatic face recognition technology for still images has been available for quite some time. Suppose we want to be alerted when suspects appear in the recordings of a police body-cam; even a good face matcher for still images would give many false alarms due to the uncontrolled conditions. This paper presents an approach to identify faces in videos from mobile cameras. A commercial face matcher F...

  5. Installing Snowplow Cameras and Integrating Images into MnDOT's Traveler Information System

    Science.gov (United States)

    2017-10-01

    In 2015 and 2016, the Minnesota Department of Transportation (MnDOT) installed network video dash- and ceiling-mounted cameras on 226 snowplows, approximately one-quarter of MnDOT's total snowplow fleet. The cameras were integrated with the onboard m...

  6. Face identification in videos from mobile cameras

    NARCIS (Netherlands)

    Mu, Meiru; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2014-01-01

    It is still challenging to recognize faces reliably in videos from mobile cameras, although mature automatic face recognition technology for still images has been available for quite some time. Suppose we want to be alerted when suspects appear in the recordings of a police body-cam; even a good face

  7. Video inpainting under constrained camera motion.

    Science.gov (United States)

    Patwardhan, Kedar A; Sapiro, Guillermo; Bertalmío, Marcelo

    2007-02-01

    A framework for inpainting missing parts of a video sequence recorded with a moving or stationary camera is presented in this work. The region to be inpainted is general: it may be still or moving, in the background or in the foreground; it may occlude one object and be occluded by some other object. The algorithm consists of a simple preprocessing stage and two steps of video inpainting. In the preprocessing stage, we roughly segment each frame into foreground and background. We use this segmentation to build three image mosaics that help to produce time-consistent results and also improve the performance of the algorithm by reducing the search space. In the first video inpainting step, we reconstruct moving objects in the foreground that are "occluded" by the region to be inpainted. To this end, we fill the gap as much as possible by copying information from the moving foreground in other frames, using a priority-based scheme. In the second step, we inpaint the remaining hole with the background. To accomplish this, we first align the frames and directly copy when possible. The remaining pixels are filled in by extending spatial texture synthesis techniques to the spatiotemporal domain. The proposed framework has several advantages over state-of-the-art algorithms that deal with similar types of data and constraints. It permits some camera motion, is simple to implement and fast, does not require statistical models of either the foreground or the background, and works well in the presence of rich and cluttered backgrounds; the results show no visible blurring or motion artifacts. A number of real examples taken with a consumer hand-held camera are shown supporting these findings.
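
    A minimal sketch of the background-fill idea described above (align frames, build a background mosaic, copy it into the hole) is shown below. It is a reduced stand-in, not the authors' implementation: the camera is assumed roughly static so no alignment is needed, and `frames`/`masks` are hypothetical inputs.

```python
# Sketch of background inpainting: estimate a median background over time,
# ignoring hole pixels, then copy it into the region to be inpainted.
import numpy as np

def inpaint_background(frames, masks):
    """frames: list of HxWx3 uint8 images; masks: list of HxW bool hole masks."""
    stack = np.stack(frames).astype(np.float32)        # T x H x W x 3
    holes = np.stack(masks)[..., None]                 # T x H x W x 1
    # Exclude hole pixels from the background estimate via NaNs.
    stack[np.broadcast_to(holes, stack.shape)] = np.nan
    background = np.nanmedian(stack, axis=0)           # H x W x 3 mosaic
    # Pixels masked in every frame stay NaN; the paper handles those with
    # spatiotemporal texture synthesis instead of a direct copy.
    filled = []
    for frame, mask in zip(frames, masks):
        out = frame.copy()
        out[mask] = background[mask].astype(np.uint8)  # direct copy from mosaic
        filled.append(out)
    return filled
```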

  8. A single pixel camera video ophthalmoscope

    Science.gov (United States)

    Lochocki, B.; Gambin, A.; Manzanera, S.; Irles, E.; Tajahuerce, E.; Lancis, J.; Artal, P.

    2017-02-01

    There are several ophthalmic devices to image the retina, from fundus cameras capable of imaging the whole fundus to scanning ophthalmoscopes with photoreceptor resolution. Unfortunately, these devices are sensitive to a variety of ocular conditions, such as defocus and media opacities, which usually degrade the quality of the image. Here, we demonstrate a novel approach to image the retina in real time using a single pixel camera, which has the potential to circumvent those optical restrictions. The imaging procedure is as follows: a set of spatially coded patterns is projected rapidly onto the retina using a digital micromirror device. At the same time, the intensity corresponding to the inner product of the retinal image with each pattern is measured with a photomultiplier module. Subsequently, an image of the retina is reconstructed computationally. The obtained image resolution is up to 128 x 128 px at a real-time video frame rate of up to 11 fps. Experimental results obtained in an artificial eye confirm the tolerance against defocus compared to a conventional multi-pixel array based system. Furthermore, the use of multiplexed illumination offers an SNR improvement, leading to a lower illumination of the eye and hence an increase in patient comfort. In addition, the proposed system could enable imaging in wavelength ranges where cameras are not available.
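
    A minimal sketch of single-pixel reconstruction follows. Hadamard patterns are assumed as the coding basis (the abstract does not specify one): each photomultiplier reading is the inner product of the scene with one +/-1 pattern, and since H @ H.T = n*I for a Hadamard matrix, one transposed multiply recovers the scene.

```python
# Single-pixel imaging toy example with a Hadamard pattern set.
import numpy as np
from scipy.linalg import hadamard

n_side = 32                  # 32 x 32 px toy "retina"
n = n_side * n_side
H = hadamard(n)              # n x n matrix of +/-1 patterns (one per row)

scene = np.random.rand(n)    # stand-in for the true retinal image
y = H @ scene                # detector readings, one per projected pattern
recovered = (H.T @ y) / n    # exact inverse up to numerical noise

assert np.allclose(recovered, scene)
image = recovered.reshape(n_side, n_side)
```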

  9. VIDEO-BASED POINT CLOUD GENERATION USING MULTIPLE ACTION CAMERAS

    Directory of Open Access Journals (Sweden)

    T. Teo

    2015-05-01

    Due to the development of action cameras, the use of video technology for collecting geo-spatial data has become an important trend. The objective of this study is to compare the image mode and video mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations, while videos are taken along continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As action cameras usually have a large FOV in wide viewing mode, camera calibration plays an important role in correcting the effect of lens distortion before image matching. Once the cameras have been calibrated, the authors use them to take video in an indoor environment. The videos are further converted into multiple frame images based on the frame rates. To overcome time synchronization issues between videos from different viewpoints, an additional timer app is used to determine the time-shift factor between cameras for time alignment. A structure-from-motion (SfM) technique is utilized to obtain the image orientations. Then, the semi-global matching (SGM) algorithm is adopted to obtain dense 3D point clouds. The preliminary results indicated that the 3D points from 4K video are similar to those from 12 MP images, but the data acquisition performance of 4K video is more efficient than that of 12 MP digital images.
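
    A minimal sketch of the video conversion and alignment step: extract frames at a chosen interval while compensating a known time shift between two cameras. The shift would come from the timer app described above; the file names and the 0.48 s offset here are made up for illustration.

```python
# Extract time-aligned frames from two videos using timestamp-based seeking.
import cv2

def extract_frames(path, every_s=0.5, offset_s=0.0):
    cap = cv2.VideoCapture(path)
    frames, t = [], offset_s
    while True:
        cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000.0)  # seek by timestamp
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
        t += every_s
    cap.release()
    return frames

cam_a = extract_frames("cam_a.mp4", every_s=0.5, offset_s=0.0)
cam_b = extract_frames("cam_b.mp4", every_s=0.5, offset_s=0.48)  # measured shift
```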

  10. Camera Networks The Acquisition and Analysis of Videos over Wide Areas

    CERN Document Server

    Roy-Chowdhury, Amit K

    2012-01-01

    As networks of video cameras are installed in many applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process. Some of them are listed below: - Traditional computer vision challenges in tracking and recognition, robustness to pose, illumination, occlusion, clutter, recognition of objects, and activities; - Aggregating local information for wide

  11. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    Science.gov (United States)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  12. A comparison of camera trap and permanent recording video camera efficiency in wildlife underpasses.

    Science.gov (United States)

    Jumeau, Jonathan; Petrod, Lana; Handrich, Yves

    2017-09-01

    In the current context of biodiversity loss through habitat fragmentation, the effectiveness of wildlife crossings, installed at great expense as compensatory measures, is of vital importance for ecological and socio-economic actors. The evaluation of these structures is directly impacted by the efficiency of monitoring tools (camera traps…), which are used to assess the effectiveness of these crossings by observing the animals that use them. The aim of this study was to quantify the efficiency of camera traps in a wildlife crossing evaluation. Six permanent recording video systems sharing the same field of view as six Reconyx HC600 camera traps installed in three wildlife underpasses were used to assess the exact proportion of missed events (event being the presence of an animal within the field of view), and the error rate concerning underpass crossing behavior (defined as either Entry or Refusal). A sequence of photographs was triggered by either animals (true trigger) or artefacts (false trigger). We quantified the number of false triggers that had actually been caused by animals that were not visible on the images ("false" false triggers). Camera traps failed to record 43.6% of small mammal events (voles, mice, shrews, etc.) and 17% of medium-sized mammal events. The type of crossing behavior (Entry or Refusal) was incorrectly assessed in 40.1% of events, with a higher error rate for entries than for refusals. Among the 3.8% of false triggers, 85% of them were "false" false triggers. This study indicates a global underestimation of the effectiveness of wildlife crossings for small mammals. Means to improve the efficiency are discussed.

  13. Initial laboratory evaluation of color video cameras: Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an ongoing program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report, titled Initial Laboratory Evaluation of Color Video Cameras. That report briefly discusses imager chips, color cameras, and monitors; describes the camera selection; details traditional test parameters and procedures; and gives the results of evaluating 12 cameras. In Phase Two, reported here, we tested six additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase Two report details those newly developed test parameters and procedures and evaluates the results.

  14. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
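
    A minimal sketch of one piece of such a control model follows: mapping a camera's pan/tilt pointing angles onto pixel coordinates in an equirectangular spherical panorama of its viewspace, and back. The full model in the work above also covers zoom and lens effects; this reduced version is an assumption for illustration.

```python
# Map PTZ pointing angles to/from an equirectangular panorama of the viewspace.
import numpy as np

def pan_tilt_to_panorama(pan_deg, tilt_deg, pano_w, pano_h):
    """pan in [-180, 180) deg, tilt in [-90, 90] deg (0 = horizon)."""
    u = (pan_deg + 180.0) / 360.0 * pano_w   # azimuth -> column
    v = (90.0 - tilt_deg) / 180.0 * pano_h   # elevation -> row (top = zenith)
    return np.array([u, v])

def panorama_to_pan_tilt(u, v, pano_w, pano_h):
    """Inverse mapping: panorama pixel -> pan/tilt command for repositioning."""
    pan = u / pano_w * 360.0 - 180.0
    tilt = 90.0 - v / pano_h * 180.0
    return pan, tilt

print(pan_tilt_to_panorama(30.0, 10.0, 3600, 1800))   # -> [2100., 800.]
```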

  15. Camcorder 101: Buying and Using Video Cameras.

    Science.gov (United States)

    Catron, Louis E.

    1991-01-01

    Lists nine practical applications of camcorders to theater companies and programs. Discusses the purchase of video gear, camcorder features, accessories, the use of the camcorder in the classroom, theater management, student uses, and video production. (PRA)

  16. Analysis of unstructured video based on camera motion

    Science.gov (United States)

    Abdollahian, Golnaz; Delp, Edward J.

    2007-01-01

    Although considerable work has been done on the management of "structured" video such as movies, sports, and television programs, which have known scene structures, "unstructured" video analysis is still a challenging problem due to its unrestricted nature. The purpose of this paper is to address issues in the analysis of unstructured video, and in particular video shot by a typical unprofessional user (i.e., home video). We describe how one can make use of camera motion information for unstructured video analysis. A new concept, "camera viewing direction," is introduced as the building block of home video analysis. Motion displacement vectors are employed to temporally segment the video based on this concept. We then find the correspondence between the camera behavior and the subjective importance of the information in each segment, and describe how different patterns in the camera motion can indicate levels of interest in a particular object or scene. By extracting these patterns, the most representative frames, or keyframes, for the scenes are determined and aggregated to summarize the video sequence.
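
    A minimal sketch of extracting per-frame motion displacement vectors with phase correlation follows; this is the kind of signal that could drive the temporal segmentation described above, though the paper's own motion estimator may differ. The file name and the direction-change threshold are assumptions.

```python
# Estimate the dominant inter-frame translation and cut where its direction
# changes sharply, as a crude temporal segmentation cue.
import cv2
import numpy as np

def displacement_vectors(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = np.float32(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))
    shifts = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        (dx, dy), _ = cv2.phaseCorrelate(prev, gray)  # dominant translation
        shifts.append((dx, dy))
        prev = gray
    cap.release()
    return np.array(shifts)

shifts = displacement_vectors("home_video.mp4")
angles = np.arctan2(shifts[:, 1], shifts[:, 0])
cuts = np.where(np.abs(np.diff(np.unwrap(angles))) > np.pi / 4)[0]
```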

  17. Demonstrations of Optical Spectra with a Video Camera

    Science.gov (United States)

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…

  18. Controlled Impact Demonstration (CID) tail camera video

    Science.gov (United States)

    1984-01-01

    The Controlled Impact Demonstration (CID) was a joint research project by NASA and the FAA to test a survivable aircraft impact using a remotely piloted Boeing 720 aircraft. The tail camera movie is one shot running 27 seconds. It shows the impact from the perspective of a camera mounted high on the vertical stabilizer, looking forward over the fuselage and wings.

  19. Automatic Person Identification in Camera Video by Motion Correlation

    Directory of Open Access Journals (Sweden)

    Dingbo Duan

    2014-01-01

    Person identification plays an important role in the semantic analysis of video content. This paper presents a novel method to automatically label persons in video sequences captured by a fixed camera. Instead of leveraging traditional face recognition approaches, we deal with the task of person identification by fusing motion information from sensor platforms, such as smartphones, carried on human bodies with motion information extracted from camera video. More specifically, a sequence of motion features extracted from the camera video is compared with each of those collected from the accelerometers of the smartphones. When a strong correlation is detected, the identity information transmitted from the corresponding smartphone is used to identify the phone wearer. To test the feasibility and efficiency of the proposed method, extensive experiments were conducted, which achieved impressive performance.
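
    A minimal sketch of the matching step follows: correlate a motion signal extracted from video with the acceleration signal of each phone and label the person with the best-matching phone. The feature choice and the 0.5 threshold are simplified assumptions; the paper's features are more elaborate.

```python
# Match a video-derived motion signal against candidate accelerometer signals.
import numpy as np

def normalized_peak_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    corr = np.correlate(a, b, mode="full") / len(a)  # tolerate small offsets
    return corr.max()

def identify(video_motion, phone_signals):
    """phone_signals: dict mapping phone_id -> 1D acceleration magnitude."""
    scores = {pid: normalized_peak_correlation(video_motion, sig)
              for pid, sig in phone_signals.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0.5 else None  # threshold is an assumption
```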

  20. Improving photometric calibration of meteor video camera systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-09-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras, and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions about its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼0.05-0.10 mag in both filtered and unfiltered camera observations, with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
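
    A minimal sketch of a photometric zero-point fit of the kind described above: compare instrumental magnitudes of reference stars against their synthetic EX-bandpass magnitudes and take a robust average of the differences. The sigma-clipping scheme is an assumption, not necessarily the MEO's procedure.

```python
# Robust zero-point estimate from reference-star magnitude differences.
import numpy as np

def zero_point(instrumental_mag, synthetic_mag, clip_sigma=3.0):
    diff = np.asarray(synthetic_mag) - np.asarray(instrumental_mag)
    for _ in range(5):                       # iterative sigma clipping
        med, std = np.median(diff), np.std(diff)
        keep = np.abs(diff - med) < clip_sigma * std
        if keep.all():
            break
        diff = diff[keep]
    return np.median(diff), np.std(diff) / np.sqrt(len(diff))

# Calibrated meteor magnitude would then be: m = -2.5*log10(counts) + zp
```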

  1. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras, and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions about its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.

  2. Localization of cask and plug remote handling system in ITER using multiple video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, João, E-mail: jftferreira@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2013-10-15

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed in the building for remote operators. ► Fiducial markers glued or painted on the cask and plug remote handling system. ► Augmented reality content on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide feedback for the control system and also for human supervision. This paper proposes a localization system that uses the video streams captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streams and the libraries of the localization system. The proposed localization system was tested in a mock-up of the divertor level of the Tokamak building at a scale of 1:25.
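
    A minimal sketch of localization from fiducial markers follows, using OpenCV's ArUco module (4.7+ API) as a stand-in for whatever marker system the ITER setup adopts; the marker dictionary, calibration numbers, marker size, and file name are all assumptions. Each detected marker of known size yields a camera-relative pose via solvePnP.

```python
# Detect fiducial markers and recover their pose relative to a fixed camera.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # toy intrinsics
dist = np.zeros(5)
s = 0.25 / 2                      # half of a hypothetical 0.25 m marker side
obj = np.array([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]], np.float32)

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))
corners, ids, _ = detector.detectMarkers(cv2.imread("camera_view.png"))

if ids is not None:
    for marker_id, quad in zip(ids.ravel(), corners):
        ok, rvec, tvec = cv2.solvePnP(obj, quad.reshape(4, 2), K, dist)
        if ok:
            print(f"marker {marker_id}: {tvec.ravel()} m in the camera frame")
```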

  3. Geometrical modelling and calibration of video cameras for underwater navigation

    Energy Technology Data Exchange (ETDEWEB)

    Melen, T.

    1994-11-01

    Video cameras and other visual sensors can provide valuable navigation information for underwater remotely operated vehicles. This thesis relates to the geometric modelling and calibration of video cameras. To exploit the accuracy potential of a video camera, all systematic errors must be modelled and compensated for. The dissertation proposes a new geometric camera model, where linear image plane distortion (difference in scale and lack of orthogonality between the image axes) is compensated for after, and separately from, lens distortion. The new model can be viewed as an extension of the linear or DLT (Direct Linear Transformation) model and as a modification of the model traditionally used in photogrammetry. The new model can be calibrated from both planar and nonplanar calibration objects. The feasibility of the model is demonstrated in a typical camera calibration experiment, which indicates that the new model is more accurate than the traditional one. It also gives a simple solution to the problem of computing undistorted image coordinates from distorted ones. Further, the dissertation suggests how to obtain initial estimates for all the camera model parameters, how to select the number of parameters modelling lens distortion, and how to reduce the dimension of the search space in the nonlinear optimization. There is also a discussion on the use of analytical partial derivatives. The new model is particularly well suited for video images with non-square pixels, but it may also be used to advantage with professional photogrammetric equipment. 63 refs., 11 figs., 6 tabs.
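
    A minimal sketch of the two-stage correction the model separates: radial lens distortion applied in ideal image coordinates, followed by an independent linear image-plane transform (scale difference and shear between the axes). The coefficient values are illustrative, not taken from the thesis.

```python
# Project a 3D point: pinhole -> radial lens distortion -> linear plane transform.
import numpy as np

k1, k2 = -0.25, 0.08                  # radial distortion coefficients
fx, fy, skew = 980.0, 1010.0, 0.7     # non-square pixels, non-orthogonal axes
cx, cy = 640.0, 480.0

def project(xc):
    """xc: 3D point in camera coordinates -> distorted pixel coordinates."""
    x, y = xc[0] / xc[2], xc[1] / xc[2]          # ideal (undistorted) coords
    r2 = x * x + y * y
    d = 1 + k1 * r2 + k2 * r2 * r2               # radial lens distortion
    xd, yd = x * d, y * d
    # Linear image-plane distortion applied after, and separately from,
    # lens distortion -- the ordering the new model argues for.
    u = fx * xd + skew * yd + cx
    v = fy * yd + cy
    return np.array([u, v])

print(project(np.array([0.1, -0.05, 2.0])))
```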

  4. Court Reconstruction for Camera Calibration in Broadcast Basketball Videos.

    Science.gov (United States)

    Wen, Pei-Chih; Cheng, Wei-Chih; Wang, Yu-Shuen; Chu, Hung-Kuo; Tang, Nick C; Liao, Hong-Yuan Mark

    2016-05-01

    We introduce a technique for calibrating camera motion in basketball videos. In particular, our method transforms player positions to standard basketball court coordinates and enables applications such as tactical analysis and semantic basketball video retrieval. To achieve a robust calibration, we reconstruct the panoramic basketball court from a video, followed by warping the panoramic court to a standard one. As opposed to previous approaches, which individually detect the court lines and corners of each video frame, our technique considers all video frames simultaneously to achieve calibration; hence, it is robust to illumination changes and player occlusions. To demonstrate the feasibility of our technique, we present a stroke-based system that allows users to retrieve basketball videos. Our system tracks player trajectories from broadcast basketball videos. It then rectifies the trajectories to a standard basketball court by using our camera calibration method. Consequently, users can apply stroke queries to indicate how the players move in gameplay during retrieval. The main advantage of this interface is an explicit query of basketball videos so that unwanted outcomes can be prevented. We show the results in Figs. 1, 7, 9, 10 and our accompanying video to exhibit the feasibility of our technique.
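
    A minimal sketch of the final rectification step: once correspondences between the reconstructed court image and a standard court are known, a homography maps tracked player positions into court coordinates. The four point pairs below are placeholders for the paper's automatically derived correspondences.

```python
# Rectify tracked player positions to standard court coordinates.
import cv2
import numpy as np

# Pixel positions of four court landmarks in the panoramic court image ...
img_pts = np.float32([[102, 85], [1180, 90], [1215, 640], [75, 630]])
# ... and the same landmarks on a standard 28 m x 15 m court (metres).
court_pts = np.float32([[0, 0], [28, 0], [28, 15], [0, 15]])

H, _ = cv2.findHomography(img_pts, court_pts)

# Map tracked player positions (image pixels -> court metres).
players = np.float32([[[640, 360]], [[900, 500]]])     # shape (N, 1, 2)
court_xy = cv2.perspectiveTransform(players, H)
print(court_xy.reshape(-1, 2))
```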

  5. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structures play a substantial role in present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modelling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and the cameras themselves. This algorithm will subsequently be implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that takes account of their mutual positioning and compatibility of tasks. The project objective is to develop the principal elements of the algorithm of recognition of a moving object to be detected by several cameras. The images obtained by different cameras will be processed, and parameters of motion identified to develop a table of possible route options. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists in the assessment of the degree of complexity of an algorithm of camera placement designed for identification of cases of inaccurate algorithm implementation, as well as in the formulation of supplementary requirements and input data by means of intersecting sectors covered by neighbouring cameras. The project also contemplates identification of potential problems in the course of development of a physical security and monitoring system at the stage of project design, development and testing. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises that have irregular dimensions. The

  6. Ball lightning observation: an objective video-camera analysis report

    OpenAIRE

    Sello, Stefano; Viviani, Paolo; Paganini, Enrico

    2011-01-01

    In this paper we describe a video-camera recording of a (probable) ball lightning event and both the related image and signal analyses for its photometric and dynamical characterization. The results strongly support the BL nature of the recorded luminous ball object and allow the researchers to have an objective and unique video document of a possible BL event for further analyses. Some general evaluations of the obtained results considering the proposed ball lightning models conclude the paper.

  7. Real-Time Facial Expression Transfer with Single Video Camera

    OpenAIRE

    Liu, S.; Yang, Xiaosong; Wang, Z.; Xiao, Zhidong; Zhang, J.

    2016-01-01

    Facial expression transfer is currently an active research field. However, 2D image warping based methods suffer from depth ambiguity, and depth-based methods require specific hardware. We present a novel markerless, real-time online facial transfer method that requires only a single video camera. Our method adapts a model to user-specific facial data, computes expression variances in real time and rapidly transfers them to another target. Our method can be applied to videos w...

  8. Automated safety control by video cameras

    NARCIS (Netherlands)

    Lefter, I.; Rothkrantz, L.; Somhorst, M.

    2012-01-01

    At this moment many surveillance systems are installed in public domains to control the safety of people and properties. They are constantly watched by human operators who are easily overloaded. To support the human operators, a surveillance system model is designed that detects suspicious behaviour

  9. Solid-State Video Camera for the Accelerator Environment

    Energy Technology Data Exchange (ETDEWEB)

    Brown, R

    2004-05-27

    Solid-state video cameras employing CMOS technology have been developed and tested for several years in the SLAC accelerator, notably in the PEPII (BaBar) injection lines. They have proven much more robust than their CCD counterparts in radiation areas. Repair is simple, inexpensive, and generates very little radioactive waste.

  10. CameraCast: flexible access to remote video sensors

    Science.gov (United States)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.

  11. Teacher training for using digital video camera in primary education

    Directory of Open Access Journals (Sweden)

    Pablo García Sempere

    2011-12-01

    This paper shows the partial results of a research project carried out in primary schools, which evaluates the ability of teachers in the use of the digital video camera. The study took place in the province of Granada, Spain. Our purpose was to know the level of knowledge, interest, difficulties and training needs, so as to improve teaching practice. The work has been done from a descriptive and eclectic approach. Quantitative (questionnaire) and qualitative (focus group) techniques have been used in this research. The information obtained shows that most of the teachers lack knowledge in the use of the video camera and digital editing. On the other hand, the majority agree to include initial and permanent training on this subject. Finally, the most important conclusions are presented.

  12. A method to synchronise video cameras using the audio band.

    Science.gov (United States)

    Leite de Barros, Ricardo Machado; Guedes Russomanno, Tiago; Brenzikofer, René; Jovino Figueroa, Pascual

    2006-01-01

    This paper proposes and evaluates a novel method for the synchronisation of video cameras using the audio band. The method consists in generating and transmitting an audio signal through radio frequency to receivers connected to the microphone input of the cameras, inserting the signal in the audio band. In a software environment, the phase differences among the video signals are calculated and used to interpolate the synchronous 2D projections of the trajectories. The validation of the method was based on: (1) analysis of the phase difference changes as a function of time between two video signals; (2) comparison between the values measured with an oscilloscope and by the proposed method; (3) estimation of the improvement in accuracy of measurements of the distance between two markers mounted on a rigid body during movement when applying the method. The results showed that the phase difference changes slowly (0.150 ms/min) and linearly in time, even when cameras of the same model are used. The values measured by the proposed method and by the oscilloscope were equivalent (R2 = 0.998); the root mean square of the difference between the measurements was 0.10 ms and the maximum difference found was 0.31 ms. Applying the new method, the accuracy of the 3D reconstruction showed a statistically significant improvement. The accuracy, simplicity and wide applicability of the proposed method constitute the main contributions of this work.
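
    A minimal sketch of the interpolation step: once the phase difference between two cameras is known, one camera's 2D marker trajectory is resampled onto the other camera's frame times, yielding synchronous projections for 3D reconstruction. The frame rate and phase values below are illustrative only.

```python
# Resample camera B's trajectory onto camera A's time base using a measured
# phase difference between the two video signals.
import numpy as np

fps = 100.0
phase_s = 0.0043                  # measured phase difference, camera B vs A
n = 500
t_a = np.arange(n) / fps          # camera A exposure times
t_b = t_a + phase_s               # camera B actually sampled at these times

traj_b = np.random.rand(n, 2)     # stand-in for camera B's 2D marker track

traj_b_sync = np.column_stack([
    np.interp(t_a, t_b, traj_b[:, 0]),
    np.interp(t_a, t_b, traj_b[:, 1]),
])
```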

  13. Machine vision: recent advances in CCD video camera technology

    Science.gov (United States)

    Easton, Richard A.; Hamilton, Ronald J.

    1997-09-01

    This paper describes four state-of-the-art digital video cameras, which provide advanced features that benefit computer image enhancement, manipulation, and analysis. These cameras were designed to reduce the complexity of imaging systems while increasing the accuracy, dynamic range, and detail enhancement of product inspections. Two cameras utilize progressive scan CCD sensors, enabling the capture of high-resolution images of moving objects without the need for strobe lights or mechanical shutters. The second progressive scan camera has an unusually high resolution of 1280 by 1024 and a choice of serial or parallel digital interface for data and control. The other two cameras incorporate digital signal processing (DSP) technology for improved dynamic range, more accurate determination of color, white balance stability, and enhanced contrast of part features against the background. Successful applications and future product development trends are discussed. A brief description of analog and digital image capture devices addresses the most common questions regarding interface requirements within a typical machine vision system overview.

  14. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    Science.gov (United States)

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department. This was an attempt to record the real point of view of the surgeon's magnified vision, so as to make the viewer aware of the difference from the naked-eye view. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with a GoPro® 4 Session action cam (commercially available) and ten with our new prototype of a head mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro® and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functions is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees better results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon, and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  15. Outdoor Markerless Motion Capture With Sparse Handheld Video Cameras.

    Science.gov (United States)

    Wang, Yangang; Liu, Yebin; Tong, Xin; Dai, Qionghai; Tan, Ping

    2017-04-12

    We present a method for outdoor markerless motion capture with sparse handheld video cameras. In the simplest setting, it involves only two mobile phone cameras following the character. This setup can maximize the flexibility of data capture and broaden the applications of motion capture. To solve for the character pose under such challenging settings, we exploit generative motion capture methods and propose a novel model-view consistency that considers both foreground and background in the tracking stage. The background is modeled as a deformable 2D grid, which allows us to compute the background-view consistency for sparse moving cameras. The 3D character pose is tracked with a global-local optimization that minimizes our consistency cost. A novel L1 motion regularizer is also proposed in the optimization to constrain the solution pose space. The whole process of the proposed method is simple, as frame-by-frame video segmentation is not required. Our method outperforms several alternative methods on various examples demonstrated in the paper.

  16. Simultaneous Camera Path Optimization and Distraction Removal for Improving Amateur Video.

    Science.gov (United States)

    Zhang, Fang-Lue; Wang, Jue; Zhao, Han; Martin, Ralph R; Hu, Shi-Min

    2015-12-01

    A major difference between amateur and professional video lies in the quality of camera paths. Previous work on video stabilization has considered how to improve amateur video by smoothing the camera path. In this paper, we show that additional changes to the camera path can further improve video aesthetics. Our new optimization method achieves multiple simultaneous goals: 1) stabilizing video content over short time scales; 2) ensuring simple and consistent camera paths over longer time scales; and 3) improving scene composition by automatically removing distractions, a common occurrence in amateur video. Our approach uses an L1 camera path optimization framework, extended to handle multiple constraints. Two passes of optimization are used to address both low-level and high-level constraints on the camera path. The experimental and user study results show that our approach outputs video that is perceptually better than the input, or than the results of using stabilization only.

  17. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    Science.gov (United States)

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  18. Scientists Behind the Camera - Increasing Video Documentation in the Field

    Science.gov (United States)

    Thomson, S.; Wolfe, J.

    2013-12-01

    Over the last two years, Skypunch Creative has designed and implemented a number of pilot projects to increase the amount of video captured by scientists in the field. The major barrier to success that we tackled with the pilot projects was the conflicting demands of the time, space, storage needs of scientists in the field and the demands of shooting high quality video. Our pilots involved providing scientists with equipment, varying levels of instruction on shooting in the field and post-production resources (editing and motion graphics). In each project, the scientific team was provided with cameras (or additional equipment if they owned their own), tripods, and sometimes sound equipment, as well as an external hard drive to return the footage to us. Upon receiving the footage we professionally filmed follow-up interviews and created animations and motion graphics to illustrate their points. We also helped with the distribution of the final product (http://climatescience.tv/2012/05/the-story-of-a-flying-hippo-the-hiaper-pole-to-pole-observation-project/ and http://climatescience.tv/2013/01/bogged-down-in-alaska/). The pilot projects were a success. Most of the scientists returned asking for additional gear and support for future field work. Moving out of the pilot phase, to continue the project, we have produced a 14 page guide for scientists shooting in the field based on lessons learned - it contains key tips and best practice techniques for shooting high quality footage in the field. We have also expanded the project and are now testing the use of video cameras that can be synced with sensors so that the footage is useful both scientifically and artistically. Extract from A Scientist's Guide to Shooting Video in the Field

  19. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    Science.gov (United States)

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children have been viewed as particularly suited to this aim because cameras have been considered easy and…

  20. Video astronomy on the go using video cameras with small telescopes

    CERN Document Server

    Ashley, Joseph

    2017-01-01

    Author Joseph Ashley explains video astronomy's many benefits in this comprehensive reference guide for amateurs. Video astronomy offers a wonderful way to see objects in far greater detail than is possible through an eyepiece, and the ability to use the modern, entry-level video camera to image deep space objects is a wonderful development for urban astronomers in particular, as it helps sidestep the issue of light pollution. The author addresses both the positive attributes of these cameras for deep space imaging as well as the limitations, such as amp glow. The equipment needed for imaging as well as how it is configured is identified with hook-up diagrams and photographs. Imaging techniques are discussed together with image processing (stacking and image enhancement). Video astronomy has evolved to offer great results and great ease of use, and both novices and more experienced amateurs can use this book to find the set-up that works best for them. Flexible and portable, they open up a whole new way...

  1. Underwater video enhancement using multi-camera super-resolution

    Science.gov (United States)

    Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.

    2017-12-01

    Image spatial resolution is critical in several fields, such as medicine, communications, satellite imaging, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that makes it possible to enhance the quality of underwater video sequences without significantly increasing computation. To compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
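
    A minimal sketch of the evaluation step: computing PSNR and SSIM between an enhanced frame and a reference frame with scikit-image. The file names are placeholders; the paper's own evaluation pipeline is not described in the abstract.

```python
# Compute the two objective quality metrics named above for a frame pair.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = io.imread("reference_frame.png")
enhanced = io.imread("enhanced_frame.png")

psnr = peak_signal_noise_ratio(reference, enhanced)
# channel_axis=-1 treats the last axis as color (scikit-image >= 0.19).
ssim = structural_similarity(reference, enhanced, channel_axis=-1)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```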

  2. Frequency Identification of Vibration Signals Using Video Camera Image Data

    Directory of Open Access Journals (Sweden)

    Chia-Hung Wu

    2012-10-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture the most dominant modes of a vibration signal, but may involve non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera, for instance 0 to 256 levels, was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibrating system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency of 60 Hz for inducing false modes, whereas that of the webcam is 7.8 Hz. Several factors were proven to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes are below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
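
    A minimal sketch of the standard aliasing model behind such non-physical modes: a vibration at f_true sampled at frame rate fs appears at the folded frequency computed below. The values are illustrative; the study's own prediction model may be more detailed.

```python
# Predict the apparent (aliased) frequency of an undersampled vibration tone.
def aliased_frequency(f_true, fs):
    """Apparent frequency of a tone at f_true when sampled at fs (Hz)."""
    f = f_true % fs
    return min(f, fs - f)        # fold into the Nyquist band [0, fs/2]

# A 70 Hz vibration filmed by a 30 fps webcam shows up as a 10 Hz mode:
print(aliased_frequency(70.0, 30.0))   # -> 10.0
```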

  3. QHY (5L-II-M) CCD camera for video meteor observation

    Science.gov (United States)

    Korec, M.

    2015-01-01

    A new digital camera and lens has been tested for video meteor observing. A Tamron M13VG308 lens combined with a QHY 5L-II-M digital camera proved to be the best combination. Test observations have shown this to be superior to the best analog Watec 902H2 Ultimate camera.

  4. Contact freezing observed with a high speed video camera

    Science.gov (United States)

    Hoffmann, Nadine; Koch, Michael; Kiselev, Alexei; Leisner, Thomas

    2017-04-01

    Freezing of supercooled cloud droplets on collision with an ice nucleating particle (INP) has been considered one of the most effective heterogeneous freezing mechanisms. Potentially, it could play an important role in the rapid glaciation of a mixed-phase cloud, especially if coupled with an ice multiplication mechanism active at moderate subzero temperatures. The necessary condition for such coupling would be, among others, the presence of very efficient INPs capable of inducing ice nucleation of supercooled drizzle droplets in the temperature range of -5°C to -20°C. Some mineral dust particles (K-feldspar) and biogenic INPs (Pseudomonas bacteria, birch pollen) have recently been identified as such very efficient INPs. However, when observed with a high-speed video (HSV) camera, the contact nucleation induced by these two classes of INPs exhibits very different behavior. Whereas bacterial INPs can induce freezing within a millisecond after initial contact with supercooled water, birch pollen needs much more time to initiate freezing. The mineral dust particles seem to induce ice nucleation faster than birch pollen but slower than bacterial INPs. In this contribution we show HSV records of individual supercooled droplets suspended in an electrodynamic balance and colliding with airborne INPs of various types. The HSV camera is coupled with a long-working-distance microscope, allowing us to observe the contact nucleation of ice at very high spatial and temporal resolution. The average time needed to initiate freezing has been measured for each INP species. This time does not necessarily correlate with the contact freezing efficiency of the ice nucleating particles. We discuss possible mechanisms explaining this behavior and potential implications for future ice nucleation research.

  5. Simultaneous monitoring of a collapsing landslide with video cameras

    Directory of Open Access Journals (Sweden)

    K. Fujisawa

    2008-01-01

    Effective countermeasures and risk management to reduce landslide hazards require a full understanding of the processes of collapsing landslides. While these processes are generally estimated from the features of debris deposits after collapse, simultaneous monitoring during collapse provides more insights. Such monitoring, however, is usually very difficult, because it is rarely possible to predict when a collapse will occur. This study introduces a rare case in which a collapsing landslide (150 m in width and 135 m in height) was filmed with three video cameras in Higashi-Yokoyama, Gifu Prefecture, Japan. The cameras were set up in front and on the right and left sides of the slide in May 2006, one month after a series of small slope failures at the toe and the formation of cracks on the head indicated that a collapse was imminent.

    The filmed images showed that the landslide collapse started from rock falls and slope failures occurring mainly around the margin, that is, the head, sides and toe. These rock falls and slope failures, which were individually counted on the screen, increased with time. Analyzing the images, five of the failures were estimated to have each produced more than 1000 m3 of debris, and the landslide collapsed with several surface failures accompanied by a toppling movement. The manner of the collapse suggested that the slip surface initially remained on the upper slope, and then extended down the slope as the excessive internal stress shifted downwards. Image analysis, together with field measurements using a ground-based laser scanner after the collapse, indicated that the landslide produced a total of 50 000 m3 of debris.

    As described above, simultaneous monitoring provides valuable information about landslide processes. Further development of monitoring techniques will help clarify landslide processes qualitatively as well as quantitatively.

  6. Automatic Level Control for Video Cameras towards HDR Techniques

    Directory of Open Access Journals (Sweden)

    de With, Peter H.N.

    2010-01-01

    We give a comprehensive overview of the complete exposure processing chain for video cameras. For each step of the automatic exposure algorithm we discuss some classical solutions and propose improvements or new alternatives. We start by explaining exposure metering methods, describing the types of signals that are used as scene content descriptors as well as means to utilize these descriptors. We also discuss different exposure control types used for the control of the lens, the integration time of the sensor, and the gain, such as PID control and precalculated control based on the camera response function, and propose a new recursive control type that matches the underlying image formation model. Then, a description of the commonly used serial control strategy for lens, sensor exposure time, and gain is presented, followed by a proposal of a new parallel control solution that integrates well with the tone mapping and enhancement part of the image pipeline. The parallel control strategy enables faster and smoother control and facilitates optimally filling the dynamic range of the sensor to improve the SNR and image contrast, while avoiding signal clipping. This is achieved by the proposed special control modes used for better display and correct exposure of both low-dynamic-range and high-dynamic-range images. To overcome the inherent problems of the limited dynamic range of capturing devices we discuss a paradigm of multiple exposure techniques. Using these techniques we can enable correct rendering of a difficult class of high-dynamic-range input scenes. However, multiple exposure techniques bring several challenges, especially in the presence of motion and artificial light sources such as fluorescent lights. In particular, false colors and light-flickering problems are described. After briefly discussing some known possible solutions for the motion problem, we focus on solving the fluorescent-light problem. Thereby, we propose an algorithm for
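
    A minimal sketch of a recursive exposure control loop of the kind discussed above: because sensor response is (after linearity correction) roughly proportional to exposure, scaling exposure by the ratio of target to measured luminance matches the image formation model. The target, damping factor, and exposure limits are illustrative assumptions.

```python
# Model-based recursive auto-exposure update for one frame.
import numpy as np

TARGET = 0.45          # desired mean luminance (normalized 0..1)
DAMPING = 0.7          # fraction of the correction applied per frame

def update_exposure(exposure, frame):
    """frame: HxW array of normalized luminance for the current image."""
    measured = float(np.clip(frame.mean(), 1e-4, 1.0))
    ratio = TARGET / measured                 # model-based correction factor
    step = ratio ** DAMPING                   # damped to avoid oscillation
    return float(np.clip(exposure * step, 1e-5, 1e-1))  # exposure time in s
```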

  7. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    Directory of Open Access Journals (Sweden)

    Semi Jeon

    2017-02-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts from a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems.

  8. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    Science.gov (United States)

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622

  9. Development of a 3D Flash LADAR Video Camera for Entry, Descent and Landing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced Scientific Concepts, Inc. (ASC) has developed a 128 x 128 frame, 3D Flash LADAR video camera capable of a 30 Hz frame rate. Because Flash LADAR captures an...

  10. Development of a 3D Flash LADAR Video Camera for Entry, Descent, and Landing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced Scientific Concepts, Inc. (ASC) has developed a 128 x 128 frame, 3D Flash LADAR video camera which produces 3-D point clouds at 30 Hz. Flash LADAR captures...

  11. Feasibility of Using Video Camera for Automated Enforcement on Red-Light Running and Managed Lanes.

    Science.gov (United States)

    2009-12-25

    The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and HOV occupancy requirement using video cameras in Nevada. This objective was a...

  12. Using a Video Camera to Measure the Radius of the Earth

    Science.gov (United States)

    Carroll, Joshua; Hughes, Stephen

    2013-01-01

    A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…

  13. Performance evaluation of a two detector camera for real-time video

    NARCIS (Netherlands)

    Lochocki, Benjamin; Gambín-regadera, Adrián; Artal, Pablo

    2016-01-01

    Single pixel imaging can be the preferred method over traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single pixel imaging cannot compete with the framerates of conventional cameras, especially when

  14. Still-Video Photography: Tomorrow's Electronic Cameras in the Hands of Today's Photojournalists.

    Science.gov (United States)

    Foss, Kurt; Kahan, Robert S.

    This paper examines the still-video camera and its potential impact by looking at recent experiments and by gathering information from some of the few people knowledgeable about the new technology. The paper briefly traces the evolution of the tools and processes of still-video photography, examining how photographers and their work have been…

  15. Digital video technology and production 101: lights, camera, action.

    Science.gov (United States)

    Elliot, Diane L; Goldberg, Linn; Goldberg, Michael J

    2014-01-01

    Videos are powerful tools for enhancing the reach and effectiveness of health promotion programs. They can be used for program promotion and recruitment, for training program implementation staff/volunteers, and as elements of an intervention. Although certain brief videos may be produced without technical assistance, others often require collaboration and contracting with professional videographers. To get practitioners started and to facilitate interactions with professional videographers, this Tool includes a guide to the jargon of video production and suggestions for how to integrate videos into health education and promotion work. For each type of video, production principles and issues to consider when working with a professional videographer are provided. The Tool also includes links to examples in each category of video applications to health promotion.

  16. Attention and Distraction: On the Aesthetic Experience of Video Installation Art

    Directory of Open Access Journals (Sweden)

    Petersen, Anne Ring

    2010-10-01

    Full Text Available This article aims to examine the interrelationship between attention and distraction in the reception of video installation art, a genre which is commonly associated with "immersion" and an intensified feeling of presence in the discourses on new media art and installation art. This tends to veil the fact that the behaviour of many visitors is characterised by a certain restlessness and distraction. The article suggests that, in contradistinction to traditional disciplines of art like painting and sculpture, video installations seem to stimulate a "reception in distraction" (Walter Benjamin that is at odds with the ideal of a reception in concentration that governs the institutions of fine art as well as aesthetic theory. It intends to demonstrate how the experience of video installation art can only be understood by recognising that the close connections between, on the one hand, video art and, on the other hand, the cultural formations of television, film and computers have fundamentally re-configured "aesthetic experience."

  17. Video Installation, Memory and Storytelling: the viewer as narrator

    Directory of Open Access Journals (Sweden)

    Diane Charleson

    2011-05-01

    Full Text Available

    Abstract: Much has been written about memory and its link with the visual, where memory is likened to our recollection of vignettes or visual traces. Conway (1999) tells us that the brain takes in experience as word and image. Gibson (2002) suggests that “imagistic cognition” is a process whereby we run image sequences through our heads while trying to make sense of experience. He links this psychological phenomenon with notions of film editing theory and practice. He goes on to suggest that the power of the cinema is linked to this primal experience of remembering that elicits the intense pleasures of childhood and access to a means of navigating the self. This paper will explore the role video installation can play in creating an open, enticing, non-threatening and immersive environment, where viewers can transcend the everyday, reflect on their own memories and recall their personal stories. I will argue that there is a symbiotic link between what I will call the viewer as flâneur and the producer of the work such that a new form of storytelling can be created through this relationship.

    Résumé (translated from French): An abundant literature exists on the links between memory and the image, notably concerning the processing of visual traces by the functions of memory. Conway (1999) insists for his part on the fact that the

  18. A unified framework for capturing facial images in video surveillance systems using cooperative camera system

    Science.gov (United States)

    Chan, Fai; Moon, Yiu-Sang; Chen, Jiansheng; Ma, Yiu-Kwan; Tsang, Wai-Hung; Fu, Kah-Kuen

    2008-04-01

    Low-resolution and unsharp facial images are always captured from surveillance videos because of long human-camera distances and human movement. Previous works addressed this problem by using an active camera to capture close-up facial images, without considering human movements and the mechanical delays of the active camera. In this paper, we propose a unified framework to capture facial images in video surveillance systems by using one static and one active camera in a cooperative manner. Human faces are first located by a skin-color-based real-time face detection algorithm. A stereo camera model is also employed to approximate the human face location and velocity with respect to the active camera. Given the mechanical delays of the active camera, the position of a target face after a given delay can be estimated using a Human-Camera Synchronization Model. By controlling the active camera with the corresponding amount of pan, tilt, and zoom, a clear close-up facial image of a moving human can then be captured. We built the proposed system in an 8.4-meter indoor corridor. Results show that the proposed stereo camera configuration can locate faces with an average error of 3%. In addition, it is capable of capturing facial images of a walking human clearly on the first attempt in 90% of the test cases.
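
    The delay-compensation idea can be illustrated with a constant-velocity predictor: given the face position and velocity estimated by the stereo model, the active camera is pointed at where the face will be once its mechanical delay has elapsed. A minimal sketch under that assumption (the paper's Human-Camera Synchronization Model is more elaborate):

        def predict_face_position(pos, vel, delay):
            # Constant-velocity prediction of the face position after the
            # active camera's mechanical delay (seconds); pos and vel are
            # (x, y) in metres and metres/second.
            return tuple(p + v * delay for p, v in zip(pos, vel))

        # e.g. a face at (2.0, 1.5) m walking at 1.2 m/s along the corridor,
        # with an assumed 0.4 s pan/tilt delay:
        x, y = predict_face_position((2.0, 1.5), (1.2, 0.0), 0.4)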

  19. A Video Camera Road Sign System of the Early Warning from Collision with the Wild Animals

    Directory of Open Access Journals (Sweden)

    Matuska Slavomir

    2016-05-01

    Full Text Available This paper proposes a camera road-sign system for early warning that can help avoid vehicle collisions with wild animals. The system consists of camera modules placed along a chosen route and intelligent road signs. Each camera module consists of a camera device and a computing unit. The computing unit captures the video stream from the camera and runs object detection algorithms. Machine learning algorithms are then used to classify the moving objects. If a moving object is classified as an animal that could endanger vehicle safety, a warning is displayed on the intelligent road signs.
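
    A hedged sketch of the camera-module pipeline just described, using OpenCV background subtraction for the detection stage; classify_animal is a placeholder for the machine-learning classifier, and the blob-size threshold is an assumption.

        import cv2

        subtractor = cv2.createBackgroundSubtractorMOG2()

        def process_frame(frame, classify_animal):
            # Detect moving objects against the learned background model.
            mask = subtractor.apply(frame)
            mask = cv2.medianBlur(mask, 5)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            warnings = []
            for c in contours:
                x, y, w, h = cv2.boundingRect(c)
                if w * h < 500:                    # ignore small blobs (assumed)
                    continue
                if classify_animal(frame[y:y+h, x:x+w]):
                    warnings.append((x, y, w, h))  # would trigger the road sign
            return warnings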

  20. Interactive video installations in public spaces: Rafael Lozano-Hemmer's Under Scan

    OpenAIRE

    Papadaki, Elena

    2015-01-01

    Under Scan, described as an ‘interactive video art installation for public space’, was presented in Trafalgar Square, a tourist attraction in central London, from 15 to 23 November 2008 as part of the Relational Architecture series by the artist Rafael Lozano-Hemmer. Apart from its credentials as the largest interactive video installation and the longest-running event to be presented in Trafalgar Square (Vanagan, 2009, p. 86), it constitutes an interesting case study for this chapter in order...

  1. video115_0403 -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  2. video114_0402c -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  3. video114_0402b -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  4. Quality Analysis of Massive High-Definition Video Streaming in Two-Tiered Embedded Camera-Sensing Systems

    OpenAIRE

    Joongheon Kim; Eun-Seok Ryu

    2014-01-01

    This paper presents the quality analysis results of high-definition video streaming in two-tiered camera sensor network applications. In the camera-sensing system, multiple cameras sense visual scenes in their target fields and transmit the video streams via IEEE 802.15.3c multigigabit wireless links. However, the wireless transmission introduces interferences to the other links. This paper analyzes the capacity degradation due to the interference impacts from the camera-sensing nodes to the ...

  5. Passive millimeter-wave video camera for aviation applications

    Science.gov (United States)

    Fornaca, Steven W.; Shoucri, Merit; Yujiri, Larry

    1998-07-01

    Passive Millimeter Wave (PMMW) imaging technology offers significant safety benefits to world aviation. Made possible by recent technological breakthroughs, PMMW imaging sensors provide visual-like images of objects under low visibility conditions (e.g., fog, clouds, snow, sandstorms, and smoke) which blind visual and infrared sensors. TRW has developed an advanced, demonstrator version of a PMMW imaging camera that, when front-mounted on an aircraft, gives images of the forward scene at a rate and quality sufficient to enhance aircrew vision and situational awareness under low visibility conditions. Potential aviation uses for a PMMW camera are numerous and include: (1) Enhanced vision for autonomous take- off, landing, and surface operations in Category III weather on Category I and non-precision runways; (2) Enhanced situational awareness during initial and final approach, including Controlled Flight Into Terrain (CFIT) mitigation; (3) Ground traffic control in low visibility; (4) Enhanced airport security. TRW leads a consortium which began flight tests with the demonstration PMMW camera in September 1997. Flight testing will continue in 1998. We discuss the characteristics of PMMW images, the current state of the technology, the integration of the camera with other flight avionics to form an enhanced vision system, and other aviation applications.

  6. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    Science.gov (United States)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  7. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  8. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    Science.gov (United States)

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  9. GoPro Hero Cameras for Creation of a Three-Dimensional, Educational, Neurointerventional Video.

    Science.gov (United States)

    Park, Min S; Brock, Andrea; Mortimer, Vance; Taussky, Philipp; Couldwell, William T; Quigley, Edward

    2017-10-01

    Neurointerventional education relies on an apprenticeship model, with the trainee observing and participating in procedures with the guidance of a mentor. While educational videos are becoming prevalent in surgical cases, there is a dearth of comparable educational material for trainees in neurointerventional programs. We sought to create a high-quality, three-dimensional video of a routine diagnostic cerebral angiogram for use as an educational tool. A diagnostic cerebral angiogram was recorded using two GoPro HERO 3+ cameras with the Dual HERO System to capture the proceduralist's hands during the case. This video was edited with recordings from the video monitors to create a real-time three-dimensional video of both the actions of the neurointerventionalist and the resulting wire/catheter movements. The final edited video, in either two or three dimensions, can serve as another instructional tool for the training of residents and/or fellows. Additional videos can be created in a similar fashion of more complicated neurointerventional cases. The GoPro HERO 3+ camera and Dual HERO System can be used to create educational videos of neurointerventional procedures.

  10. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    Science.gov (United States)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  11. [Electro-mechanic steering device for head-lamp mounted miniature video cameras].

    Science.gov (United States)

    Ilgner, J; Westhofen, M

    2003-05-01

    Endoscopic or microscopic video recordings are a widely established standard for the medico-legal documentation of operative procedures. In addition, they are an essential part of undergraduate as well as postgraduate medical education. Macroscopic operations in the head and neck can be recorded by miniaturised video cameras attached to the surgeon's head lamp. The authors present an electro-mechanic steering device designed to overcome the parallax error created with a head-mounted video camera, especially as the distance of the camera to the operative field varies. The device can be operated by the theatre staff, while the sterility of the operative field is maintained and the surgeon's physical working range remains unrestricted. As the video image is reliably centred on the operative field throughout the procedure, spectators who are unfamiliar with the surgical steps gain a better orientation and understanding. While other factors adverse to macroscopic head-mounted video recordings, such as involuntary head movements of the surgeon, remain unchanged, the device contributes to a higher quality of video documentation as it relieves the surgeon from adjusting the image field to the regions of interest. Additional benefit could be derived from an auto-focus feature or from image stabilising devices.

  12. Online coupled camera pose estimation and dense reconstruction from video

    Science.gov (United States)

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to it. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update the 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update.
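
    The "consistent projection transformation" step amounts to estimating the camera pose from 2D-3D correspondences while rejecting wrong matches. One standard way to realize it, offered here only as a hedged illustration and not as the patented method, is RANSAC-based perspective-n-point estimation with OpenCV:

        import numpy as np
        import cv2

        def estimate_pose(model_pts, image_pts, K):
            # model_pts: (n, 3) model feature points; image_pts: (n, 2)
            # candidate matches in the image; K: 3x3 intrinsic matrix.
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(
                np.asarray(model_pts, dtype=np.float32),
                np.asarray(image_pts, dtype=np.float32),
                K, None, reprojectionError=3.0)
            return (rvec, tvec, inliers) if ok else None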

  13. The large Debye-Scherrer camera installed at SPring-8 BL02B2 for charge density studies

    CERN Document Server

    Nishibori, E; Kato, K; Sakata, M; Kubota, Y; Aoyagi, S; Kuroiwa, Y; Yamakata, M; Ikeda, N

    2001-01-01

    The design and performance of a large Debye-Scherrer camera with an imaging plate (IP) as detector, which was very recently installed at SPring-8 BL02B2, is reported. By taking advantage of the high beam quality of SPring-8, the camera enables rapid collection of powder patterns with high counting statistics and high angular resolution, which can lead to accurate structure analyses. The camera also provides easy access to structural changes at temperatures between 15 and 1000 K. Overall, it provides a rapid and accurate powder diffraction system utilizing third-generation synchrotron radiation.

  14. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    Science.gov (United States)

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
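
    The spinning-wheel illusion follows directly from the theorem: the camera samples the wheel at the frame rate, so the perceived rotation rate is the true rate folded into the Nyquist band. A small worked example (frame rate and wheel speed are illustrative values):

        def apparent_rotation_hz(true_hz, frame_hz=30.0):
            # Fold the true rotation rate into [-frame_hz/2, +frame_hz/2];
            # a negative result means apparent backward rotation.
            return (true_hz + frame_hz / 2.0) % frame_hz - frame_hz / 2.0

        # A wheel spinning at 29 Hz filmed at 30 frames per second appears
        # to rotate backwards at 1 Hz:
        assert apparent_rotation_hz(29.0) == -1.0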

  15. Optimization of radiation sensors for a passive terahertz video camera for security applications

    NARCIS (Netherlands)

    Zieger, G.J.M.

    2014-01-01

    A passive terahertz video camera allows for fast security screenings from distances of several meters. It avoids irradiation or the impressions of nakedness, which oftentimes cause embarrassment and trepidation of the concerned persons. This work describes the optimization of highly sensitive

  16. Video content analysis on body-worn cameras for retrospective investigation

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; ter Haar, Frank B.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Burghouts, Gertjan J.; Wijn, Remco; van den Broek, Sebastiaan P.; van Rest, Jeroen H. C.

    2015-10-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals' memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.

  17. The effects of camera jitter for background subtraction algorithms on fused infrared-visible video streams

    Science.gov (United States)

    Becker, Stefan; Scherer-Negenborn, Norbert; Thakkar, Pooja; Hübner, Wolfgang; Arens, Michael

    2016-10-01

    This paper is a continuation of the work of Becker et al. [1], who analyzed the robustness of various background subtraction algorithms on fused video streams originating from visible and infrared cameras. In order to cover a broader range of background subtraction applications, we show the effects of fusing infrared-visible video streams from vibrating cameras on a large set of background subtraction algorithms. The effectiveness is quantitatively analyzed on recorded data of a typical outdoor sequence with a fine-grained and accurate annotation of the images. Thereby, we identify approaches which can benefit from fused sensor signals with camera jitter. Finally, conclusions are given on which fusion strategies should be preferred under such conditions.

  18. EDICAM fast video diagnostic installation on the COMPASS tokamak

    Czech Academy of Sciences Publication Activity Database

    Szappanos, A.; Berta, M.; Hron, Martin; Pánek, Radomír; Stöckel, Jan; Veres, G.; Weinzettl, Vladimír; Zoletnik, S.; Tulipán, S.

    2010-01-01

    Roč. 85, 3-4 (2010), s. 370-373 ISSN 0920-3796. [IAEA Technical Meeting on Control, Data Acquisition and Remote Participation for Fusion Research/7th./. Aix-en-Provence, 15.06.2009-19.06.2009] Institutional research plan: CEZ:AV0Z20430508 Keywords: Video diagnostic * Fast data processing * CMOS sensor * Image processing Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.143, year: 2010 http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V3C-4Y0C2FK-1&_user=6542793&_coverDate=07%2F31%2F2010&_rdoc=1&_fmt=high&_orig=search&_origin=search&_sort=d&_docanchor=&view=c&_acct=C000070123&_version=1&_urlVersion=0&_userid=6542793&md5=99eb6704be38e61ac7e2316cb63a7ee9&searchtype=a

  19. Surgical video recording with a modified GoPro Hero 4 camera

    Directory of Open Access Journals (Sweden)

    Lin LK

    2016-01-01

    Full Text Available Lily Koo Lin Department of Ophthalmology and Vision Science, University of California, Davis Eye Center, Sacramento, CA, USA Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method: The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results: Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion: The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. Keywords: teaching, oculoplastic, strabismus

  20. Surgical video recording with a modified GoPro Hero 4 camera.

    Science.gov (United States)

    Lin, Lily Koo

    2016-01-01

    Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination.

  1. Hardware-based smart camera for recovering high dynamic range video from multiple exposures

    Science.gov (United States)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2014-10-01

    In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed exposure ones. A real-time hardware implementation of the HDR technique that shows more details both in dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capturing and HDR video processing from three exposures. What is new in our work is shown through the following points: HDR video capture through multiple exposure control, HDR memory management, HDR frame generation, and representation under a hardware context. Our camera achieves a real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through an experimental result. Applications of this HDR smart camera include the movie industry, the mass-consumer market, military, automotive industry, and surveillance.
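
    The HDR generation stage can be sketched in a few lines: each exposure is weighted so that well-exposed pixels dominate, and the weighted radiance estimates are averaged. This is a generic software illustration assuming a linear sensor response, not the camera's hardware implementation.

        import numpy as np

        def merge_hdr(images, times):
            # images: list of 8-bit frames of the same scene; times: their
            # exposure times in seconds. Returns a relative radiance map.
            acc = np.zeros(images[0].shape, dtype=np.float64)
            wsum = np.zeros_like(acc)
            for img, t in zip(images, times):
                z = img.astype(np.float64) / 255.0
                w = 1.0 - np.abs(2.0 * z - 1.0)   # favour mid-range pixels
                acc += w * z / t                  # per-exposure radiance estimate
                wsum += w
            return acc / np.maximum(wsum, 1e-6)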

  2. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used. Experiments demonstrate that the block match algorithm can reduce motion estimation time by 30%.
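
    The block match step searches, for each block of the current frame, the displacement in the previous frame that minimizes the sum of absolute differences (SAD). A minimal exhaustive-search sketch (block size and search range are assumptions; the paper embeds this inside a multi-frame CS recovery loop):

        import numpy as np

        def block_match(prev, cur, y, x, bs=16, search=8):
            # Returns the (dy, dx) motion vector of the bs x bs block of cur
            # whose top-left corner is (y, x), by exhaustive SAD search over
            # a (2*search+1)^2 window in prev.
            block = cur[y:y+bs, x:x+bs].astype(np.int32)
            best, best_dy, best_dx = np.inf, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + bs > prev.shape[0] \
                            or xx + bs > prev.shape[1]:
                        continue
                    cand = prev[yy:yy+bs, xx:xx+bs].astype(np.int32)
                    sad = np.abs(block - cand).sum()
                    if sad < best:
                        best, best_dy, best_dx = sad, dy, dx
            return best_dy, best_dx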

  3. A novel method to reduce time investment when processing videos from camera trap studies.

    Directory of Open Access Journals (Sweden)

    Kristijn R R Swinnen

    Full Text Available Camera traps have proven very useful in ecological, conservation and behavioral research. Camera traps non-invasively record the presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols did not evolve as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species, but instead empty recordings or other species (together, non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch them, in order to reduce workload. Discrimination between recordings of the target species and non-target recordings was based on detecting variation (changes in pixel values from frame to frame) in the recordings. Because of the size of the target species, we expected that recordings with the target species contain on average much more movement than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values, and that environmental conditions and filter methods influence the amount of non-target recordings that can be identified and discarded. By allowing a loss of 5% to 20% of recordings containing the target species, in ideal circumstances, 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step to the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, using it in different contexts across the globe, on both videos and…
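
    The filtering idea reduces to scoring each recording by its frame-to-frame pixel variation and discarding recordings whose score falls below a cut-off. A hedged sketch of such a score (both thresholds are illustrative, not the values used in the study):

        import numpy as np

        def motion_score(frames, diff_thresh=20):
            # Mean fraction of pixels whose grey value changes by more than
            # diff_thresh between consecutive frames; large animals such as
            # beavers should yield clearly higher scores than empty scenes.
            scores = []
            for a, b in zip(frames[:-1], frames[1:]):
                d = np.abs(a.astype(np.int16) - b.astype(np.int16))
                scores.append(float((d > diff_thresh).mean()))
            return float(np.mean(scores))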

  4. A digital underwater video camera system for aquatic research in regulated rivers

    Science.gov (United States)

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.

  5. Video camera system for locating bullet holes in targets at a ballistics tunnel

    Science.gov (United States)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-01-01

    A system consisting of a single charge coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind-resistant, ultra-match ammunition. The system was designed to take data rapidly (10 s between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target, and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.

  6. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor that degrades the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  7. Object Tracking in Frame-Skipping Video Acquired Using Wireless Consumer Cameras

    Directory of Open Access Journals (Sweden)

    Anlong Ming

    2012-10-01

    Full Text Available Object tracking is an important and fundamental task in computer vision and its high-level applications, e.g., intelligent surveillance, motion-based recognition, video indexing, traffic monitoring and vehicle navigation. However, the recent widespread use of wireless consumer cameras often produces low quality videos with frame-skipping, which makes object tracking difficult. Previous tracking methods, for example, generally depend heavily on object appearance or motion continuity and cannot be directly applied to frame-skipping videos. In this paper, we propose an improved particle filter for object tracking to overcome the frame-skipping difficulties. The novelty of our particle filter lies in using the detection result of erratic motion to ameliorate the transition model for a better trial distribution. Experimental results show that the proposed approach improves the tracking accuracy in comparison with the state-of-the-art methods, even when both the object and the consumer camera are in motion.
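
    The modified transition model can be sketched very compactly: when erratic motion is detected between received frames, the proposal distribution is widened so particles can reach the jumped-to location. The noise levels below are assumptions for illustration:

        import numpy as np

        def propagate(particles, erratic, sigma_smooth=2.0, sigma_jump=15.0):
            # particles: (n, 2) array of candidate object positions.
            # Widen the transition noise when erratic (frame-skipping)
            # motion has been detected.
            sigma = sigma_jump if erratic else sigma_smooth
            return particles + sigma * np.random.randn(*particles.shape)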

  8. Monitoring the body temperature of cows and calves using video recordings from an infrared thermography camera.

    Science.gov (United States)

    Hoffmann, Gundula; Schmidt, Mariana; Ammon, Christian; Rose-Meierhöfer, Sandra; Burfeind, Onno; Heuwieser, Wolfgang; Berg, Werner

    2013-06-01

    The aim of this study was to assess the variability of temperatures measured by a video-based infrared camera (IRC) in comparison to rectal and vaginal temperatures. The body surface temperatures of cows and calves were measured contactless at different body regions using videos from the IRC. Altogether, 22 cows and 9 calves were examined. The differences of the measured IRC temperatures among the body regions, i.e. eye (mean: 37.0 °C), back of the ear (35.6 °C), shoulder (34.9 °C) and vulva (37.2 °C), were significant (P < 0.05). The use of infrared thermography videos has the advantage of allowing more than one picture per animal to be analyzed in a short period of time, and the method shows potential as a monitoring system for body temperatures in cattle.

  9. Bird-borne video-cameras show that seabird movement patterns relate to previously unrevealed proximate environment, not prey

    National Research Council Canada - National Science Library

    Tremblay, Yann; Thiebault, Andréa; Mullers, Ralf; Pistorius, Pierre

    2014-01-01

    ... environmental contexts. Here, using miniaturized video cameras and GPS tracking recorders simultaneously, we show for the first time that information on the immediate visual surroundings of a foraging seabird, the Cape...

  10. Ultrahigh-definition color video camera system with 4K-scanning lines

    Science.gov (United States)

    Mitani, Kohji; Sugawara, Masayuki; Shimamoto, Hiroshi; Yamashita, Takayuki; Okano, Fumio

    2003-05-01

    An experimental ultrahigh-definition color video camera system with 7680(H) × 4320(V) pixels has been developed using four 8-million-pixel CCDs. The 8-million-pixel CCD, with a progressive scanning rate of 60 frames per second, has 4046(H) × 2048(V) effective imaging pixels, each of which is 8.4 µm². We applied the four-imager pickup method to increase the camera's resolution. This involves attaching four CCDs to a special color-separation prism. Two CCDs are used for the green image, and the other two are used for red and blue. The spatial sampling pattern of these CCDs relative to the optical image is equivalent to that of a 32-million-pixel sensor with a Bayer-pattern color filter. The prototype camera attains a limiting resolution of more than 2700 TV lines both horizontally and vertically, which is higher than that of a single 8-million-pixel CCD. The sensitivity of the camera is 2000 lux at F2.8 with approximately 50 dB dark-noise level in the HDTV format. Its other specifications are a dynamic range of 200%, a power consumption of about 600 W and a weight, with lens, of 76 kg.

  11. Performance Test of the First Prototype of 2 Ways Video Camera for the Muon Barrel Position Monitor

    CERN Document Server

    Brunel, Laurent; Bondar, Tamas; Bencze, Gyorgy; Raics, Peter; Szabó, Jozsef

    1998-01-01

    The CMS Barrel Position Monitor is based on 360 video cameras mounted on 36 very stable mechanical structures. One type of camera is used to observe optical sources mounted on the muon chambers. A first prototype was produced to test the main performance characteristics. This report gives experimental results on stability, linearity and temperature effects.

  12. Utilization of a video camera in the study of the goshawk (Accipiter gentilis) diet

    Directory of Open Access Journals (Sweden)

    Martin Tomešek

    2011-01-01

    Full Text Available In 2009, research was carried out into the food spectrum of the goshawk (Accipiter gentilis) by means of automatic digital video cameras with a recording device in the Chřiby Upland. The monitoring took place at two localities in the vicinity of the village of Buchlovice at the southeastern edge of the Chřiby Upland, in the period from the hatching of the chicks to their fledging from the nest. The unambiguous advantage of using camera systems to study the food spectrum is the possibility of exactly identifying the delivered prey in the majority of cases. Technology as economical and effective as possible, prepared for the given conditions, was used. The results of using automatic digital video cameras with a recording device consist of a large body of valuable data, which clarifies the food spectrum of the species. The main output of the whole project is the determination of the food spectrum of the goshawk (Accipiter gentilis) at the two localities, which showed the following composition: 89% birds, 9.5% mammals and 1.5% other animals or unidentifiable food components. Birds of the genus Turdus were the most frequent prey in both monitored nests. Among mammals, Sciurus vulgaris was the most frequent.

  13. High-sensitive thermal video camera with self-scanned 128 InSb linear array

    Science.gov (United States)

    Fujisada, Hiroyuki

    1991-12-01

    A compact thermal video camera with very high sensitivity has been developed using a self-scanned 128-element InSb linear photodiode array. Two-dimensional images are formed by the self-scanning function of the linear array focal plane assembly in the horizontal direction and by a vibration mirror in the vertical direction. Images of 128 × 128 pixels are obtained every 1/30 second. A small InSb detector array with a total length of 7.68 mm is utilized in order to build the compact system. In addition, special consideration is given to the configuration of the optics, vibration mirror, and focal plane assembly. Real-time signal processing by a microprocessor compensates for the inhomogeneous sensitivities and irradiances of the individual detectors. The standard NTSC TV format is employed for output video signals. The thermal video camera developed has a very high radiometric sensitivity: the minimum resolvable temperature difference (MRTD) is estimated at about 0.02 K for a 300 K target. Stable operation is possible without a blackbody reference because stray radiation is very small.

  14. Research on the use and problems of the digital video camera from the perspective of primary school teachers in the province of Granada

    Directory of Open Access Journals (Sweden)

    Pablo José García Sempere

    2012-12-01

    Full Text Available The adoption of ICT in society and specifically in schools is changing the relationships and traditional means of teaching. These new situations require teachers to assume new roles and responsibilities, thereby creating new demands for training. The teaching body concurs that "teachers require both and initial and ongoing training in the use of digital video cameras and video editing." This article presents the main results of research that focused on the applications of digital video camera for teachers of primary education schools in the province of Granada, Spain.

  15. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    Science.gov (United States)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to that position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind the flying shuttlecock; they are a kind of background noise that makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.

  16. VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication

    Science.gov (United States)

    Denina, Giovanni; Bhanu, Bir; Nguyen, Hoang Thanh; Ding, Chong; Kamal, Ahmed; Ravishankar, Chinya; Roy-Chowdhury, Amit; Ivers, Allen; Varda, Brenda

    Human-activity recognition is one of the most challenging problems in computer vision. Researchers from around the world have tried to solve this problem and have come a long way in recognizing simple motions and atomic activities. As the computer vision community heads toward fully recognizing human activities, a challenging and labeled dataset is needed. To respond to that need, we collected a dataset of realistic scenarios in a multi-camera network environment (VideoWeb) involving multiple persons performing dozens of different repetitive and non-repetitive activities. This chapter describes the details of the dataset. We believe that this VideoWeb Activities dataset is unique and it is one of the most challenging datasets available today. The dataset is publicly available online at http://vwdata.ee.ucr.edu/ along with the data annotation.

  17. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    Energy Technology Data Exchange (ETDEWEB)

    Pardini, A.F.

    1998-01-27

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, the control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities, including a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature-indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  18. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort in stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and the comfortable viewing zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects when the camera and background are static; relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel model is presented in which the visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be computed according to the proposed algorithm. Compared with conventional algorithms, which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
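
    The regression stage is ordinary least squares over per-shot factor scores. A minimal sketch (the factor columns named in the comment are assumptions based on the abstract, not the paper's exact feature set):

        import numpy as np

        def fit_fatigue_model(factors, scores):
            # factors: (n_shots, n_factors) matrix, e.g. columns for spatial
            # structure, motion scale and comfort-zone violation;
            # scores: subjective fatigue ratings for the same shots.
            X = np.hstack([np.ones((factors.shape[0], 1)), factors])  # intercept
            coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
            return coef  # [intercept, factor weights...]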

  19. vid116_0501n -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  20. vid116_0501c -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  1. vid116_0501s -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  2. vid116_0501d -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  3. Evaluating a public display installation with game and video to raise awareness of Attention Deficit Hyperactivity Disorder

    OpenAIRE

    Craven, Michael P.; Simons, Lucy; Gillott, Alinda; North, Steve; Schnädelbach, Holger; Young, Zoe

    2015-01-01

    Networked Urban Screens offer new possibilities for public health education and awareness. An information video about Attention Deficit Hyperactivity Disorder (ADHD) was combined with a custom browser-based video game and successfully deployed on an existing research platform, Screens in the Wild (SitW). The SitW platform consists of 46-in. touchscreen or interactive displays, a camera, a microphone and a speaker, deployed at four urban locations in England. Details of the platform and softwa...

  4. Quantitative underwater 3D motion analysis using submerged video cameras: accuracy analysis and trajectory reconstruction.

    Science.gov (United States)

    Silvatti, Amanda P; Cerveri, Pietro; Telles, Thiago; Dias, Fábio A S; Baroni, Guido; Barros, Ricardo M L

    2013-01-01

    In this study we aim at investigating the applicability of underwater 3D motion capture based on submerged video cameras in terms of 3D accuracy analysis and trajectory reconstruction. Static points with the classical direct linear transform (DLT) solution, a moving wand with bundle adjustment and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of the final application, we reconstructed hand motion trajectories in different swimming styles and qualitatively compared them with Maglischo's model. Four highly trained male swimmers performed butterfly, breaststroke and freestyle tasks. The middle fingertip trajectories of both hands in the underwater phase were considered. The accuracy (mean absolute error) of the two calibration approaches (wand: 0.96 mm; 2D plate: 0.73 mm) was comparable to out-of-water results and greatly superior to the classical DLT results (9.74 mm). Among all the swimmers, the hands' trajectories of the expert swimmer in each style were almost symmetric and in good agreement with Maglischo's model. The kinematic results highlight symmetry or asymmetry between the two hand sides, intra- and inter-subject variability in the motion patterns, and agreement or disagreement with the model. The two outcomes, calibration results and trajectory reconstruction, both move towards quantitative 3D underwater motion analysis.
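
    For reference, the static-point DLT calibration used as the baseline can be written in a dozen lines: each 3D-2D correspondence contributes two rows to a homogeneous system whose least-squares solution (the last right singular vector) is the 3x4 projection matrix. A minimal sketch, assuming at least six well-spread points:

        import numpy as np

        def dlt_camera_matrix(world_pts, image_pts):
            # world_pts: (n, 3) calibration points; image_pts: (n, 2)
            # corresponding image observations, n >= 6.
            rows = []
            for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
                P = np.array([X, Y, Z, 1.0])
                rows.append(np.concatenate([P, np.zeros(4), -u * P]))
                rows.append(np.concatenate([np.zeros(4), P, -v * P]))
            A = np.vstack(rows)
            _, _, Vt = np.linalg.svd(A)
            return Vt[-1].reshape(3, 4)  # defined up to scale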

  5. Design and Optimization of the VideoWeb Wireless Camera Network

    Directory of Open Access Journals (Sweden)

    Nguyen HoangThanh

    2010-01-01

    Full Text Available Sensor networks have been a very active area of research in recent years. However, most of the sensors used in the development of these networks have been local and nonimaging sensors such as acoustic, seismic, vibration, temperature and humidity sensors. The emerging development of video sensor networks poses its own set of unique challenges, including high-bandwidth and low-latency requirements for real-time processing and control. This paper presents a systematic approach by detailing the design, implementation, and evaluation of a large-scale wireless camera network, suitable for a variety of practical real-time applications. We take into consideration issues related to hardware, software, control, architecture, network connectivity, performance evaluation, and data-processing strategies for the network. We also perform multiobjective optimization on settings such as video resolution and compression quality to provide insight into the performance trade-offs when configuring such a network, and present lessons learned in the building and daily usage of the network.

  6. Design of video surveillance and tracking system based on attitude and heading reference system and PTZ camera

    Science.gov (United States)

    Yang, Jian; Xie, Xiaofang; Wang, Yan

    2017-04-01

    Based on the AHRS (Attitude and Heading Reference System) and a PTZ (Pan/Tilt/Zoom) camera, we designed a video monitoring and tracking system. The overall structure of the system and the software design are given. Key technologies such as serial-port communication and head-attitude tracking are introduced, and the code for the key parts is given.
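
    Since the abstract names serial-port communication and head-attitude tracking as the key technologies but reproduces no code here, the following Python sketch shows one plausible shape for the control loop; the ports, baud rates, the ASCII line format of the AHRS, and the Pelco-D-style PTZ head are all assumptions of the example, not details from the paper.

      import serial  # pyserial

      ahrs = serial.Serial("/dev/ttyUSB0", 115200, timeout=1.0)  # hypothetical wiring
      ptz = serial.Serial("/dev/ttyUSB1", 9600, timeout=1.0)

      def read_attitude():
          # Assume the AHRS streams ASCII lines "yaw,pitch,roll" in degrees.
          fields = ahrs.readline().decode("ascii", errors="ignore").strip().split(",")
          yaw, pitch, _roll = (float(v) for v in fields[:3])
          return yaw, pitch

      def pelco_d(addr, cmd2, pan_speed, tilt_speed):
          # Standard 7-byte Pelco-D frame; checksum = sum of bytes 2-6 mod 256.
          body = [addr, 0x00, cmd2, pan_speed, tilt_speed]
          return bytes([0xFF] + body + [sum(body) % 256])

      DEADBAND = 2.0  # degrees of head motion to ignore as jitter
      while True:
          yaw, pitch = read_attitude()
          cmd2 = 0x00
          if yaw > DEADBAND:
              cmd2 |= 0x02        # pan right
          elif yaw < -DEADBAND:
              cmd2 |= 0x04        # pan left
          if pitch > DEADBAND:
              cmd2 |= 0x08        # tilt up
          elif pitch < -DEADBAND:
              cmd2 |= 0x10        # tilt down
          ptz.write(pelco_d(0x01, cmd2, 0x20, 0x20))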

  7. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.

    2017-10-01

    Video analytics is essential for managing the large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and to changes in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene, such as lighting conditions or measures of scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.
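
    The register the paper proposes is essentially a keyed record per camera; as a rough illustration only, a Python sketch of such a record and its change-signalling check might look as follows (field names and tolerances are invented for the example, not taken from the paper).

      import time
      from dataclasses import dataclass, field

      @dataclass
      class OpticalChainRecord:
          """One register entry for a camera's optical chain and scene parameters."""
          camera_id: str
          intrinsics: list         # e.g. 3x3 K matrix, row-major
          distortion: list         # lens distortion coefficients
          extrinsics: list         # camera-to-world pose
          lighting_lux: float      # estimated scene illuminance
          scene_complexity: float  # e.g. mean number of people in view
          updated: float = field(default_factory=time.time)

      def relevant_change(old: OpticalChainRecord, new: OpticalChainRecord,
                          lux_tol: float = 50.0) -> bool:
          """Decide whether the VSS administrator should be signalled."""
          return (old.intrinsics != new.intrinsics
                  or old.extrinsics != new.extrinsics
                  or abs(old.lighting_lux - new.lighting_lux) > lux_tol)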

  8. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    Science.gov (United States)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type in order to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes that are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players), which can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it on a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and event detection about 90%.

  9. Modeling camera orientation and 3D structure from a sequence of images taken by a perambulating commercial video camera

    Science.gov (United States)

    M-Rouhani, Behrouz; Anderson, James A. D. W.

    1997-04-01

    In this paper we report the degree of reliability of image sequences taken by off-the-shelf TV cameras for modeling camera rotation and reconstructing 3D structure using computer vision techniques. This is done in spite of the fact that computer vision systems usually use imaging devices that are specifically designed for human vision. Our scenario consists of a static scene and a mobile camera moving through the scene. The scene is any long axial building dominated by features along the three principal orientations and with at least one wall containing prominent repetitive planar features such as doors, windows, bricks, etc. The camera is an ordinary commercial camcorder moving along the axial axis of the scene and is allowed to rotate freely within the range +/- 10 degrees in all directions. This makes it possible for the camera to be held by a walking non-professional cameraman with a normal gait, or to be mounted on a mobile robot. The system has been tested successfully on sequences of images of a variety of structured, but fairly cluttered, scenes taken by different walking cameramen. The potential application areas of the system include medicine, robotics and photogrammetry.

  10. USING A DIGITAL VIDEO CAMERA AS THE SMART SENSOR OF THE SYSTEM FOR AUTOMATIC PROCESS CONTROL OF GRANULAR FODDER MOLDING

    Directory of Open Access Journals (Sweden)

    M. M. Blagoveshchenskaya

    2014-01-01

    Full Text Available Summary. The most important operation in granular mixed fodder production is the molding process. The properties of granular mixed fodder are defined during this process; they determine the course of production and the final product quality. The possibility of using a digital video camera as an intelligent sensor for the process-control system is analyzed in the article. A parametric model of the process of molding bundles from granular fodder mass is presented. Dynamic characteristics of the molding process were determined. A mathematical model of the motion of a bundle of granular fodder mass after the matrix holes was developed. A mathematical model of the automatic control system (ACS) that uses an etalon video frame as the set point was built in the MATLAB software environment. As the controlled parameter of the bundle-molding process, it is proposed to use the value of the specific area determined by mathematical treatment of the video frame. Algorithms were developed to determine changes in the structural and mechanical properties of the feed mass from video-frame images. Digital video of various modes of the molding machine was recorded and, after mathematical processing of the video, the transfer functions were determined, using changes of the specific area as the adjustable parameter. Structural and functional diagrams of the system regulating the fodder-bundle molding process with the use of digital camcorders were built and analyzed. Based on the solution of the equations of fluid dynamics, a mathematical model of bundle motion after leaving the matrix hole was obtained; in addition to viscosity, the creep property characteristic of the feed mass was considered. The mathematical model of the ACS for the bundle-molding process, which allows investigation of the transient processes occurring in a control system that uses a digital video camera as the smart sensor, was developed in Simulink
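
    The controlled variable, the "specific area" of the molded bundles in a video frame, can be illustrated with a simple thresholding sketch in Python; the assumption that bundles appear brighter than the background and the threshold value are placeholders, since the record does not spell out the actual image-processing chain.

      import cv2

      def specific_area(frame_bgr, thresh=128):
          """Fraction of frame pixels occupied by extruded fodder bundles."""
          gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
          _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
          return cv2.countNonZero(mask) / mask.size

      cap = cv2.VideoCapture("molding_machine.avi")  # hypothetical recording
      ok, frame = cap.read()
      if ok:
          print("specific area:", specific_area(frame))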

  11. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    Science.gov (United States)

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be high quality, which allowed for zooming and clear visualization of the surgical anatomy. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its light weight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high-quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  12. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Directory of Open Access Journals (Sweden)

    Akshay Gopinathan Nair

    2015-01-01

    Full Text Available Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and clear visualization of the surgical anatomy. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its light weight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high-quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  13. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template, similar to a facial muscle distribution. After associated regularization, the time sequences of the trait changes in space-time under a complete expression production are arranged line by line in a matrix. Next, the matrix dimensionality is reduced by neighborhood-preserving embedding, a manifold-learning method. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates the hidden conditional random field (HCRF) and support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former contains more structural characteristics of the data to be classified in space-time.
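
    As a rough illustration of the pipeline shape (embedding followed by classification), the Python sketch below uses scikit-learn's locally linear embedding as a stand-in for neighborhood-preserving embedding and only the SVM half of the integrated HCRF/SVM classifier; the data shapes and values are illustrative placeholders.

      import numpy as np
      from sklearn.manifold import LocallyLinearEmbedding
      from sklearn.svm import SVC

      # X: one row per expression sequence (flattened space-time trait matrix);
      # y: expression labels. Random placeholders stand in for real features.
      X = np.random.rand(120, 400)
      y = np.random.randint(0, 6, 120)

      # NPE is closely related to LLE; scikit-learn ships the latter, not NPE.
      Z = LocallyLinearEmbedding(n_components=20, n_neighbors=10).fit_transform(X)

      clf = SVC(kernel="rbf").fit(Z[:100], y[:100])
      print("held-out accuracy:", clf.score(Z[100:], y[100:]))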

  14. Video content analysis on body-worn cameras for retrospective investigation

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Haar, F.B. ter; Eendebak, P.T.; Hollander, R.J.M. den; Burghouts, G.J.; Wijn, R.; Broek, S.P. van den; Rest, J.H.C. van

    2015-01-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications

  15. HDR {sup 192}Ir source speed measurements using a high speed video camera

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca, Gabriel P. [Instituto de Pesquisas Energéticas e Nucleares—IPEN-CNEN/SP, São Paulo 05508-000, Brazil and Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Viana, Rodrigo S. S.; Yoriyaz, Hélio [Instituto de Pesquisas Energéticas e Nucleares—IPEN-CNEN/SP, São Paulo 05508-000 (Brazil); Podesta, Mark [Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Rubo, Rodrigo A.; Sales, Camila P. de [Hospital das Clínicas da Universidade de São Paulo—HC/FMUSP, São Paulo 05508-000 (Brazil); Reniers, Brigitte [Department of Radiation Oncology - MAASTRO, GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Research Group NuTeC, CMK, Hasselt University, Agoralaan Gebouw H, Diepenbeek B-3590 (Belgium); Verhaegen, Frank, E-mail: frank.verhaegen@maastro.nl [Department of Radiation Oncology - MAASTRO, GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Medical Physics Unit, Department of Oncology, McGill University, Montréal, Québec H3G 1A4 (Canada)

    2015-01-15

    Purpose: The dose delivered with an HDR {sup 192}Ir afterloader can be separated into a dwell component and a transit component resulting from the source movement. The transit component is directly dependent on the source speed profile, and it is the goal of this study to measure accurate source speed profiles. Methods: A high speed video camera was used to record the movement of a {sup 192}Ir source (Nucletron, an Elekta company, Stockholm, Sweden) for interdwell distances of 0.25–5 cm with dwell times of 0.1, 1, and 2 s. Transit dose distributions were calculated using a Monte Carlo code simulating the source movement. Results: The source stops at each dwell position, oscillating around the desired position for a duration of up to (0.026 ± 0.005) s. The source speed profile shows variations between 0 and 81 cm/s with an average speed of ∼33 cm/s for most of the interdwell distances. The source stops for up to (0.005 ± 0.001) s at nonprogrammed positions in between two programmed dwell positions. The dwell time correction applied by the manufacturer compensates for the transit dose between the dwell positions, leading to a maximum overdose of 41 mGy for the considered cases, assuming an air-kerma strength of 48 000 U. The transit dose component is not uniformly distributed, leading to over- and underdoses within 1.4% of commonly prescribed doses (3–10 Gy). Conclusions: The source maintains its speed even for the short interdwell distances. Dose variations due to the transit dose component are much lower than the prescribed treatment doses for brachytherapy, although the transit dose component should be evaluated individually for clinical cases.
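
    The speed profile itself is just the time derivative of the source position tracked frame by frame; a minimal numerical sketch, with the frame rate and the position trace as illustrative placeholders:

      import numpy as np

      fps = 1000.0                            # assumed high-speed frame rate
      t = np.arange(0, 0.2, 1.0 / fps)        # s
      x = np.clip(33.0 * t, 0.0, 5.0)         # cm; toy 5 cm interdwell travel

      v = np.gradient(x, t)                   # finite-difference speed, cm/s
      print("mean speed while moving: %.1f cm/s" % v[v > 1.0].mean())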

  16. Bird-Borne Video-Cameras Show That Seabird Movement Patterns Relate to Previously Unrevealed Proximate Environment, Not Prey: e88424

    National Research Council Canada - National Science Library

    Yann Tremblay; Andréa Thiebault; Ralf Mullers; Pierre Pistorius

    2014-01-01

    ... environmental contexts. Here, using miniaturized video cameras and GPS tracking recorders simultaneously, we show for the first time that information on the immediate visual surroundings of a foraging seabird, the Cape...

  17. Point Counts Underestimate the Importance of Arctic Foxes as Avian Nest Predators: Evidence from Remote Video Cameras in Arctic Alaskan Oil Fields

    National Research Council Canada - National Science Library

    Joseph R. Liebezeit; Steve Zack

    2008-01-01

    We used video cameras to identify nest predators at active shorebird and passerine nests and conducted point count surveys separately to determine species richness and detection frequency of potential...

  18. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    real world); proprioceptive and exteroceptive sensors allowing the recreating of the 3D geometric database of an environment (virtual world). The virtual world is projected onto a video display terminal (VDT). Computer-generated and video ...

  19. Fast auto-acquisition tomography tilt series by using HD video camera in ultra-high voltage electron microscope.

    Science.gov (United States)

    Nishi, Ryuji; Cao, Meng; Kanaji, Atsuko; Nishida, Tomoki; Yoshida, Kiyokazu; Isakozawa, Shigeto

    2014-11-01

    The ultra-high voltage electron microscope (UHVEM) H-3000, with the world's highest acceleration voltage of 3 MV, can observe remarkable three-dimensional microstructures of microns-thick samples [1]. Acquiring a tilt series for electron tomography is laborious work, and thus an automatic technique is highly desired. We proposed the Auto-Focus system using image Sharpness (AFS) [2,3] for UHVEM tomography tilt-series acquisition. In the method, five images with different defocus values are first acquired and their image sharpness is calculated. The sharpness values are then fitted to a quasi-Gaussian function to decide the best focus value [3]. Defocused images acquired by the slow-scan CCD (SS-CCD) camera (Hitachi F486BK) are of high quality, but one minute is taken for the acquisition of five defocused images. In this study, we introduce a high-definition video camera (HD video camera; Hamamatsu Photonics K.K. C9721S) for fast acquisition of images [4]. It is an analog camera, but the camera image is captured by a PC and the effective image resolution is 1280×1023 pixels. This resolution is lower than that of the SS-CCD camera (4096×4096 pixels), but the HD video camera captures one image in only 1/30 second. In exchange for the faster acquisition, the S/N of the images is low. To improve the S/N, 22 captured frames are integrated so that the sharpness of each image yields a sufficiently low fitting error. As a countermeasure against the low resolution, we selected a large defocus step, typically five times the manual defocus step, to discriminate the different defocused images. By using the HD video camera for the autofocus process, the time consumed by each autofocus procedure was reduced to about six seconds. It took one second to correct the image position, and the total correction time was seven seconds, shorter by one order of magnitude than with the SS-CCD camera. When we used the SS-CCD camera for final image capture, it took 30 seconds to record one tilt image. We can obtain a tilt
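
    The focus search described (five defocused images, a sharpness value per image, a quasi-Gaussian fit to locate the peak) can be sketched in a few lines of Python; the variance-of-Laplacian sharpness measure below is a common stand-in, not necessarily the measure defined in the cited references.

      import numpy as np
      import cv2
      from scipy.optimize import curve_fit

      def sharpness(img):
          # A common sharpness proxy; to mimic the S/N improvement in the
          # abstract, `img` could be the mean of 22 integrated video frames.
          return cv2.Laplacian(img, cv2.CV_64F).var()

      def best_focus(defocus_values, images):
          """Fit a Gaussian to the sharpness samples and return its peak."""
          z = np.asarray(defocus_values, dtype=float)
          s = np.array([sharpness(im) for im in images], dtype=float)

          def gauss(z, a, mu, sig, c):
              return a * np.exp(-((z - mu) ** 2) / (2.0 * sig ** 2)) + c

          p0 = [s.max() - s.min(), z[np.argmax(s)], (z[-1] - z[0]) / 2.0, s.min()]
          popt, _ = curve_fit(gauss, z, s, p0=p0)
          return popt[1]          # defocus value of maximal sharpness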

  20. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    Energy Technology Data Exchange (ETDEWEB)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L [UT MD Anderson Cancer Center, Houston, TX (United States); UT Graduate School of Biomedical Sciences, Houston, TX (United States); Yang, J; Beadle, B [UT MD Anderson Cancer Center, Houston, TX (United States)

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
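
    The frame-to-frame step described (track surface points, recover rotation and translation from two-view epipolar geometry, triangulate structure) maps closely onto standard OpenCV calls; the sketch below is a generic reconstruction of that pipeline under an assumed known intrinsic matrix K, not the authors' code.

      import cv2
      import numpy as np

      def frame_motion(img1, img2, K):
          """Relative pose between two grayscale endoscope frames (up to scale)."""
          # Track surface points from frame 1 into frame 2 (KLT optical flow).
          p1 = cv2.goodFeaturesToTrack(img1, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
          p2, status, _err = cv2.calcOpticalFlowPyrLK(img1, img2, p1, None)
          good = status.ravel() == 1
          p1, p2 = p1[good], p2[good]

          # Essential matrix with RANSAC, then cheirality check for R, t.
          E, inliers = cv2.findEssentialMat(p1, p2, K,
                                            method=cv2.RANSAC, threshold=1.0)
          _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)

          # Recover surrounding structure via triangulation (global scale unknown).
          P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
          P2 = K @ np.hstack([R, t])
          X = cv2.triangulatePoints(P1, P2,
                                    p1.reshape(-1, 2).T, p2.reshape(-1, 2).T)
          return R, t, (X[:3] / X[3]).T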

  1. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    Computer-generated and video images are superimposed. The man-machine interface functions deal mainly with on line building of graphic aids to improve perception, updating the geometric database of the robotic site, and video control of the robot. The superimposition of the real and virtual worlds is carried out through ...

  2. Studying complex decision making in natural settings: using a head-mounted video camera to study competitive orienteering.

    Science.gov (United States)

    Omodei, M M; McLennan, J

    1994-12-01

    Head-mounted video recording is described as a potentially powerful method for studying decision making in natural settings. Most alternative data-collection procedures are intrusive and disruptive of the decision-making processes involved, while conventional video-recording procedures are either impractical or impossible. As a severe test of the robustness of the methodology, we studied the decision making of 6 experienced orienteers who carried a head-mounted lightweight video camera as they navigated, running as fast as possible, around a set of control points in a forest. Use of the Wilcoxon matched-pairs signed-ranks test indicated that, compared with free recall, video-assisted recall evoked (a) significantly greater experiential immersion in the recall, (b) significantly more specific recollections of navigation-related thoughts and feelings, (c) significantly more realizations of map and terrain features and aspects of running speed that were not noticed at the time of actual competition, and (d) significantly greater insight into specific navigational errors and the intrusion of distracting thoughts into the decision-making process. Potential applications of the technique in (a) the environments of emergency services, (b) therapeutic contexts, (c) education and training, and (d) sports psychology are discussed.

  3. Surgical video recording with a modified GoPro Hero 4 camera

    National Research Council Canada - National Science Library

    Lin, Lily Koo

    2016-01-01

    .... This study examined if a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery...

  4. Real-Time Range Sensing Video Camera for Human/Robot Interfacing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In comparison to stereovision, it is well known that structured-light illumination has distinct advantages including the use of only one camera, being significantly...

  5. Compressed Natural Gas Installation. A Video-Based Training Program for Vehicle Conversion. Instructor's Edition.

    Science.gov (United States)

    Oklahoma State Dept. of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.

    This instructor's guide contains the materials required to teach four competency-based course units of instruction in installing compressed natural gas (CNG) systems in motor vehicles. It is designed to accompany an instructional videotape (not included) on CNG installation. The following competencies are covered in the four instructional units:…

  6. Small high-definition video cameras as a tool to resight uniquely marked Interior Least Terns (Sternula antillarum athalassos)

    Science.gov (United States)

    Toy, Dustin L.; Roche, Erin; Dovichin, Colin M.

    2017-01-01

    Many bird species of conservation concern have behavioral or morphological traits that make it difficult for researchers to determine if the birds have been uniquely marked. Those traits can also increase the difficulty for researchers to decipher those markers. As a result, it is a priority for field biologists to develop time- and cost-efficient methods to resight uniquely marked individuals, especially when efforts are spread across multiple States and study areas. The Interior Least Tern (Sternula antillarum athalassos) is one such difficult-to-resight species; its tendency to mob perceived threats, such as observing researchers, makes resighting marked individuals difficult without physical recapture. During 2015, uniquely marked adult Interior Least Terns were resighted and identified by small, inexpensive, high-definition portable video cameras deployed for 29-min periods adjacent to nests. Interior Least Tern individuals were uniquely identified 84% (n = 277) of the time. This method also provided the ability to link individually marked adults to a specific nest, which can aid in generational studies and understanding heritability for difficult-to-resight species. Mark-recapture studies on such species may be prone to sparse encounter data that can result in imprecise or biased demographic estimates and ultimately flawed inferences. High-definition video cameras may prove to be a robust method for generating reliable demographic estimates.

  7. Spatial and temporal scales of shoreline morphodynamics derived from video camera observations for the island of Sylt, German Wadden Sea

    Science.gov (United States)

    Blossier, Brice; Bryan, Karin R.; Daly, Christopher J.; Winter, Christian

    2017-04-01

    Spatial and temporal scales of beach morphodynamics were assessed for the island of Sylt, German Wadden Sea, based on continuous video camera monitoring data from 2011 to 2014 along a 1.3 km stretch of sandy beach. They served to quantify, at this location, the amount of shoreline variability captured by beach monitoring schemes, depending on the time interval and alongshore resolution of the surveys. Correlation methods, used to quantify the alongshore spatial scales of shoreline undulations, were combined with semi-empirical modelling and spectral analyses of shoreline temporal fluctuations. The data demonstrate that an alongshore resolution of 150 m and a monthly survey time interval capture 70% of the kilometre-scale shoreline variability over the 2011-2014 study period. An alongshore spacing of 10 m and a survey time interval of 5 days would be required to capture 95% of the variance of the shoreline temporal fluctuations, with steps of 5% changes in variance over space. Although monitoring strategies such as land or airborne surveying are reliable methods of data collection, video camera deployment remains the cheapest technique providing the high spatiotemporal resolution required to monitor subkilometre-scale morphodynamic processes involving, for example, small- to middle-sized beach nourishments.

  8. A new method to calculate the camera focusing area and player position on playfield in soccer video

    Science.gov (United States)

    Liu, Yang; Huang, Qingming; Ye, Qixiang; Gao, Wen

    2005-07-01

    Sports video enrichment is attracting many researchers. People want to appreciate some highlight segments with cartoon. In order to automatically generate such cartoon video, we have to estimate the players' and ball's 3D positions. In this paper, we propose an algorithm to cope with the former problem, i.e. to compute the players' positions on the court. For images with sufficient corresponding points, the algorithm uses these points to calibrate the mapping between the image and the playfield plane (called a homography). For images without enough corresponding points, we use global motion estimation (GME) and an already calibrated image to compute the images' homographies. Thus, the problem boils down to estimating global motion. To enhance the performance of global motion estimation, two strategies are exploited. The first one is removing the moving objects based on adaptive GMM playfield detection, which eliminates the influence of non-still objects; the second one is using LKT to track feature points to determine horizontal and vertical translation, which keeps the optimization process for GME from being trapped in a local minimum. Thus, if some images of a sequence can be calibrated directly from the intersection points of court lines, all images of the sequence can be calibrated through GME. When we know the homographies between image and playfield, we can compute the camera focusing area and the players' positions in the real world. We have tested our algorithm on real video and the result is encouraging.
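
    The core mapping, a homography fitted from court-line intersections that sends image points to playfield coordinates, looks roughly like this in OpenCV; the correspondences below are illustrative placeholders, not measurements from the paper.

      import cv2
      import numpy as np

      # Court-line intersections in the image and their known positions on
      # the real pitch (metres); values are invented for the example.
      img_pts = np.float32([[102, 540], [618, 532], [470, 310], [220, 312]])
      field_pts = np.float32([[0, 0], [16.5, 0], [16.5, 40.3], [0, 40.3]])

      H, _ = cv2.findHomography(img_pts, field_pts)

      def player_position(foot_xy):
          """Map a player's foot point from image to pitch coordinates."""
          p = np.float32([[foot_xy]])                # shape (1, 1, 2)
          return cv2.perspectiveTransform(p, H)[0, 0]

      # For frames lacking court lines, chain with the frame-to-frame global
      # motion estimate H_gme of the paper: H_new = H_ref @ H_gme.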

  9. Performance of compact ICU (intensified camera unit) with autogating based on video signal

    Science.gov (United States)

    de Groot, Arjan; Linotte, Peter; van Veen, Django; de Witte, Martijn; Laurent, Nicolas; Hiddema, Arend; Lalkens, Fred; van Spijker, Jan

    2007-10-01

    High-quality night vision digital video is nowadays required for many observation, surveillance and targeting applications, including several of the current soldier modernization programs. We present the performance increase that is obtained when combining a state-of-the-art image intensifier with a low-power-consumption CMOS image sensor. Based on the content of the video signal, the gating and gain of the image intensifier are optimized for the best SNR. The options for interfacing with a separate laser for range-gated imaging applications are discussed.

  10. Lights, Camera, Action: Facilitating the Design and Production of Effective Instructional Videos

    Science.gov (United States)

    Di Paolo, Terry; Wakefield, Jenny S.; Mills, Leila A.; Baker, Laura

    2017-01-01

    This paper outlines a rudimentary process intended to guide faculty in K-12 and higher education through the steps involved to produce video for their classes. The process comprises four steps: planning, development, delivery and reflection. Each step is infused with instructional design information intended to support the collaboration between…

  11. Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics

    Science.gov (United States)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-01-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allows one to measure, in a simple yet rigorous way, the speed of pulses…

  12. Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Science.gov (United States)

    Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...

  13. Real-time multi-camera video acquisition and processing platform for ADAS

    Science.gov (United States)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time, low-cost embedded system for image acquisition and processing in Advanced Driver Assistance Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for its correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
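
    Fish-eye correction at video rate is typically done by computing a per-pixel remap table once and applying it to every frame, which is also a natural fit for an FPGA pre-processor; a minimal OpenCV sketch under assumed intrinsics and distortion coefficients:

      import cv2
      import numpy as np

      # Illustrative fisheye intrinsics K and distortion coefficients k1..k4.
      K = np.array([[285.0, 0.0, 640.0], [0.0, 285.0, 360.0], [0.0, 0.0, 1.0]])
      D = np.array([0.05, -0.01, 0.002, -0.0005]).reshape(4, 1)
      size = (1280, 720)

      # Precompute the remap once; per-frame correction is then one remap call.
      map1, map2 = cv2.fisheye.initUndistortRectifyMap(
          K, D, np.eye(3), K, size, cv2.CV_16SC2)

      def undistort(frame):
          return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)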

  14. Surgical video recording with a modified GoPro Hero 4 camera

    OpenAIRE

    Lin LK

    2016-01-01

    Lily Koo Lin Department of Ophthalmology and Vision Science, University of California, Davis Eye Center, Sacramento, CA, USA Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined if a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Me...

  15. Measurement and processing of signatures in the visible range using a calibrated video camera and the CAMDET software package

    Science.gov (United States)

    Sheffer, Dan

    1997-06-01

    A procedure for the calibration of a color video camera has been developed at EORD. The RGB values of standard samples, together with the spectral radiance values of the samples, are used to calculate a transformation matrix between the RGB and CIE XYZ color spaces. The transformation matrix is then used to calculate the XYZ color coordinates of distant objects imaged in the field. These, in turn, are used to calculate the CIELAB color coordinates of the objects. Good agreement between the calculated coordinates and those obtained from spectroradiometric data is achieved. Processing the RGB values of pixels in the digital image of a scene using the CAMDET software package, which was developed at EORD, results in 'Painting Maps' in which the true apparent CIELAB color coordinates are used. The paper discusses the calibration procedure, its advantages and shortcomings, and suggests a definition for the visible signature of objects. The CAMDET software package is described and some examples are given.
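
    The two numerical steps described, fitting a 3x3 RGB-to-XYZ transform from standard samples and converting XYZ to CIELAB, can be sketched as follows; this is a generic least-squares formulation, not necessarily the exact procedure used at EORD, and the sample values are toy numbers.

      import numpy as np

      def fit_rgb_to_xyz(rgb, xyz):
          """Least-squares 3x3 matrix M such that xyz ≈ rgb @ M."""
          M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
          return M

      def xyz_to_lab(xyz, white):
          """CIE 1976 L*a*b* from XYZ given the reference white."""
          d = 6.0 / 29.0
          f = lambda u: np.where(u > d ** 3, np.cbrt(u), u / (3 * d ** 2) + 4.0 / 29.0)
          x, y, z = (f(xyz[..., i] / white[i]) for i in range(3))
          return np.stack([116 * y - 16, 500 * (x - y), 200 * (y - z)], axis=-1)

      # Toy usage: calibrate on N samples, then convert a camera measurement.
      rgb = np.array([[0.2, 0.3, 0.5], [0.8, 0.7, 0.6],
                      [0.4, 0.4, 0.4], [0.9, 0.2, 0.1]])
      xyz = np.array([[0.25, 0.28, 0.45], [0.70, 0.72, 0.60],
                      [0.38, 0.40, 0.42], [0.45, 0.30, 0.12]])
      M = fit_rgb_to_xyz(rgb, xyz)
      d65 = np.array([0.9505, 1.0000, 1.0890])    # D65 reference white
      print(xyz_to_lab(rgb @ M, d65))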

  16. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    Science.gov (United States)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator, considering multiple application needs and the limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability, which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data-evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data, which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and their capabilities were explored in the real environment. The data prove that the camera can be used for taking long-exposure (10-100 ms) overview images of the plasma, while sub-ms monitoring and even multi-camera correlated edge-plasma turbulence measurements of smaller areas can be done in parallel. The latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.
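
    The ROI logic described (minimum/maximum or mean compared against levels, driving readout changes and output signals) is conceptually simple; a toy Python illustration of the idea, not the camera firmware:

      import numpy as np

      def roi_event(frame, roi, level, mode="mean"):
          """Compare a simple statistic of a Region of Interest to a level.

          frame: 2-D numpy array of pixel values; roi: (x0, y0, x1, y1).
          """
          x0, y0, x1, y1 = roi
          patch = frame[y0:y1, x0:x1]
          stat = {"mean": patch.mean(),
                  "min": patch.min(),
                  "max": patch.max()}[mode]
          return stat > level     # True would raise the camera's output signal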

  17. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    Science.gov (United States)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) system, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body part and hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were applied to recognize the extracted hand trajectories. In previous research, the Condensation-algorithm-based method was only applied to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in hand blob changes. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. Our experimental recognition results show that better performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.

  18. A Refrigerated Web Camera for Photogrammetric Video Measurement inside Biomass Boilers and Combustion Analysis

    Directory of Open Access Journals (Sweden)

    Enrique Granada

    2011-01-01

    Full Text Available This paper describes a prototype instrumentation system for photogrammetric measurement of bed and ash layers, as well as for flying-particle detection and pursuit, using a single device (a CCD web camera). The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete, real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes.

  19. Using high speed smartphone cameras and video analysis techniques to teach mechanical wave physics

    Science.gov (United States)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-07-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allows one to measure, in a simple yet rigorous way, the speed of pulses along a spring and the period of transverse standing waves generated in the same spring. These experiments can be helpful in addressing several relevant concepts about the physics of mechanical waves and in overcoming some of the typical student misconceptions in this same field.

  20. Social interactions of juvenile brown boobies at sea as observed with animal-borne video cameras.

    Directory of Open Access Journals (Sweden)

    Ken Yoda

    Full Text Available While social interactions play a crucial role in the development of young individuals, those of highly mobile juvenile birds in inaccessible environments are difficult to observe. In this study, we deployed miniaturised video recorders on juvenile brown boobies Sula leucogaster, which had been hand-fed beginning a few days after hatching, to examine how social interactions between tagged juveniles and other birds affected their flight and foraging behaviour. Juveniles flew longer with congeners, especially with adult birds, than solitarily. In addition, approximately 40% of foraging occurred close to aggregations of congeners and other species. Young seabirds voluntarily followed other birds, which may directly enhance their foraging success, improve their foraging and flying skills during this developmental stage, or both.

  1. Incremental activity modeling in multiple disjoint cameras.

    Science.gov (United States)

    Loy, Chen Change; Xiang, Tao; Gong, Shaogang

    2012-09-01

    Activity modeling and unusual event detection in a network of cameras is challenging, particularly when the camera views do not overlap. We show that it is possible to detect unusual events in multiple disjoint cameras as context-incoherent patterns through incremental learning of the time delayed dependencies between distributed local activities observed within and across camera views. Specifically, we model multicamera activities using a Time Delayed Probabilistic Graphical Model (TD-PGM) with different nodes representing activities in different decomposed regions from different views and the directed links between nodes encoding their time delayed dependencies. To deal with visual context changes, we formulate a novel incremental learning method for modeling time delayed dependencies that change over time. We validate the effectiveness of the proposed approach using a synthetic data set and videos captured from a camera network installed at a busy underground station.
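
    The time-delayed dependencies at the heart of the model can be illustrated with plain cross-correlation between per-region activity time series; this is a simple stand-in for the TD-PGM learning, assuming equal-length activity traces sampled on a common clock.

      import numpy as np

      def time_delay(a, b, max_lag):
          """Lag (in samples) at which series b best follows series a."""
          a = (a - a.mean()) / (a.std() + 1e-9)
          b = (b - b.mean()) / (b.std() + 1e-9)
          lags = list(range(-max_lag, max_lag + 1))
          xc = [np.mean(a[max(0, -k):len(a) - max(0, k)] *
                        b[max(0, k):len(b) - max(0, -k)]) for k in lags]
          return lags[int(np.argmax(xc))]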

  2. Shooting History: An interview with Swiss artist Christoph Draeger about the re-enactment of terrorism in his video installation Black September (2002)

    Directory of Open Access Journals (Sweden)

    Sebastian Baden

    2016-02-01

    Full Text Available This contribution introduces the video installation Black September (2002) by Swiss artist Christoph Draeger and presents statements the artist gave in an interview in 2012. Draeger collects media representations of disasters in order to reconfigure their inherent sensationalism in his artworks. The video installation Black September consists of appropriated footage from a documentary movie and video sequences from a re-enactment of the historical events of September 5th, 1972, the terrorist attack during the 20th Olympic Games in Munich. The artist himself even gets involved in the play in his mimicry of a hostage-taker and terrorist, thus questioning the conditions of the mutual constitution of cultural memory and collective memory. His video installation creates a “counter image” in reaction to the “omnipresent myth of terrorism” generated by the tragedy of 9/11 and the media reports in its aftermath. Both terrorist attacks, in Munich 1972 and in New York 2001, mark a turning point in the visual dominance of terrorism. In the case of September 11th, the recurring images of the airplane attacks and the explosion and collapse of the WTC symbolize the legacy of the “terror of attention” that affects every spectator. The video questions the limits of the “disaster zone” in fictional reality and mass media. The artwork re-creates central scenes of the 1972 event. It brings the terrorist action close to the spectator through immersive images, but technically maintains a critical distance through its mode of reflection upon the catastrophe. The installation Black September stimulates and simulates history and memory simultaneously. It fills the void of a traumatic narrative and tries to recapture signs that had remained unknown.

  3. Video systems for alarm assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  4. Comparison of handheld video camera and GAITRite® measurement of gait impairment in people with early stage Parkinson's disease: a pilot study.

    Science.gov (United States)

    Beijer, Tim R; Lord, Stephen R; Brodie, Matthew A D

    2013-01-01

    In this pilot study, we investigated the validity and reliability of low-cost handheld video camera recordings for measuring gait in people with early stage Parkinson's disease (PD). Five participants with PD, Hoehn & Yahr stage I-II, mean age 66.2 years, and five healthy age-matched controls were recruited. Participants walked across a GAITRite® electronic walkway at self-selected pace while video was simultaneously recorded. Data from both systems were analyzed and compared. Step time variability, measured from handheld video recordings, revealed significant (p ≤ 0.05) differences between the gait of early stage PD participants and controls. Concurrent validity between the video analyses and GAITRite was good (ICC(2,1) ≥ 0.86) for mean step time and mean dual support duration. However, the inter-assessor reliability of the video analysis was poor for step time variability (ICC(2,1) = 0.18). More reliable measurement of step time variability may require a system to measure extended periods of walking. Further research involving longer walks and more participants with higher stages of PD is required to investigate whether step time variability can be measured with acceptable reliability using video recordings. If this could be demonstrated, this simple technology could be adapted to run on a tablet or smartphone, providing low-cost gait assessments without the need for specialized equipment and expensive infrastructure.
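
    The ICC(2,1) figures quoted here follow the standard Shrout-Fleiss definition (two-way random effects, absolute agreement, single measure), which can be computed directly from a subjects-by-raters score matrix; the numbers in the usage line are toy values.

      import numpy as np

      def icc_2_1(scores):
          """ICC(2,1) for an n-subjects x k-raters matrix (Shrout & Fleiss)."""
          X = np.asarray(scores, dtype=float)
          n, k = X.shape
          grand = X.mean()
          ms_r = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
          ms_c = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
          resid = (X - X.mean(axis=1, keepdims=True)
                     - X.mean(axis=0, keepdims=True) + grand)
          ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
          return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

      # Two raters timing five participants' steps (s), toy numbers:
      print(icc_2_1([[0.52, 0.55], [0.61, 0.60], [0.48, 0.50],
                     [0.66, 0.68], [0.58, 0.57]]))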

  5. Soft X-ray studies on MST: Measuring the effects of toroidicity on tearing mode phase and installation of a multi-energy camera

    Science.gov (United States)

    Vanmeter, Patrick; Reusch, Lisa; Franz, Paolo; Sarff, John; Goetz, John; Delgado-Aparicio, Louis; den Hartog, Daniel

    2017-10-01

    The soft X-ray tomography (SXT) system on MST uses four cameras in a double-filter configuration to measure the emitted brightness along forty distinct lines of sight. These measurements can then be inverted to determine the emissivity, which depends on physical properties such as temperature, density, and impurity content. The SXR emissivity should correspond to the structure of the magnetic field; however, there is a discrepancy between the phase of the emissivity inversions and magnetic field reconstructions when using the typical cylindrical approximation to interpret the signal from the toroidal magnetics array. This discrepancy was measured for two distinct plasma conditions using all four SXT cameras, with results supporting the interpretation that it emerges from physical effects of the toroidal geometry. In addition, a new soft x-ray measurement system based on the PILATUS3 photon counting detector will be installed on MST. Emitted photons are counted by an array of pixels with individually adjustable energy cutoffs giving the device more spectral information than the double-filter system. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences program under Award Numbers DE-FC02-05ER54814 and DE-SC0015474.

  6. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon, and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  7. Video signals integrator (VSI) system architecture

    Science.gov (United States)

    Kasprowicz, Grzegorz; Pastuszak, Grzegorz; Poźniak, Krzysztof; Trochimiuk, Maciej; Abramowski, Andrzej; Gaska, Michal; Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Pawel; Jewartowski, Blazej; Frasunek, Przemysław; Nalbach-Moszynska, Małgorzata; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata

    2016-09-01

    The purpose of the project is the development of a platform which integrates video signals from many sources. The signals can be sourced from existing analogue CCTV surveillance installations, recent internet-protocol (IP) cameras, or single cameras of any type. The system will consist of portable devices that provide conversion, encoding, transmission and archiving. The sharing subsystem will use a distributed file system and a user console which provides simultaneous access to any of the video streams in real time. The system is fully modular, so it can be extended on both the hardware and software sides. Due to the standard modular technology used, partial technology modernization is also possible during a long exploitation period.

  8. Initial evaluation of prospective cardiac triggering using photoplethysmography signals recorded with a video camera compared to pulse oximetry and electrocardiography at 7T MRI.

    Science.gov (United States)

    Spicher, Nicolai; Kukuk, Markus; Maderwald, Stefan; Ladd, Mark E

    2016-11-24

    Accurate synchronization between magnetic resonance imaging data acquisition and a subject's cardiac activity ("triggering") is essential for reducing image artifacts, but conventional, contact-based methods for this task are limited by several factors, including preparation time, patient inconvenience, and susceptibility to signal degradation. The purpose of this work is to evaluate the performance of a new contact-free triggering method developed with the aim of eventually replacing conventional methods in non-cardiac imaging applications. In this study, the method's performance is evaluated in the context of 7 Tesla non-enhanced angiography of the lower extremities. Our main contribution is a basic algorithm capable of estimating in real time the phase of the cardiac cycle from reflection photoplethysmography signals obtained from skin color variations of the forehead recorded with a video camera. Instead of finding the algorithm's parameters heuristically, they were optimized using videos of the forehead as well as electrocardiography and pulse oximetry signals that were recorded from eight healthy volunteers in and outside the scanner, with and without active radio frequency and gradient coils. Based on the video characteristics, synthetic signals were generated and the "best available" values of an objective function were determined using mathematical optimization. The performance of the proposed method with optimized algorithm parameters was evaluated by applying it to the recorded videos and comparing the computed triggers to those of contact-based methods. Additionally, the method was evaluated by using its triggers for acquiring images from a healthy volunteer and comparing the result to images obtained using pulse oximetry triggering. During evaluation of the videos recorded inside the bore with active radio frequency and gradient coils, the pulse oximeter triggers were labeled in 62.5% of cases as "potentially usable" for cardiac triggering, the electrocardiography
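
    A causal trigger generator of the kind evaluated here can be sketched as a band-pass filter on the forehead-colour trace followed by threshold crossings; the filter order, pass band, sampling rate and threshold below are assumptions for illustration, not the authors' optimized parameters.

      import numpy as np
      from scipy.signal import butter, sosfilt

      # Cardiac band 0.7-3.0 Hz (42-180 bpm) at an assumed 30 fps camera.
      sos = butter(2, [0.7, 3.0], btype="bandpass", fs=30.0, output="sos")

      def trigger_times(trace, fs=30.0):
          """Times (s) of rising threshold crossings in the filtered PPG trace."""
          y = sosfilt(sos, np.asarray(trace, dtype=float))
          thr = 0.5 * np.std(y)
          return [i / fs for i in range(1, len(y)) if y[i - 1] < thr <= y[i]]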

  9. Video Analytics for Business Intelligence

    CERN Document Server

    Porikli, Fatih; Xiang, Tao; Gong, Shaogang

    2012-01-01

    Closed Circuit TeleVision (CCTV) cameras have been increasingly deployed pervasively in public spaces including retail centres and shopping malls. Intelligent video analytics aims to automatically analyze content of massive amount of public space video data and has been one of the most active areas of computer vision research in the last two decades. Current focus of video analytics research has been largely on detecting alarm events and abnormal behaviours for public safety and security applications. However, increasingly CCTV installations have also been exploited for gathering and analyzing business intelligence information, in order to enhance marketing and operational efficiency. For example, in retail environments, surveillance cameras can be utilised to collect statistical information about shopping behaviour and preference for marketing (e.g., how many people entered a shop; how many females/males or which age groups of people showed interests to a particular product; how long did they stay in the sho...

  10. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Science.gov (United States)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

    Video Photoplethysmography (VPPG) is a numerical technique to process standard RGB video data of exposed human skin and extract the heart-rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart rate variability, with potential applications including infant monitors, remote healthcare and psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations are dependent on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired by using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial-feature tracking and detection algorithms from the MATLAB® Computer Vision Toolbox. Results indicate that VPPG-based numerical approaches have the ability to provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size and averaging techniques applied to regions of interest, and the number of video frames used for data processing.
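
    A minimal VPPG estimate of the kind both evaluated algorithms build on takes the per-frame mean of the green channel over a skin region of interest and reads off the dominant spectral peak in the cardiac band; the sketch below omits the detrending, channel combination and tracking steps that full algorithms add.

      import numpy as np

      def heart_rate_bpm(green_means, fps, lo=0.7, hi=3.0):
          """HR from a trace of per-frame mean green values of a skin ROI."""
          x = np.asarray(green_means, dtype=float)
          x = x - x.mean()
          freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
          spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
          band = (freqs >= lo) & (freqs <= hi)   # 42-180 bpm plausibility band
          return 60.0 * freqs[band][np.argmax(spec[band])]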

  11. What does video-camera framing say during the news? A look at contemporary forms of visual journalism

    Directory of Open Access Journals (Sweden)

    Juliana Freire Gutmann

    2012-12-01

    Full Text Available In order to contribute to the discussion about audiovisual processing of journalistic information, this article examines connections between the uses of video framing on the television news stage, contemporary senses, public interest and the distinction values of journalism, addressed here through the perspective of the concepts of conversation and participation. The article identifies recurring video framing techniques used by 15 Brazilian television newscasts, accounting for contemporary forms of audiovisual telejournalism, responsible for new types of spatial-temporal configurations. From a methodological perspective, this article seeks to contribute to the study of the television genre by understanding the uses of these audiovisual techniques as a strategy for newscast communicability.

  12. WHAT DOES VIDEO-CAMERA FRAMING SAY DURING THE NEWS? A LOOK AT CONTEMPORARY FORMS OF VISUAL JOURNALISM

    Directory of Open Access Journals (Sweden)

    Juliana Freire Gutmann

    2013-06-01

    Full Text Available In order to contribute to the discussion about audiovisual processing of journalistic information, this article examines connections between the uses of video framing on the television news stage, contemporary senses, public interest and the distinction values of journalism, addressed here through the perspective of the concepts of conversation and participation. The article identifies recurring video framing techniques used by 15 Brazilian television newscasts, accounting for contemporary forms of audiovisual telejournalism, responsible for new types of spatial-temporal configurations. From a methodological perspective, this article seeks to contribute to the study of the television genre by understanding the uses of these audiovisual techniques as a strategy for newscast communicability.

  13. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    Science.gov (United States)

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish the intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers, using the GAITRite walkway and a single video camera, between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.
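
    The ICC values quoted above are standard intraclass correlation coefficients. As a reference, a minimal NumPy sketch of ICC(2,1) (two-way random effects, absolute agreement, single measures), one common choice for inter-rater designs like this; the abstract does not state which ICC form the authors used.

```python
# ICC(2,1) from a subjects x raters table of joint-angle measurements.
import numpy as np

def icc_2_1(scores):
    """scores: (n_subjects, k_raters) array."""
    n, k = scores.shape
    grand = scores.mean()
    ssr = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ssc = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between raters
    sst = ((scores - grand) ** 2).sum()
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = (sst - ssr - ssc) / ((n - 1) * (k - 1))          # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```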

  14. Potential of video cameras in assessing event and seasonal coastline behaviour: Grand Popo, Benin (Gulf of Guinea)

    NARCIS (Netherlands)

    Abessolo Ondoa, G.; Almar, R.; Kestenare, E.; Bahini, A.; Houngue, G-H.; Jouanno, J; Du Penhoat, Y.; Castelle, B.; Melet, A.; Meyssignac, B.; Anthony, E.J.; Laibi, R.; Alory, G.; Ranasinghe, Ranasinghe W M R J B

    2016-01-01

    In this study, we explore the potential of a nearshore video system to obtain a long-term estimation of coastal variables (shoreline, beach slope, sea level elevation and wave forcing) at Grand Popo beach, Benin, West Africa, from March 2013 to February 2015. We first present a validation of the

  15. The Effect of Smartphone Video Camera as a Tool to Create Digital Stories for English Learning Purposes

    Science.gov (United States)

    Gromik, Nicolas A.

    2015-01-01

    The integration of smartphones in the language learning environment is gaining research interest. However, using a smartphone to learn to speak spontaneously has received little attention. The emergence of smartphone technology and its video recording feature are recognised as suitable learning tools. This paper reports on a case study conducted…

  16. What Does the Camera Communicate? An Inquiry into the Politics and Possibilities of Video Research on Learning

    Science.gov (United States)

    Vossoughi, Shirin; Escudé, Meg

    2016-01-01

    This piece explores the politics and possibilities of video research on learning in educational settings. The authors (a research-practice team) argue that changing the stance of inquiry from "surveillance" to "relationship" is an ongoing and contingent practice that involves pedagogical, political, and ethical choices on the…

  17. Use of an unmanned aerial vehicle-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Science.gov (United States)

    We determined the feasibility of using unmanned aerial vehicle (UAV) video monitoring to predict intake of discrete food items of rangeland-raised Raramuri Criollo non-nursing beef cows. Thirty-five cows were released into a 405-m² rectangular dry lot, either in pairs (pilot tests) or individually (...

  18. Risk analysis of a video-surveillance system

    NARCIS (Netherlands)

    Rothkrantz, L.; Lefter, I.

    2011-01-01

    The paper describes a surveillance system of cameras installed on lampposts of a military area. The surveillance system has been designed to detect unwanted visitors or suspicious behaviours. The area is composed of streets and building blocks, and is surrounded by gates and water. The video recordings are

  19. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510
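
    As an illustration of the general approach (not the authors' architecture, which the abstract does not specify), a minimal two-stream CNN sketch in PyTorch that encodes visible-light and thermal body crops separately and classifies the concatenated features; 3-channel inputs are assumed for both streams.

```python
# Two-stream CNN sketch for male/female classification from body images.
import torch
import torch.nn as nn

def encoder():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
    )

class GenderNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.visible = encoder()              # visible-light stream
        self.thermal = encoder()              # thermal stream
        self.head = nn.Linear(64, 2)          # two classes: male / female

    def forward(self, vis, thr):
        feats = torch.cat([self.visible(vis), self.thermal(thr)], dim=1)
        return self.head(feats)

# logits = GenderNet()(vis_batch, thr_batch)   # each (N, 3, H, W)
```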

  20. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  1. Estimation of skeletal movement of human locomotion from body surface shapes using dynamic spatial video camera (DSVC) and 4D human model.

    Science.gov (United States)

    Saito, Toshikuni; Suzuki, Naoki; Hattori, Asaki; Suzuki, Shigeyuki; Hayashibe, Mitsuhiro; Otake, Yoshito

    2006-01-01

    We have been developing a DSVC (Dynamic Spatial Video Camera) system to measure and observe human locomotion quantitatively and freely. A 4D (four-dimensional) human model with detailed skeletal structure, joints, muscles, and motor functionality has been built. The purpose of our research was to estimate skeletal movements from body surface shapes using the DSVC and the 4D human model. For this purpose, we constructed a body surface model of a subject and resized the standard 4D human model to match the geometrical features of the subject's body surface model. Software that integrates the DSVC system and the 4D human model and allows dynamic skeletal state analysis from body surface movement data was also developed. We applied the developed system to dynamic skeletal state analysis of a lower limb in motion and were able to visualize the motion using the geometrically resized standard 4D human model.

  2. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  3. Video Head Impulse Tests with a Remote Camera System: Normative Values of Semicircular Canal Vestibulo-Ocular Reflex Gain in Infants and Children

    Directory of Open Access Journals (Sweden)

    Sylvette R. Wiener-Vacher

    2017-09-01

    Full Text Available The video head impulse test (VHIT) is widely used to identify semicircular canal function impairments in adults. But classical VHIT testing systems attach goggles tightly to the head, which is not tolerated by infants. Remote video detection of head and eye movements resolves this issue and, here, we report VHIT protocols and normative values for children. Vestibulo-ocular reflex (VOR) gain was measured for all canals of 303 healthy subjects, including 274 children (aged 2.6 months–15 years) and 26 adults (aged 16–67). We used the Synapsys® (Marseilles, France) VHIT Ulmer system, whose remote camera measures head and eye movements. HITs were performed at high velocities. Testing typically lasts 5–10 min. In infants as young as 3 months old, VHIT yielded good inter-measure replicability. VOR gain increases rapidly until about the age of 6 years (with variation among canals), then progresses more slowly to reach adult values by the age of 16. Values are more variable among very young children and for the vertical canals, but showed no difference for right versus left head rotations. Normative values of VOR gain are presented to help detect vestibular impairment in patients. VHIT testing prior to cochlear implants could help prevent total vestibular loss and the resulting grave impairments of motor and cognitive development in patients with residual unilateral vestibular function.
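
    The VOR gain reported above is, in essence, the ratio of eye velocity to head velocity during the impulse. A common way to estimate it, sketched below, is the ratio of the areas under the two velocity curves over the impulse window; this is a generic formulation, not the Synapsys system's proprietary computation.

```python
# VOR gain as a ratio of areas under eye and head velocity curves.
import numpy as np

def vor_gain(eye_vel, head_vel):
    """eye_vel, head_vel: deg/s samples over the head-impulse window,
    taken at the same sampling rate."""
    return np.trapz(np.abs(eye_vel)) / np.trapz(np.abs(head_vel))
```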

  4. Body worn camera

    Science.gov (United States)

    Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.

    2017-11-01

    A body worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence from crime scenes. It helps prevent and resolve complaints brought by members of the public, and strengthens police transparency, performance, and accountability. The main parameters of this type of system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format at 1080p resolution and 30 frames per second. One more important aspect to consider while designing this system is the amount of power it requires, as battery management becomes very critical. The main design challenges are the size of the video, audio for the video, combining audio and video and saving them in .mp4 format, the battery size required for 8 hours of continuous recording, and security. For prototyping, this system is implemented using a Raspberry Pi Model B.
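
    A minimal sketch of such a capture loop on a Raspberry Pi, assuming the `picamera` library (the paper's own software is not described). Note that `picamera` writes raw H.264, so wrapping the stream into the .mp4 container mentioned above would be a separate step (e.g., with ffmpeg); the output name is a placeholder.

```python
# Sketch: record 1080p/30fps H.264 video on a Raspberry Pi camera module.
import picamera

with picamera.PiCamera(resolution=(1920, 1080), framerate=30) as camera:
    camera.start_recording("evidence.h264")   # hypothetical output name
    camera.wait_recording(60 * 60)            # record for one hour
    camera.stop_recording()
```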

  5. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

    The process of digitally capturing, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more reasonable to purchase, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk, and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques of backing up and archiving the completed projects and files are also outlined. The uses of digital video for education, and the formats that can be used in PowerPoint presentations, are discussed.

  6. Design of IP Camera Access Control Protocol by Utilizing Hierarchical Group Key

    Directory of Open Access Journals (Sweden)

    Jungho Kang

    2015-08-01

    Full Text Available Unlike CCTV, the security video surveillance devices we have generally known about, IP cameras are connected to a network either with or without wires and provide monitoring services through a built-in web server. Because IP cameras can use a network such as the Internet, multiple IP cameras can be installed at long distances and each IP camera can use its web server function individually. Even though IP cameras have this kind of advantage, they present difficulties in access control management and weaknesses in user certification. In particular, because the IP camera market matured only recently, systems designed from the perspective of security have not yet been built up. Additionally, there are severe weaknesses in terms of access authority to the IP camera web server, certification of users, and certification of IP cameras that are newly installed within a network. This research grouped IP cameras hierarchically to manage them systematically, and provided access control and data confidentiality between groups by utilizing group keys. In addition, IP cameras and users are certified by using PKI-based certification, and weak points of security such as confidentiality and integrity are improved by encrypting passwords. Thus, this research presents specific protocols of the entire process and proves through experiments that this method can actually be applied.
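
    The hierarchical group-key idea can be pictured as a key-derivation tree: each subgroup key is derived from its parent's key, so a holder of a higher-level key can derive every key beneath it but none above it. The sketch below uses HMAC-SHA256 as the derivation function; the labels and master key are placeholders, and this illustrates the idea rather than the paper's exact protocol.

```python
# Hierarchical group-key derivation sketch using HMAC-SHA256.
import hmac
import hashlib

def derive_key(parent_key: bytes, group_label: bytes) -> bytes:
    """Derive a child group key from its parent's key and a group label."""
    return hmac.new(parent_key, b"group-key|" + group_label,
                    hashlib.sha256).digest()

root = b"\x00" * 32                            # placeholder master key
building_a = derive_key(root, b"building-A")   # hypothetical group names
floor_2 = derive_key(building_a, b"floor-2")
camera_17 = derive_key(floor_2, b"camera-17")
# Anyone holding building_a can derive floor_2 and camera_17, but cannot
# recover root, giving top-down access control between camera groups.
```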

  7. Traffic camera system development

    Science.gov (United States)

    Hori, Toshi

    1997-04-01

    The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive scan interline transfer CCD camera, with its high speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of light. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization is implemented to control cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec. to capture highway traffic both day and night. Consequently, camera gain, pedestal level, shutter speed and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, storms, etc. These camera systems are being deployed successfully in major ETC projects throughout the world.
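
    The look-up-table control described above can be pictured as a simple mapping from measured illuminance to camera presets; all breakpoints and values in the sketch below are invented for illustration.

```python
# Illustrative lighting look-up table for a roadside camera controller.
LUT = [
    # (min_lux, shutter_s, gain_db, pedestal)
    (10_000, 1 / 8000, 0, 16),    # bright sun
    (1_000,  1 / 4000, 6, 24),    # overcast / twilight
    (0,      1 / 2000, 12, 32),   # night (still >= 1/2000 s per the text)
]

def camera_settings(lux: float) -> dict:
    """Pick shutter, gain and pedestal presets from measured illuminance."""
    for min_lux, shutter, gain, pedestal in LUT:
        if lux >= min_lux:
            return {"shutter": shutter, "gain_db": gain, "pedestal": pedestal}

# camera_settings(50_000) -> bright-sun preset; camera_settings(5) -> night
```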

  8. Developing participatory research in radiology: the use of a graffiti wall, cameras and a video box in a Scottish radiology department.

    Science.gov (United States)

    Mathers, Sandra A; Anderson, Helen; McDonald, Sheila; Chesson, Rosemary A

    2010-03-01

    Participatory research is increasingly advocated for use in health and health services research and has been defined as a 'process of producing new knowledge by systematic enquiry, with the collaboration of those being studied'. The underlying philosophy of participatory research is that those recruited to studies are acknowledged as experts who are 'empowered to truly participate and have their voices heard'. Research methods should enable children to express themselves. This has led to the development of creative approaches of working with children that offer alternatives to, for instance, the structured questioning of children by researchers either through questionnaires or interviews. To examine the feasibility and potential of developing participatory methods in imaging research. We employed three innovative methods of data collection sequentially, namely the provision of: 1) a graffiti wall; 2) cameras, and 3) a video box for children's use. While the graffiti wall was open to all who attended the department, for the other two methods children were allocated to each 'arm' consecutively until our target of 20 children for each was met. The study demonstrated that it was feasible to use all three methods of data collection within the context of a busy radiology department. We encountered no complaints from staff, patients or parents. Children were willing to participate but we did not collect data to establish if they enjoyed the activities, were pleased to have the opportunity to make comments or whether anxieties about their treatment inhibited their participation. The data yield was disappointing. In particular, children's contributions to the graffiti wall were limited, but did reflect the nature of graffiti, and there may have been some 'copycat' comments. Although data analysis was relatively straightforward, given the nature of the data (short comments and simple drawings), the process proved to be extremely time-consuming. This was despite the modest

  9. Upgrades to NDSF Vehicle Camera Systems and Development of a Prototype System for Migrating and Archiving Video Data in the National Deep Submergence Facility Archives at WHOI

    Science.gov (United States)

    Fornari, D.; Howland, J.; Lerner, S.; Gegg, S.; Walden, B.; Bowen, A.; Lamont, M.; Kelley, D.

    2003-12-01

    In recent years, considerable effort has been made to improve the visual recording capabilities of Alvin and ROV Jason. This has culminated in the routine use of digital cameras, both internal and external on these vehicles, which has greatly expanded the scientific recording capabilities of the NDSF. The UNOLS National Deep Submergence Facility (NDSF) archives maintained at Woods Hole Oceanographic Institution (WHOI) are the repository for the diverse suite of photographic still images (both 35mm and recently digital), video imagery, vehicle data and navigation, and near-bottom side-looking sonar data obtained by the facility vehicles. These data comprise a unique set of information from a wide range of seafloor environments over the more than 25 years of NDSF operations in support of science. Included in the holdings are Alvin data plus data from the tethered vehicles- ROV Jason, Argo II, and the DSL-120 side scan sonar. This information conservatively represents an outlay in facilities and science costs well in excess of $100 million. Several archive related improvement issues have become evident over the past few years. The most critical are: 1. migration and better access to the 35mm Alvin and Jason still images through digitization and proper cataloging with relevant meta-data, 2. assessing Alvin data logger data, migrating data on older media no longer in common use, and properly labeling and evaluating vehicle attitude and navigation data, 3. migrating older Alvin and Jason video data, especially data recorded on Hi-8 tape that is very susceptible to degradation on each replay, to newer digital format media such as DVD, 4. improving the capabilities of the NDSF archives to better serve the increasingly complex needs of the oceanographic community, including researchers involved in focused programs like Ridge2000 and MARGINS, where viable distributed databases in various disciplinary topics will form an important component of the data management structure

  10. Networked telepresence system using web browsers and omni-directional video streams

    Science.gov (United States)

    Ishikawa, Tomoya; Yamazawa, Kazumasa; Sato, Tomokazu; Ikeda, Sei; Nakamura, Yutaka; Fujikawa, Kazutoshi; Sunahara, Hideki; Yokoya, Naokazu

    2005-03-01

    In this paper, we describe a new telepresence system which enables a user to look around a virtualized real world easily in network environments. The proposed system includes omni-directional video viewers on web browsers and allows the user to look around the omni-directional video contents on the web browsers. The omni-directional video viewer is implemented as an ActiveX program so that the user can install the viewer automatically simply by opening the web site which contains the omni-directional video contents. The system allows many users at different sites to look around the scene, just like an interactive TV, using a multi-cast protocol without increasing the network traffic. This paper describes the implemented system and the experiments using live and stored video streams. In the experiment with stored video streams, the system uses an omni-directional multi-camera system for video capturing, and we can look around high resolution and high quality video contents. In the experiment with live video streams, a car-mounted omni-directional camera acquires omni-directional video streams surrounding the car while running in an outdoor environment. The acquired video streams are transferred to the remote site through the wireless and wired network using a multi-cast protocol, and we can see the live video contents freely in arbitrary directions. In both experiments, we implemented a view-dependent presentation with a head-mounted display (HMD) and a gyro sensor for realizing richer presence.

  11. Developing participatory research in radiology: the use of a graffiti wall, cameras and a video box in a Scottish radiology department

    Energy Technology Data Exchange (ETDEWEB)

    Mathers, Sandra A. [Aberdeen Royal Infirmary, Department of Radiology, Aberdeen (United Kingdom); The Robert Gordon University, Faculty of Health and Social Care, Aberdeen (United Kingdom); Anderson, Helen [Royal Aberdeen Children's Hospital, Department of Radiology, Aberdeen (United Kingdom); McDonald, Sheila [Royal Aberdeen Children's Hospital, Aberdeen (United Kingdom); Chesson, Rosemary A. [University of Aberdeen, School of Medicine and Dentistry, Aberdeen (United Kingdom)

    2010-03-15

    Participatory research is increasingly advocated for use in health and health services research and has been defined as a 'process of producing new knowledge by systematic enquiry, with the collaboration of those being studied'. The underlying philosophy of participatory research is that those recruited to studies are acknowledged as experts who are 'empowered to truly participate and have their voices heard'. Research methods should enable children to express themselves. This has led to the development of creative approaches of working with children that offer alternatives to, for instance, the structured questioning of children by researchers either through questionnaires or interviews. To examine the feasibility and potential of developing participatory methods in imaging research. We employed three innovative methods of data collection sequentially, namely the provision of: 1) a graffiti wall; 2) cameras, and 3) a video box for children's use. While the graffiti wall was open to all who attended the department, for the other two methods children were allocated to each 'arm' consecutively until our target of 20 children for each was met. The study demonstrated that it was feasible to use all three methods of data collection within the context of a busy radiology department. We encountered no complaints from staff, patients or parents. Children were willing to participate but we did not collect data to establish if they enjoyed the activities, were pleased to have the opportunity to make comments or whether anxieties about their treatment inhibited their participation. The data yield was disappointing. In particular, children's contributions to the graffiti wall were limited, but did reflect the nature of graffiti, and there may have been some 'copycat' comments. Although data analysis was relatively straightforward, given the nature of the data (short comments and simple drawings), the process proved to be

  12. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edi

  13. Technical assessment of Navitar Zoom 6000 optic and Sony HDC-X310 camera for MEMS presentations and training.

    Energy Technology Data Exchange (ETDEWEB)

    Diegert, Carl F.

    2006-02-01

    This report evaluates a newly-available, high-definition, video camera coupled with a zoom optical system for microscopic imaging of micro-electro-mechanical systems. We did this work to support configuration of three document-camera-like stations as part of an installation in a new Microsystems building at Sandia National Laboratories. The video display walls to be installed as part of these three presentation and training stations are of extraordinary resolution and quality. The new availability of a reasonably-priced, cinema-quality, high-definition video camera offers the prospect of filling these displays with full-motion imaging of Sandia's microscopic products at a quality substantially beyond the quality of typical video microscopes. Simple and robust operation of the microscope stations will allow the extraordinary-quality imaging to contribute to Sandia's day-to-day research and training operations. This report illustrates the disappointing image quality from a camera/lens system comprised of a Sony HDC-X310 high-definition video camera coupled to a Navitar Zoom 6000 lens. We determined that this Sony camera is capable of substantially more image quality than the Navitar optic can deliver. We identified an optical doubler lens from Navitar as the component of their optical system that accounts for a substantial part of the image quality problem. While work continues to incrementally improve performance of the Navitar system, we are also evaluating optical systems from other vendors to couple to this Sony camera.

  14. Tracing Sequential Video Production

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Khalid, Md. Saifuddin

    2015-01-01

    With an interest in learning that is set in collaborative situations, the data session presents excerpts from video data produced by two of fifteen students from a 5th semester techno-anthropology course. Students used video cameras to capture the time they spent working with a scientist… video, nature of the interactional space, and material and spatial semiotics.

  15. Make a Pinhole Camera

    Science.gov (United States)

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  16. Fazendo 3d com uma camera so

    CERN Document Server

    Lunazzi, J J

    2010-01-01

    A simple system to make stereo photography or videos, based on just two mirrors that split the image field, was made in 1989 and recently adapted to a digital camera setup.

  17. Teaching residents pediatric fiberoptic intubation of the trachea: traditional fiberscope with an eyepiece versus a video-assisted technique using a fiberscope with an integrated camera.

    Science.gov (United States)

    Wheeler, Melissa; Roth, Andrew G; Dsida, Richard M; Rae, Bronwyn; Seshadri, Roopa; Sullivan, Christine L; Heffner, Corri L; Coté, Charles J

    2004-10-01

    The authors' hypothesis was that a video-assisted technique should speed resident skill acquisition for flexible fiberoptic oral tracheal intubation (FI) of pediatric patients because the attending anesthesiologist can provide targeted instruction when sharing the view of the airway as the resident attempts intubation. Twenty Clinical Anesthesia year 2 residents, novices in pediatric FI, were randomly assigned to either the traditional group (traditional eyepiece FI) or the video group (video-assisted FI). One of two attending anesthesiologists supervised each resident during FI of 15 healthy children, aged 1-6 yr. The time from mask removal to confirmation of endotracheal tube placement by end-tidal carbon dioxide detection was recorded. Intubation attempts were limited to 3 min; up to three attempts were allowed. The primary outcome measure, time to success or failure, was compared between groups. Failure rate and number of attempts were also compared between groups. Three hundred patient intubations were attempted; eight failed. On average, the residents in the video group were faster, were three times more likely to successfully intubate at any given time during an attempt, and required fewer attempts per patient compared to those in the traditional group. The video system seems to be superior for teaching residents fiberoptic intubation in children.

  18. SEFIS Video Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is a fishery-independent survey that collects data on reef fish in southeast US waters using multiple gears, including chevron traps, video cameras, ROVs,...

  19. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    Science.gov (United States)

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
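
    In the time-lapse role, the control script reduces to a capture loop with timestamped filenames. A minimal sketch, assuming the `picamera` library and an invented interval and output path (the USGS scripts themselves are not reproduced here); in the real system the clock is disciplined by the GPS unit.

```python
# Time-lapse capture sketch for a Raspberry Pi camera system.
import time
import picamera

INTERVAL_S = 300                      # one frame every 5 minutes (assumed)

with picamera.PiCamera(resolution=(2592, 1944)) as camera:   # 5 MP sensor
    while True:
        stamp = time.strftime("%Y%m%d-%H%M%S", time.gmtime())
        camera.capture(f"/data/kilauea_{stamp}.jpg")          # placeholder path
        time.sleep(INTERVAL_S)
```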

  20. Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration

    Science.gov (United States)

    Ajiboye, Sola O.; Birch, Philip; Chatwin, Christopher; Young, Rupert

    2015-03-01

    There is increasing reliance on video surveillance systems for systematic derivation, analysis and interpretation of the data needed for predicting, planning, evaluating and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013, the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also reveal that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems `expose' relevant video-generated metadata events, such as triggered alerts and also permit query of a metadata repository. Subsequently, a police officer, for example, with an appropriate level of system permission can query unified video systems across a large geographical area such as a city or a country to predict the location of an interesting entity, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises of a hardware framework that is supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, which is integration-enabled to communicate with other compatible systems in the Internet of Things (IoT).

  1. Coordinated Sensing in Intelligent Camera Networks

    OpenAIRE

    Ding, Chong

    2013-01-01

    The cost and size of video sensors has led to camera networks becoming pervasive in our lives. However, the ability to analyze these images efficiently is very much a function of the quality of the acquired images. Human control of pan-tilt-zoom (PTZ) cameras is impractical and unreliable when high quality images are needed of multiple events distributed over a large area. This dissertation considers the problem of automatically controlling the fields of view of individual cameras in a camera...

  2. Dynamic Camera Positioning and Reconfiguration for Multi-Camera Networks

    OpenAIRE

    Konda, Krishna Reddy

    2015-01-01

    The large availability of different types of cameras and lenses, together with the reduction in price of video sensors, has contributed to a widespread use of video surveillance systems, which have become a widely adopted tool to enforce security and safety, in detecting and preventing crimes and dangerous events. The possibility for personalization of such systems is generally very high, letting the user customize the sensing infrastructure, and deploying ad-hoc solutions based on the curren...

  3. An Innovative Streaming Video System With a Point-of-View Head Camera Transmission of Surgeries to Smartphones and Tablets: An Educational Utility.

    Science.gov (United States)

    Chaves, Rafael Oliveira; de Oliveira, Pedro Armando Valente; Rocha, Luciano Chaves; David, Joacy Pedro Franco; Ferreira, Sanmari Costa; Santos, Alex de Assis Santos Dos; Melo, Rômulo Müller Dos Santos; Yasojima, Edson Yuzur; Brito, Marcus Vinicius Henriques

    2017-10-01

    In order to engage medical students and residents from public health centers in utilizing the telemedicine features of surgery on their own smartphones and tablets as an educational tool, an innovative streaming system was developed with the purpose of streaming live footage from open surgeries to smartphones and tablets, allowing the visualization of the surgical field from the surgeon's perspective. The current study aims to describe the results of an evaluation at level 1 of Kirkpatrick's Model for Evaluation of the streaming system's usage during gynecological surgeries, based on the perception of medical students and gynecology residents. The system consisted of live video streaming (from the surgeon's point of view) of gynecological surgeries to smartphones and tablets, one for each volunteer. The volunteers were able to connect to the local wireless network, created by the streaming system, through an access password and watch the video transmission in a web browser on their smartphones. Then, they answered a Likert-type questionnaire containing 14 items about the educational applicability of the streaming system, as well as comparing it to watching an in loco procedure. This study is formally approved by the local ethics commission (Certificate No. 53175915.7.0000.5171/2016). Twenty-one volunteers participated, totaling 294 answered items, of which 94.2% were in agreement with the item statements, 4.1% were neutral, and only 1.7% of answers corresponded to negative impressions. Cronbach's α was .82, which represents a good reliability level. Spearman's coefficients were highly significant in 4 comparisons and moderately significant in the other 20 comparisons. This study presents a local streaming video system of live surgeries to smartphones and tablets and shows its educational utility, low cost, and simple usage, which offers convenience and satisfactory image resolution, thus being potentially applicable in surgical teaching.
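
    The Cronbach's α of .82 quoted above follows the standard formula α = k/(k−1) · (1 − Σ var(item)/var(total)); a minimal sketch for a respondents × items matrix of Likert scores, shown only to make the statistic concrete.

```python
# Cronbach's alpha for a (n_respondents, k_items) matrix of Likert scores.
import numpy as np

def cronbach_alpha(scores):
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```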

  4. Representing videos in tangible products

    Science.gov (United States)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, as well as smartphones, and more and more with so-called action cameras mounted on sports devices. The implementation of videos, with QR codes and representative pictures generated out of the video stream via software, was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted out of the video in order to represent it, the positions in the book, and different design strategies compared to regular books.

  5. System of video observation for electron beam welding process

    Science.gov (United States)

    Laptenok, V. D.; Seregin, Y. N.; Bocharov, A. N.; Murygin, A. V.; Tynchenko, V. S.

    2016-04-01

    Equipment for a video observation system for the electron beam welding process was developed. The construction of the video observation system reduces negative effects on the video camera during electron beam welding and yields high-quality images of the process.

  6. Installation Art

    DEFF Research Database (Denmark)

    Petersen, Anne Ring

    … In Installation Art: Between Image and Stage, Anne Ring Petersen aims to change that. She begins by exploring how installation art developed into an interdisciplinary genre in the 1960s, and how its intertwining of the visual and the performative has acted as a catalyst for the generation of new artistic… trajectory of the book is directed by a movement aimed at addressing a series of basic questions that get at the heart of what installation art is and how it is defined: How does installation structure time, space and representation? How does it address and engage its viewers? And how does it draw

  7. Using the OOI Cabled Array HD Camera to Explore Geophysical and Oceanographic Problems at Axial Seamount

    Science.gov (United States)

    Crone, T. J.; Knuth, F.; Marburg, A.

    2016-12-01

    A broad array of Earth science problems can be investigated using high-definition video imagery from the seafloor, ranging from those that are geological and geophysical in nature, to those that are biological and water-column related. A high-definition video camera was installed as part of the Ocean Observatory Initiative's core instrument suite on the Cabled Array, a real-time fiber optic data and power system that stretches from the Oregon Coast to Axial Seamount on the Juan de Fuca Ridge. This camera runs a 14-minute pan-tilt-zoom routine 8 times per day, focusing on locations of scientific interest on and near the Mushroom vent in the ASHES hydrothermal field inside the Axial caldera. The system produces 13 GB of lossless HD video every 3 hours, and at the time of this writing it has generated 2100 recordings totaling 28.5 TB since it began streaming data into the OOI archive in August of 2015. Because of the large size of this dataset, downloading the entirety of the video for long timescale investigations is not practical. We are developing a set of user-side tools for downloading single frames and frame ranges from the OOI HD camera raw data archive to aid users interested in using these data for their research. We use these tools to download about one year's worth of partial frame sets to investigate several questions regarding the hydrothermal system at ASHES, including the variability of bacterial "floc" in the water-column, and changes in high temperature fluid fluxes using optical flow techniques. We show that while these user-side tools can facilitate rudimentary scientific investigations using the HD camera data, a server-side computing environment that allows users to explore this dataset without downloading any raw video will be required for more advanced investigations to flourish.
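
    As a sketch of the optical-flow step mentioned above, OpenCV's Farneback implementation applied to two consecutive frames pulled from the archive; the file names are placeholders and this is a generic technique, not the authors' exact processing.

```python
# Dense optical flow between two archived camera frames (Farneback method).
import cv2

prev = cv2.imread("ashes_frame_0001.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("ashes_frame_0002.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)
# flow[..., 0] and flow[..., 1] hold per-pixel x and y displacements; their
# magnitude over a vent ROI gives a proxy for plume motion between frames.
```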

  8. High Speed Digital Camera Technology Review

    Science.gov (United States)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  9. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  10. Cameras Monitor Spacecraft Integrity to Prevent Failures

    Science.gov (United States)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained while working with NASA to develop an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  11. Feasibility of Radon projection acquisition for compressive imaging in MMW region based new video rate 16×16 GDD FPA camera

    Science.gov (United States)

    Levanon, Assaf; Konstantinovsky, Michael; Kopeika, Natan S.; Yitzhaky, Yitzhak; Stern, A.; Turak, Svetlana; Abramovich, Amir

    2015-05-01

    In this article we present preliminary results for the combination of two interesting fields of the last few years: 1) Compressed imaging (CI), a joint sensing and compressing process that attempts to exploit the large redundancy in typical images in order to capture fewer samples than usual. 2) Millimeter Wave (MMW) imaging. MMW-based imaging systems are required for a large variety of applications in many growing fields such as medical treatments, homeland security, concealed weapon detection, and space technology. Moreover, the possibility of reliable imaging in low-visibility conditions such as heavy cloud, smoke, fog and sandstorms in the MMW region generates high interest from military groups. The lack of inexpensive room temperature imaging sensors makes it difficult to provide a suitable MMW system for many of the above applications. A system based on Glow Discharge Detector (GDD) Focal Plane Arrays (FPA) can be very efficient in real time imaging with significant results. The GDD is located in free space and it can detect MMW radiation almost isotropically. In this article, we present a new approach to reconstructing MMW images by rotation scanning of the target. The collection process here, based on Radon projections, allows implementation of compressive sensing principles in the MMW region. Feasibility of the concept was demonstrated with Radon line-imaging results. MMW imaging results with our recent sensor are also presented for the first time. The multiplexing frame rate of the 16×16 GDD FPA permits real time video rate imaging of 30 frames per second and comprehensive 3D MMW imaging. It uses commercial 3-mm-diameter Ne indicator GDD lamps as pixel detectors. Combining these two fields should bring significant improvement to MMW-region imaging research and open various new possibilities in compressive sensing techniques.
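
    The rotation-scan acquisition maps naturally onto the classical Radon transform. A sketch using scikit-image on a standard test image (the GDD hardware is obviously not modeled): project the scene at a sparse set of rotation angles, then reconstruct by filtered back-projection.

```python
# Radon projection / reconstruction sketch with scikit-image.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

scene = shepp_logan_phantom()
angles = np.linspace(0.0, 180.0, 24, endpoint=False)   # sparse rotation scan
sinogram = radon(scene, theta=angles)                  # one projection per angle
recon = iradon(sinogram, theta=angles,
               filter_name="ramp")   # filter_name needs a recent scikit-image
```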

  12. Security Camera System can be access into mobile with internet from remote place

    Directory of Open Access Journals (Sweden)

    Dr. Khanna SamratVivekanand Omprakash

    2012-01-01

    Full Text Available This paper describes how a camera can capture images and video into a database and how they can then be transferred to a mobile device with the help of the Internet. It covers developing mobile applications so that the data can be viewed on a mobile device from a remote place, by assigning a real IP to the storage device from the ISP and connecting it to the Internet. Mobile applications were developed for Windows Mobile and run only on Windows Mobile devices. Wireless cameras, in groups of 4, 8, 12 or 16, are connected to the system, and a Windows-based application was developed to view 4, 8, 12 or 16 channels at a time on a desktop computer. The PC is connected to the Internet and runs a client-server application connected to a Windows web hosting server through the Internet. With the help of the ISP server we can assign an IP to the Windows web server with a domain name, which can be accessed from anywhere in the world, and by developing web-based mobile applications we can access it on a mobile device. A separate Windows .exe setup was developed for Windows Mobile phones to access the information from the server: the client setup is installed on the mobile device and fetches the data from the server, which has a real IP with a domain name and is connected to the Internet. The digital wireless cameras are connected, and data are stored in a digital video recorder having 1 terabyte of hard disk with different channel counts of 4, 8, 12 or 16. Video output can be seen on a mobile device by installing the client setup or by accessing it directly from a web browser that supports the application for mobile. The beauty of this software is that the security camera system can be accessed on a mobile device with the Internet from a remote place.

  13. Installing Omeka

    Directory of Open Access Journals (Sweden)

    Jonathan Reeve

    2016-07-01

    Full Text Available Omeka.net is a useful service for Omeka beginners, but there are a few reasons why you might want to install your own copy of Omeka. Reasons include:
    * Upgrades. By installing Omeka yourself, you can use the latest versions of Omeka as soon as they're released, without having to wait for Omeka.net to upgrade their system.
    * Plugins and themes. You can install any plugin or theme you want, without being restricted to those provided by Omeka.net.
    * Customizations. You can buy a custom domain name, and customize your code to achieve your desired functionality.
    * Control. You have control over your own backups, and you can update the server yourself so that its security is always up-to-date.
    * Price. There are many low-cost Virtual Private Servers (VPSs) now, some of which cost only $5 per month.
    * Storage. Many shared hosting providers now offer unlimited storage. This is useful if you have a large media library.
    In this tutorial, we'll be entering a few commands on the command line. This tutorial assumes no prior knowledge of the command line, but if you want a concise primer, consult the Programming Historian introduction to BASH. There are other ways of installing Omeka, of course, some using exclusively GUI tools. Some hosting providers even offer "one-click installs" via their control panels. Many of those methods, however, will install older versions of Omeka which are then harder to upgrade and maintain. The method outlined below may not be the easiest way to install Omeka, but it will give you some good practice with using the command line, which is a skill that will be useful if you want to manually upgrade your install, or manually install other web frameworks. (For example, this installation method is very similar to WordPress's "Five-Minute Install".) There are four steps to this process, and it should take about an hour.

  14. The NEAT Camera Project

    Science.gov (United States)

    Newburn, Ray L., Jr.

    1995-01-01

    The NEAT (Near Earth Asteroid Tracking) camera system consists of a camera head with a 6.3 cm square 4096 x 4096 pixel CCD, fast electronics, and a Sun Sparc 20 data and control computer with dual CPUs, 256 Mbytes of memory, and 36 Gbytes of hard disk. The system was designed for optimum use with an Air Force GEODSS (Ground-based Electro-Optical Deep Space Surveillance) telescope. The GEODSS telescopes have 1 m f/2.15 objectives of the Ritchey-Chretian type, designed originally for satellite tracking. Installation of NEAT began July 25 at the Air Force Facility on Haleakala, a 3000 m peak on Maui in Hawaii.

  15. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for the evaluation of the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This can be done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region - SP - Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car), but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network can observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparisons of events between cameras and the LLS. The RAMMER sensor is basically composed of a computer, a Phantom high-speed camera version 9.1 and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network, during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result from the visual triangulation method. Lightning return stroke positions, estimated with the visual triangulation method, were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
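
    In two dimensions, the visual triangulation reduces to intersecting bearing lines from the known camera positions. A minimal sketch with placeholder coordinates; the azimuths here stand in for what an analyst reads off each camera's calibrated field of view.

```python
# Least-squares intersection of two bearing lines from two cameras.
import numpy as np

def triangulate(p1, az1, p2, az2):
    """p: camera position (x, y) in km; az: bearing to the flash in radians,
    measured clockwise from north (north = +y)."""
    d1 = np.array([np.sin(az1), np.cos(az1)])   # unit direction from camera 1
    d2 = np.array([np.sin(az2), np.cos(az2)])   # unit direction from camera 2
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2) - np.asarray(p1))
    return np.asarray(p1) + t[0] * d1

# triangulate((0, 0), np.radians(45), (13, 0), np.radians(315))
# -> array([6.5, 6.5]), i.e. the flash midway between the two cameras
```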

  16. Infrared Camera

    Science.gov (United States)

    1997-01-01

    A sensitive infrared camera that observes the blazing plumes from the Space Shuttle or expendable rocket lift-offs is capable of scanning for fires, monitoring the environment and providing medical imaging. The hand-held camera uses highly sensitive arrays of infrared photodetectors known as quantum well infrared photodetectors (QWIPs). QWIPs were developed by the Jet Propulsion Laboratory's Center for Space Microelectronics Technology in partnership with Amber, a Raytheon company. In October 1996, QWIP detectors pointed out hot spots of the destructive fires speeding through Malibu, California. Night vision, early warning systems, navigation, flight control systems, weather monitoring, security and surveillance are among the duties for which the camera is suited. Medical applications are also expected.

  17. People detection in nuclear plants by video processing for safety purpose

    Energy Technology Data Exchange (ETDEWEB)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A., E-mail: calexandre@ien.gov.b, E-mail: mol@ien.gov.b [Instituto de Engenharia Nuclear (IEN/CNEN), Rio de Janeiro, RJ (Brazil); Seixas, Jose M.; Silva, Eduardo Antonio B., E-mail: seixas@lps.ufrj.b, E-mail: eduardo@lps.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Eletrica; Cota, Raphael E.; Ramos, Bruno L., E-mail: brunolange@poli.ufrj.b [Universidade Federal do Rio de Janeiro (EP/UFRJ), RJ (Brazil). Dept. de Engenharia Eletronica e de Computacao

    2011-07-01

    This work describes the development of a surveillance system for safety purposes in nuclear plants. The final objective is to track people online in videos, in order to estimate the dose received by personnel during the execution of working tasks in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping of a real nuclear plant, the Argonauta nuclear research reactor at Instituto de Engenharia Nuclear. Cameras have been installed within Argonauta's room, supplying the data needed. Both video processing and statistical signal processing techniques may be used for detecting, segmenting and tracking people in video. This first paper reports people segmentation in video using background subtraction, by two different approaches: frame differencing, and blind source separation based on the independent component analysis method. Results are discussed, along with perspectives for further work. (author)
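
    A minimal sketch of the frame-differencing approach (one of the two approaches reported), assuming an OpenCV environment; the file name, threshold and minimum blob area are illustrative, not parameters of the IEN system.

```python
import cv2

cap = cv2.VideoCapture("argonauta_room.avi")   # hypothetical recording
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Difference against the previous frame highlights moving people.
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # Morphological opening removes small noise blobs.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:       # keep person-sized regions only
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    prev_gray = gray
```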

  18. Cost effective system for monitoring of fish migration with a camera

    Science.gov (United States)

    Sečnik, Matej; Brilly, Mitja; Vidmar, Andrej

    2016-04-01

    Within the European LIFE project Ljubljanica connects (LIFE10 NAT/SI/000142) we have developed a cost-effective solution for monitoring fish migration through fish passes with an underwater camera. In the fish pass at Ambrožev trg and in the fish pass near the Fužine castle we installed a video camera called "Fishcam" to monitor the migration of fish through the fish passes and the success of their reconstruction. A live stream from the Fishcams installed in the fish passes is available on our project website (http://ksh.fgg.uni-lj.si/ljubljanicaconnects/ang/12_camera). The monitoring system consists of two parts: a waterproof box housing the computer and charger, and the camera itself. We used a highly sensitive Sony analogue camera. The advantage of this camera is its very good sensitivity in low-light conditions, so it can take good-quality pictures even at night with minimal additional lighting. For night recording we use an additional IR reflector to illuminate passing fish. The camera is connected to an 8-inch tablet PC. We chose a tablet PC because it is small, cheap, relatively fast and has low power consumption. On the computer we run software with advanced motion-detection capabilities, so that even small fish can be detected. When a fish is detected by the software, its photograph is automatically saved to the local hard drive and, for backup, also to Google Drive. The system for monitoring fish migration has turned out to work very well: from the beginning of monitoring in June 2015 to the end of the year, more than 100,000 photographs were produced. A first analysis of them has already been prepared, estimating the fish species and how frequently they pass through the fish pass.
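
    The motion-triggered capture loop can be sketched as below, with OpenCV's MOG2 background subtractor standing in for the motion-detection software actually used; the device index, pixel-count threshold and folder name are assumptions.

```python
import pathlib
import time

import cv2

save_dir = pathlib.Path("fish_photos")   # local archive; cloud backup is separate
save_dir.mkdir(exist_ok=True)

cap = cv2.VideoCapture(0)                # the underwater camera feed
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # A low foreground-pixel threshold keeps the detector sensitive to small fish.
    if cv2.countNonZero(mask) > 200:
        stamp = time.strftime("%Y%m%d_%H%M%S")
        cv2.imwrite(str(save_dir / f"fish_{stamp}.jpg"), frame)
```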

  19. Installation Art

    DEFF Research Database (Denmark)

    Petersen, Anne Ring

    In Installation Art: Between Image and Stage, Anne Ring Petersen aims to change that. She begins by exploring how installation art developed into an interdisciplinary genre in the 1960s, and how its intertwining of the visual and the performative has acted as a catalyst for the generation of new artistic phenomena. The main trajectory of the book is directed by a movement aimed at addressing a series of basic questions that get at the heart of what installation art is and how it is defined: How does installation structure time, space and representation? How does it address and engage its viewers? And how does it draw in the surrounding world to become part of the work? Featuring the work of such well-known artists as Bruce Nauman, Pipilotti Rist, Ilya Kabakov and many others, this book breaks crucial new ground in understanding the conceptual underpinnings of this multifarious art form.

  20. CCD Camera

    Science.gov (United States)

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown, wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  1. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.
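
    The integrity and authenticity guarantees can be illustrated, in a much-simplified software form, by attaching a timestamp and a keyed hash to every frame; in the prototype described, the key material and status reporting are rooted in Trusted Computing hardware rather than a constant in code, so this is only a sketch of the idea.

```python
import hashlib
import hmac
import json
import time

SECRET_KEY = b"device-unique-key"   # in the real system, sealed by the TPM

def sign_frame(frame_bytes: bytes, frame_index: int) -> dict:
    """Attach a timestamp and keyed hash so tampering or reordering is detectable."""
    meta = {"index": frame_index, "timestamp": time.time()}
    payload = json.dumps(meta, sort_keys=True).encode() + frame_bytes
    meta["mac"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return meta

def verify_frame(frame_bytes: bytes, meta: dict) -> bool:
    check = {k: v for k, v in meta.items() if k != "mac"}
    payload = json.dumps(check, sort_keys=True).encode() + frame_bytes
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(meta["mac"], expected)

frame = b"\x00" * 1024              # stand-in for one encoded video frame
meta = sign_frame(frame, 0)
assert verify_frame(frame, meta)
```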

  2. Installation Art

    DEFF Research Database (Denmark)

    Petersen, Anne Ring

    Despite its large and growing popularity – to say nothing of its near-ubiquity in the world’s art scenes and international exhibitions of contemporary art – installation art remains a form whose artistic vocabulary and conceptual basis have rarely been subjected to thorough critical examination. In Installation Art: Between Image and Stage, Anne Ring Petersen aims to change that. She begins by exploring how installation art developed into an interdisciplinary genre in the 1960s, and how its intertwining of the visual and the performative has acted as a catalyst for the generation of new artistic phenomena. It investigates how it became one of today’s most widely used art forms, increasingly expanding into consumer, popular and urban cultures, where installation’s often spectacular appearance ensures that it meets contemporary demands for sense-provoking and immersive cultural experiences.

  3. Installation Art

    DEFF Research Database (Denmark)

    Petersen, Anne Ring

    Despite its large and growing popularity – to say nothing of its near-ubiquity in the world’s art scenes and international exhibitions of contemporary art – installation art remains a form whose artistic vocabulary and conceptual basis have rarely been subjected to thorough critical examination. In Installation Art: Between Image and Stage, Anne Ring Petersen aims to change that. She begins by exploring how installation art developed into an interdisciplinary genre in the 1960s, and how its intertwining of the visual and the performative has acted as a catalyst for the generation of new artistic phenomena. It investigates how it became one of today’s most widely used art forms, increasingly expanding into consumer, popular and urban cultures, where installation’s often spectacular appearance ensures that it meets contemporary demands for sense-provoking and immersive cultural experiences.

  4. Deep-Sky Video Astronomy

    CERN Document Server

    Massey, Steve

    2009-01-01

    A guide to using modern integrating video cameras for deep-sky viewing and imaging with the kinds of modest telescopes available commercially to amateur astronomers. It includes an introduction and a brief history of the technology and camera types, and examines the pros and cons of this unrefrigerated yet highly efficient technology.

  5. Online camera-gyroscope autocalibration for cell phones.

    Science.gov (United States)

    Jia, Chao; Evans, Brian L

    2014-12-01

    The gyroscope is playing a key role in helping estimate 3D camera rotation for various vision applications on cell phones, including video stabilization and feature tracking. Successful fusion of gyroscope and camera data requires the camera, the gyroscope, and their relative pose to be calibrated. In addition, the timestamps of gyroscope readings and video frames are usually not well synchronized. Previous work performed camera-gyroscope calibration and synchronization offline, after the entire video sequence had been captured, with restrictions on the camera motion; this is unnecessarily restrictive for everyday users running apps that directly use the gyroscope. In this paper, we propose an online method that estimates all the necessary parameters while a user is capturing video. Our contributions are: 1) simultaneous online camera self-calibration and camera-gyroscope calibration based on an implicit extended Kalman filter and 2) generalization of the multiple-view coplanarity constraint on camera rotation in a rolling-shutter camera model for cell phones. The proposed method is able to estimate the needed calibration and synchronization parameters online with all kinds of camera motion and can be embedded in gyro-aided applications, such as video stabilization and feature tracking. Both Monte Carlo simulation and cell phone experiments show that the proposed online calibration and synchronization method converges quickly to the ground-truth values.
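
    The EKF-based estimator itself is beyond a short example, but the gyro-to-rotation step that such methods build on can be sketched by integrating angular-rate samples into an orientation quaternion (generic small-step integration; the 200 Hz rate and the synthetic samples are assumptions).

```python
import numpy as np

def integrate_gyro(quat, omega, dt):
    """Advance an orientation quaternion (w, x, y, z) by a body angular
    rate omega (rad/s, 3-vector) over a short interval dt."""
    w, x, y, z = quat
    wx, wy, wz = omega
    # Quaternion kinematics: q_dot = 0.5 * q (x) (0, omega)
    q_dot = 0.5 * np.array([
        -x * wx - y * wy - z * wz,
         w * wx + y * wz - z * wy,
         w * wy - x * wz + z * wx,
         w * wz + x * wy - y * wx,
    ])
    q = quat + q_dot * dt
    return q / np.linalg.norm(q)        # renormalize to unit length

gyro_samples = np.tile([0.0, 0.0, 0.1], (200, 1))   # 1 s of slow yaw (synthetic)
q = np.array([1.0, 0.0, 0.0, 0.0])                  # identity orientation
for omega in gyro_samples:
    q = integrate_gyro(q, omega, dt=1 / 200)
print(q)   # ~0.1 rad of accumulated rotation about z
```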

  6. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Jong Hyun Kim

    2017-05-01

    Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. Existing research using visible light cameras has mainly focused on methods of human detection for daytime hours, when there is outside light; human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras, or thermal cameras, have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but this has focused on objects at a short distance in an indoor environment or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night on a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk nighttime human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.

  7. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors.

    Science.gov (United States)

    Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung

    2017-05-08

    Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours when there is outside light, but human detection during nighttime hours when there is no outside light is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but this has focused on objects at a short distance in an indoor environment or the use of video-based methods to capture multiple images and process them, which causes problems related to the increase in the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night on a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk nighttime human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.
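
    As a shape-level illustration only (not the paper's architecture, patch size or training setup), a small convolutional classifier for scoring a nighttime candidate region as human or background could look like this:

```python
import torch
import torch.nn as nn

class NightHumanNet(nn.Module):
    """Toy CNN scoring a grayscale candidate patch as human / background."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)   # two classes: human / background

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

net = NightHumanNet()
patch = torch.randn(1, 1, 64, 32)   # pedestrian-shaped candidate region
print(net(patch).softmax(dim=1))    # class probabilities
```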

  8. Installation Art

    DEFF Research Database (Denmark)

    Petersen, Anne Ring

    Despite its large and growing popularity – to say nothing of its near-ubiquity in the world’s art scenes and international exhibitions of contemporary art – installation art remains a form whose artistic vocabulary and conceptual basis have rarely been subjected to thorough critical examination. In Installation Art: Between Image and Stage, Anne Ring Petersen aims to change that. She begins by exploring how installation art developed into an interdisciplinary genre in the 1960s, and how its intertwining of the visual and the performative has acted as a catalyst for the generation of new artistic phenomena. How does installation structure time, space and representation? How does it address and engage its viewers? And how does it draw in the surrounding world to become part of the work? Featuring the work of such well-known artists as Bruce Nauman, Pipilotti Rist, Ilya Kabakov and many others, this book breaks crucial new ground in understanding the conceptual underpinnings of this multifarious art form.

  9. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage, specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel value calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding, either for storage or transmission purposes.
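
    For the multi-exposure acquisition path mentioned above, OpenCV ships a Debevec-style pipeline (response-curve calibration, radiance merge, tone mapping); the file names and exposure times in this sketch are hypothetical.

```python
import cv2
import numpy as np

# Three exposures of the same scene from a conventional camera.
files = ["exp_short.jpg", "exp_mid.jpg", "exp_long.jpg"]
times = np.array([1 / 250, 1 / 30, 1 / 4], dtype=np.float32)  # seconds
images = [cv2.imread(f) for f in files]

# Recover the camera response curve, merge to an HDR radiance map,
# then tone map the result for display on a conventional screen.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)
ldr = cv2.createTonemap(gamma=2.2).process(hdr)

cv2.imwrite("result.hdr", hdr)
cv2.imwrite("result.png", np.clip(ldr * 255, 0, 255).astype("uint8"))
```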

  10. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide…

  11. Digital Video Teach Yourself VISUALLY

    CERN Document Server

    Watson, Lonzell

    2010-01-01

    Tips and techniques for shooting and sharing superb digital videos. Never before has video been more popular - or more accessible to the home photographer. Now you can create YouTube-worthy, professional-looking video, with the help of this richly illustrated guide. In a straightforward, simple, highly visual format, Teach Yourself VISUALLY Digital Video demystifies the secrets of great video. With colorful screenshots and illustrations plus step-by-step instructions, the book explains the features of your camera and their capabilities, and shows you how to go beyond "auto" to manually…

  12. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  13. Design and Implementation of a Web-GIS-Based Video Surveillance System Application (Desain dan Implementasi Aplikasi Video Surveillance System Berbasis Web-SIG)

    Directory of Open Access Journals (Sweden)

    I M.O. Widyantara

    2015-06-01

    A video surveillance system (VSS) is a monitoring system based on IP cameras. A VSS is implemented as live streaming and serves to observe and monitor a site remotely. Typically, the IP cameras in a VSS come with a management software application. However, for ad hoc applications, where the user wants to manage the VSS independently, such management software is ineffective. When the IP camera installation is spread over a large area, it is difficult for an administrator to describe the location of each IP camera. In addition, monitoring the area covered by each IP camera also becomes more difficult. Addressing these flaws in VSS, this paper proposes a VSS application for easy monitoring of each IP camera. The proposed application integrates the concept of a web-based geographical information system with the Google Maps API (Web-GIS). The application provides smart features including an IP camera map, live streaming of events, information in info windows, and marker clustering. Test results showed that the application is able to display all of the built features correctly.
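
    The camera-map idea can be illustrated in a few lines; this sketch uses the folium (Leaflet) Python package in place of the Google Maps API used by the paper, and the camera names, coordinates and stream URLs are invented.

```python
import folium
from folium.plugins import MarkerCluster

# Hypothetical camera registry: name, latitude, longitude, stream URL.
cameras = [
    ("Gate A", -8.7985, 115.1710, "http://203.0.113.10/stream"),
    ("Lobby",  -8.7990, 115.1722, "http://203.0.113.11/stream"),
]

m = folium.Map(location=[-8.7987, 115.1716], zoom_start=17)
cluster = MarkerCluster().add_to(m)            # groups nearby camera markers
for name, lat, lon, url in cameras:
    popup = folium.Popup(f'<b>{name}</b><br><a href="{url}">live stream</a>',
                         max_width=250)
    folium.Marker([lat, lon], popup=popup, tooltip=name).add_to(cluster)
m.save("vss_map.html")                         # open in any web browser
```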

  14. Stationary Stereo-Video Camera Stations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Accurate and precise stock assessments are predicated on accurate and precise estimates of life history parameters, abundance, and catch across the range of the...

  15. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
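
    The stacking idea can be shown at the shape level: neighbouring frames are concatenated on the channel axis and a CNN maps them to a restored central frame. This toy network is not the authors' architecture or training procedure.

```python
import torch
import torch.nn as nn

class MultiFrameDeblur(nn.Module):
    """Toy CNN: 5 stacked RGB frames (15 channels) -> restored central frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(15, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, stack):                 # stack: (N, 15, H, W)
        return self.net(stack)

frames = torch.randn(1, 5, 3, 128, 128)       # 5 consecutive blurry frames
stack = frames.flatten(1, 2)                  # -> (1, 15, 128, 128)
print(MultiFrameDeblur()(stack).shape)        # torch.Size([1, 3, 128, 128])
```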

  16. TRAFFIC SIGN RECOGNITION WITH VIDEO PROCESSING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Musa AYDIN

    2013-01-01

    In this study, traffic signs are recognized and identified from a video image taken through a video camera. To accomplish this, a traffic sign recognition program has been developed in the MATLAB/Simulink environment. The target traffic signs are recognized in the video image with the developed program.

  17. Development of a camera casing suited for cryogenic and vacuum applications

    Science.gov (United States)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID temperature-controlled and vacuum-tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components is discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera can provide a live view inside cryogenic set-ups and allows video recording.

  18. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  19. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras, we must apply a careful and reliable self-calibration prior to any photogrammetric procedure, a relatively difficult scenario because of the camera's short focal length and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
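
    A typical OpenCV chessboard self-calibration of this kind might look as follows; this is a sketch rather than the authors' software, the pattern size and file paths are assumptions, and very wide fisheye lenses may be better served by the cv2.fisheye model than the standard distortion model used here.

```python
import glob

import cv2
import numpy as np

pattern = (9, 6)   # inner corners of the printed chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):          # stills of the board
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                         gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)

# Undo the wide-angle distortion in a working image.
img = cv2.imread("scene.jpg")
cv2.imwrite("scene_undistorted.jpg", cv2.undistort(img, K, dist))
```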

  20. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution.

  1. NFC - Narrow Field Camera

    Science.gov (United States)

    Koukal, J.; Srba, J.; Gorková, S.

    2015-01-01

    We introduce a low-cost CCTV video system for faint meteor monitoring and describe the first results from 5 months of two-station operation. Our system, called NFC (Narrow Field Camera), with a meteor limiting magnitude around +6.5 mag, allows research on the trajectories of less massive meteoroids within individual parent meteor showers and the sporadic background. At present, 4 stations (2 pairs with coordinated fields of view) of the NFC system are operated in the frame of CEMeNt (Central European Meteor Network). The heart of each NFC station is a sensitive CCTV camera Watec 902 H2 and a fast cinematographic lens Meopta Meostigmat 1/50 - 52.5 mm (50 mm focal length and fixed aperture f/1.0). In this paper we present the first results based on 1595 individual meteors, 368 of which were recorded from two stations simultaneously. This data set allows the first empirical verification of the theoretical assumptions about the NFC system's capabilities (stellar and meteor magnitude limit, meteor apparent brightness distribution and accuracy of single-station measurements) and the first low-mass meteoroid trajectory calculations. Our experimental data clearly demonstrate the system's capability to register low-mass meteors and show that calculations based on NFC data lead to a significant refinement of the orbital elements of low-mass meteoroids.

  2. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. Tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed from two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed from the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analysis of the current progress made toward human tracking techniques over camera networks.

  3. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  4. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    Science.gov (United States)

    Allin, T.; Neubert, T.; Laursen, S.; Rasmussen, I. L.; Soula, S.

    2003-12-01

    In support of global ELF/VLF observations, HF measurements in France, and conjugate photometry/VLF observations in South Africa, we developed and operated a semi-automatic, remotely controlled video system for the observation of middle-atmospheric transient luminous events (TLEs). Installed at the Pic du Midi Observatory in Southern France, the system was operational from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over serial links from a local computer, and the video outputs were distributed to a pair of PCI frame grabbers in the computer. This setup allowed remote users to log in and operate the system over the internet. Event detection software provided means of recording and time-stamping single TLE video fields and thus eliminated the need for continuous human monitoring of TLE activity. The computer recorded and analyzed two parallel video streams at the full 50 Hz field rate, while uploading status images, TLE images, and system logs to a remote web server. The system detected more than 130 TLEs - mostly sprites - distributed over 9 active evenings. We have thus demonstrated the feasibility of remote agents for TLE observations, which are likely to find use in future ground-based TLE observation campaigns, or to be installed at remote sites in support of space-borne or other global TLE observation efforts.

  5. Ground Validation Drop Camera Transect Points - St. Thomas/ St. John USVI - 2011

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video were collected between...

  6. Ground Validation Drop Camera Transect Points - St. Thomas/ St. John USVI - 2011 (NCEI Accession 0131858)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video were collected between...

  7. EgoSampling: Wide View Hyperlapse from Egocentric Videos

    OpenAIRE

    Halperin, Tavi; Poleg, Yair; Arora, Chetan; Peleg, Shmuel

    2016-01-01

    The possibility of sharing one's point of view makes the use of wearable cameras compelling. These videos are often long and boring, and coupled with extreme shake, as the camera is worn on a moving person. Fast forwarding (i.e., frame sampling) is a natural choice for quick video browsing. However, this accentuates the shake caused by natural head motion in an egocentric video, making the fast-forwarded video useless. We propose EgoSampling, an adaptive frame sampling that gives stable, fast-forwarded…

  8. Advanced real-time manipulation of video streams

    CERN Document Server

    Herling, Jan

    2014-01-01

    Diminished Reality is a new fascinating technology that removes real-world content from live video streams. This sensational live video manipulation actually removes real objects and generates a coherent video stream in real-time. Viewers cannot detect modified content. Existing approaches are restricted to moving objects and static or almost static cameras and do not allow real-time manipulation of video content. Jan Herling presents a new and innovative approach for real-time object removal with arbitrary camera movements.

  9. Feature Quantization and Pooling for Videos

    Science.gov (United States)

    2014-05-01

    Video has become a very popular medium for communication, entertainment, and science. Videos are widely used in educational… The same approach applied to action classification from YouTube videos of sport events shows that BoW approaches on real-world data sets need further… …dog videos, where the camera also tracks the people and animals. In Figure 4.38 we compare across action classes how well each segmentation…
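
    The bag-of-words (BoW) pipeline referred to, quantizing local descriptors against a learned vocabulary and pooling the assignments into one histogram per video, can be sketched as follows (random vectors stand in for real descriptors).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for local descriptors pooled from many training videos.
train_descriptors = rng.normal(size=(5000, 64))

# 1) Build the visual vocabulary by clustering descriptor space.
vocab = KMeans(n_clusters=100, n_init=4, random_state=0).fit(train_descriptors)

def bow_histogram(descriptors, vocab):
    """Quantize each descriptor to its nearest visual word, then pool the
    assignments into a single normalized histogram for the whole video."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

clip_descriptors = rng.normal(size=(800, 64))      # descriptors from one clip
feature = bow_histogram(clip_descriptors, vocab)   # fixed-length video feature
print(feature.shape)                               # (100,)
```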

  10. Digital Low Frequency Radio Camera

    Science.gov (United States)

    Fullekrug, M.; Mezentsev, A.; Soula, S.; van der Velde, O.; Poupeney, J.; Sudre, C.; Gaffet, S.; Pincon, J.

    2012-04-01

    This contribution reports the design, realization and operation of a novel digital low frequency radio camera towards an exploration of the Earth's electromagnetic environment with particular emphasis on lightning discharges and subsequent atmospheric effects such as transient luminous events. The design of the digital low frequency radio camera is based on the idea of radio interferometry with a network of radio receivers which are separated by spatial baselines comparable to the wavelength of the observed radio waves, i.e., ~1-100 km, which corresponds to a frequency range from ~3-300 kHz. The key parameter towards the realization of the radio interferometer is the frequency-dependent slowness of the radio waves within the Earth's atmosphere with respect to the speed of light in vacuum. This slowness is measured with the radio interferometer by using well documented radio transmitters. The digital low frequency radio camera can be operated in different modes. In the imaging mode, still photographs show maps of the low frequency radio sky. In the video mode, movies show the dynamics of the low frequency radio sky. The exposure time of the photographs, the frame rate of the video, and the radio frequency of interest can be adjusted by the observer. Alternatively, the digital radio camera can be used in the monitoring mode, where a particular area of the sky is observed continuously. The first application of the digital low frequency radio camera is to characterize the electromagnetic energy emanating from sprite-producing lightning discharges, but it is expected that it can also be used to identify and investigate numerous other radio sources of the Earth's electromagnetic environment.

  11. What Counts as Educational Video?: Working toward Best Practice Alignment between Video Production Approaches and Outcomes

    Science.gov (United States)

    Winslett, Greg

    2014-01-01

    The twenty years since the first digital video camera was made commercially available have seen significant increases in the use of low-cost, amateur video productions for teaching and learning. In the same period, production and consumption of professionally produced video have also increased, as have the distribution platforms to access it.…

  12. Reliability of video-based identification of footstrike pattern and video time frame at initial contact in recreational runners

    DEFF Research Database (Denmark)

    Damsted, Camma; Larsen, L H; Nielsen, R.O.

    2015-01-01

    …and video time frame at initial contact during treadmill running using two-dimensional (2D) video recordings. METHODS: Thirty-one recreational runners were recorded twice, 1 week apart, with a high-speed video camera. Two blinded raters evaluated each video twice with an interval of at least 14 days…

  13. 4K Video-Laryngoscopy and Video-Stroboscopy: Preliminary Findings.

    Science.gov (United States)

    Woo, Peak

    2016-01-01

    4K video is a new format. At 3840 × 2160 resolution, it has 4 times the resolution of standard 1080 high definition (HD) video. Magnification can be done without loss of resolution. This study uses 4K video for video-stroboscopy. Forty-six patients were examined by conventional video-stroboscopy (digital 3 chip CCD) and compared with 4K video-stroboscopy. The video was recorded on a Blackmagic 4K cinema production camera in CinemaDNG RAW format. The video was played back on a 4K monitor and compared to standard video. Pathological conditions included: polyps, scar, cysts, cancer, sulcus, and nodules. Successful 4K video recordings were achieved in all subjects using a 70° rigid endoscope. The camera system is bulky. The examination is performed similarly to standard video-stroboscopy. Playback requires a 4K monitor. As expected, the images were far clearer in detail than standard video. Stroboscopy video using the 4K camera was consistently able to show more detail. Two patients had diagnosis change after 4K viewing. 4K video is an exciting new technology that can be applied to laryngoscopy. It allows for cinematic 4K quality recordings. Both continuous and stroboscopic light can be used for visualization. Its clinical utility is feasible, but usefulness must be proven. © The Author(s) 2015.

  14. Use of infrared TV cameras built into head-mounted display to measure torsional eye movements.

    Science.gov (United States)

    Ukai, K; Saida, S; Ishikawa, N

    2001-01-01

    The head-mounted display (HMD) has produced conflict between visual and vestibular stimuli because the HMD image does not move with the head motion of the wearer. The HMD can show binocular parallax three-dimensional (3D) images, in which vergence and accommodation conflict. Thus, the HMD may affect normal visual/vestibular functions. We attempted to develop a system that makes possible the measurement of torsional eye movements, vergence eye movements, and pupillary responses of the HMD wearer. Our apparatus is composed of two infrared CCD cameras installed in the HMD. Iris images produced by these cameras are analyzed by a personal computer using free software. Further, a third camera fixed on the HMD projects an image of the view as the subject sees it, via video tape recorder or frame memory, to the HMD. Images can be stored, replayed, or frozen. Our system can measure torsional eye movement with 0.20 degrees resolution every 1/30 (or 1/60) seconds even when the pupil size varies during measurement. Binocular eye movement and pupillary response are also measured. A system was developed which can be used for assessment of the effect of 3D HMD on the visual system. A third camera coupled with the HMD can control the visual stimulus independently of head motion (vestibular stimulus).
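
    One common way to estimate torsion from such iris images, offered purely as an illustrative sketch (not the authors' software, and with a coarser 1-degree search than the 0.2-degree resolution reported), is to unwrap the iris into polar coordinates and find the angular shift that best aligns two frames.

```python
import cv2
import numpy as np

def torsion_angle(ref_eye, cur_eye, center, r_iris):
    """Estimate ocular torsion (degrees) between two grayscale eye images."""
    size = (64, 360)   # 64 radial bins x 360 angular bins (1 degree each)
    flags = cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR
    ref = cv2.warpPolar(ref_eye, size, center, r_iris, flags).astype(np.float32)
    cur = cv2.warpPolar(cur_eye, size, center, r_iris, flags).astype(np.float32)
    # Average over radius -> one intensity profile per angle, then search
    # the circular shift (whole degrees) with maximum correlation.
    ref_p = ref.mean(axis=1) - ref.mean()
    cur_p = cur.mean(axis=1) - cur.mean()
    scores = [np.dot(np.roll(cur_p, s), ref_p) for s in range(-30, 31)]
    return int(np.argmax(scores)) - 30

# Usage with hypothetical frames and a pupil-centre estimate:
# angle = torsion_angle(frame0, frame_t, center=(320.0, 240.0), r_iris=120.0)
```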

  15. Low Cost Wireless Network Camera Sensors for Traffic Monitoring

    Science.gov (United States)

    2012-07-01

    Many freeways and arterials in major cities in Texas are presently equipped with video detection cameras to collect data and help in traffic/incident management. In this study, carefully controlled experiments determined the throughput and output…

  16. Optimising Camera Traps for Monitoring Small Mammals

    Science.gov (United States)

    Glen, Alistair S.; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera’s field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera’s field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera’s field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps. PMID:23840790

  17. Collaborative web-based annotation of video footage of deep-sea life, ecosystems and geological processes

    Science.gov (United States)

    Kottmann, R.; Ratmeyer, V.; Pop Ristov, A.; Boetius, A.

    2012-04-01

    More and more seagoing scientific expeditions use video-controlled research platforms such as Remotely Operated Vehicles (ROV), Autonomous Underwater Vehicles (AUV), and towed camera systems. These produce many hours of video material which contains detailed and scientifically highly valuable footage of the biological, chemical, geological, and physical aspects of the oceans. Many of the videos contain unique observations of unknown life-forms which are rare, and which cannot be sampled and studied otherwise. To make such video material accessible online and to create a collaborative annotation environment, the "Video Annotation and processing platform" (V-App) was developed. A first solely web-based installation for ROV videos is set up at the German Center for Marine Environmental Sciences (available at http://videolib.marum.de). It allows users to search and watch videos with a standard web browser based on the HTML5 standard. Moreover, V-App implements social web technologies allowing a distributed world-wide scientific community to collaboratively annotate videos anywhere at any time. Its fully implemented features include:
    • User login system for fine-grained permission and access control
    • Video watching
    • Video search using keywords, geographic position, depth and time range, and any combination thereof
    • Video annotation organised in themes (tracks) such as biology and geology, in standard or full-screen mode
    • Annotation keyword management: administrative users can add, delete, and update single keywords for annotation or upload sets of keywords from Excel sheets
    • Download of products for scientific use
    This unique web application helps make costly ROV videos available online (estimated cost range between 5,000 and 10,000 Euros per hour, depending on the combination of ship and ROV). Moreover, with this system each expert annotation adds instantly available and valuable knowledge to otherwise uncharted…

  18. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high precision faint light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). [ESO PR Photo 22a/09: the CCD220 detector. ESO PR Photo 22b/09: the OCam camera. ESO PR Video 22a/09: OCam images.] "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. The new generation instruments require these…

  19. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  20. Mobile Video in Everyday Social Interactions

    Science.gov (United States)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. The Internet connectivity of mobile phones enables fluent sharing of captured material, even in real time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera on the social environment in everyday life situations, mainly based on a study in which four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours relating to real-time mobile video communication, and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their own special characteristics: live video is used as a virtual window between places, whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between participating persons in all roles, largely due to the widely spreading distribution possibilities of videos. Video in a social situation affects cameramen (who record), targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations), but also the other way around: the participants affect the video through their varying and evolving personal and communicational motivations for recording.

  1. Development of camera technology for monitoring nests. Chapter 15

    Science.gov (United States)

    W. Andrew Cox; M. Shane Pruett; Thomas J. Benson; Scott J. Chiavacci; Frank R., III Thompson

    2012-01-01

    Photo and video technology has become increasingly useful in the study of avian nesting ecology. However, researchers interested in using camera systems are often faced with insufficient information on the types and relative advantages of available technologies. We reviewed the literature for studies of nests that used cameras and summarized them based on study...

  2. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    Science.gov (United States)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    Combine harvesters usually work in sparsely populated areas with a harsh environment. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux was developed. The system uses a USB camera to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. The JPEG image compression standard is used to compress the video data, and the monitoring picture is then transferred over the network to a remote monitoring center for long-range monitoring and management. The paper first describes the motivation for the system design, then briefly introduces the hardware and software realization, and then details the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In the experimental testing, the remote video monitoring system achieved 30 fps at a resolution of 800x600, with a response delay over the public network of about 40 ms.
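
    The capture-compress-transmit loop can be sketched with OpenCV and a plain TCP socket; this is a minimal illustration with an invented server address, whereas the real system runs a dedicated video server on the embedded Linux board.

```python
import socket
import struct

import cv2

SERVER = ("203.0.113.5", 9000)      # hypothetical monitoring-center address

cap = cv2.VideoCapture(0)           # USB camera on the harvester
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)

sock = socket.create_connection(SERVER)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # JPEG-compress each frame, then send it length-prefixed over TCP.
        ok, jpg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
        if ok:
            sock.sendall(struct.pack(">I", len(jpg)) + jpg.tobytes())
finally:
    sock.close()
    cap.release()
```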

  3. Economical Video Monitoring of Traffic

    Science.gov (United States)

    Houser, B. C.; Paine, G.; Rubenstein, L. D.; Parham, O. Bruce, Jr.; Graves, W.; Bradley, C.

    1986-01-01

    Data compression allows video signals to be transmitted economically on telephone circuits. Telephone lines transmit television signals to a remote traffic-control center. Lines also carry command signals from the center to the TV camera and compressor at the highway site. A video system with television cameras positioned at critical points on highways allows traffic controllers to determine visually, almost immediately, the exact cause of a traffic-flow disruption, e.g., accidents, breakdowns, or spills. Controllers can then dispatch appropriate emergency services and alert motorists to minimize traffic backups.

  4. The Future of Video

    OpenAIRE

    Li, F.

    2016-01-01

    Executive Summary: A range of technological innovations (e.g. smart phones and digital cameras), infrastructural advances (e.g. broadband and 3G/4G wireless networks) and platform developments (e.g. YouTube, Facebook, Snapchat, Instagram, Amazon, and Netflix) are collectively transforming the way video is produced, distributed, consumed, archived - and importantly, monetised. Changes have been observed well beyond the mainstream TV and film industries, and these changes are increasingly…

  5. Solid State Replacement of Rotating Mirror Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Frank, A M; Bartolick, J M

    2006-08-25

    Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution and dynamic range. The rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed "In-situ Storage Image Sensor" or "ISIS" by Prof. Goji Etoh, has made its first appearance on the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluation of the presently available technologies and on exploring the capabilities of the ISIS architecture. It is clear that, though there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, the ISIS architecture has the potential to approach their performance.

  6. Video Stabilization Using Feature Point Matching

    Science.gov (United States)

    Kulkarni, Shamsundar; Bormane, D. S.; Nalbalwar, S. L.

    2017-01-01

    Video capture by non-professionals often suffers from unanticipated effects such as image distortion and image blurring. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper an algorithm is proposed to stabilize jittery videos, producing a stable output video free of the jitter caused by shaking of a handheld camera during recording. First, salient points in each frame of the input video are identified and processed; the video is then optimized and stabilized, where the optimization step improves the quality of the stabilization. The method has shown good results in terms of stabilization and removes distortion from output videos recorded in different circumstances.
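
    The abstract does not spell out the exact algorithm, but a minimal sketch of a common feature-point stabilization pipeline looks like the following: corners are tracked between consecutive frames with optical flow, a rigid transform is estimated per frame, the accumulated trajectory is smoothed with a moving average, and each frame is warped by the correction. The OpenCV calls are real; the smoothing radius and the 30 fps output rate are illustrative assumptions.

    import cv2
    import numpy as np

    def stabilize(path_in: str, path_out: str, radius: int = 15) -> None:
        cap = cv2.VideoCapture(path_in)
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        transforms = []   # per-frame (dx, dy, d_angle)
        frames = [prev]
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Track salient corners from the previous frame into this one.
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                          qualityLevel=0.01, minDistance=30)
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            good = status.ravel() == 1
            m, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good])
            transforms.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
            frames.append(frame)
            prev_gray = gray
        # Smooth the cumulative camera trajectory with a moving average.
        traj = np.cumsum(transforms, axis=0)
        kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
        smooth = np.array([np.convolve(traj[:, i], kernel, mode="same")
                           for i in range(3)]).T
        corrected = np.array(transforms) + (smooth - traj)
        h, w = frames[0].shape[:2]
        out = cv2.VideoWriter(path_out, cv2.VideoWriter_fourcc(*"mp4v"),
                              30, (w, h))   # assumed 30 fps output
        out.write(frames[0])
        for frame, (dx, dy, da) in zip(frames[1:], corrected):
            m = np.array([[np.cos(da), -np.sin(da), dx],
                          [np.sin(da),  np.cos(da), dy]])
            out.write(cv2.warpAffine(frame, m, (w, h)))
        out.release()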

  7. Towards User Experience-Driven Adaptive Uplink Video Transmission for Automotive Applications

    OpenAIRE

    Lottermann, Christian

    2016-01-01

    The focus of this thesis is to enable user experience-driven uplink video streaming from mobile video sources with limited computational capacity and to apply these techniques to resource-constrained automotive environments. The first part investigates perceptual quality-aware encoding of videos, the second part proposes camera context-based estimators of temporal and spatial activities for videos captured by a front-facing camera of a vehicle, and the last part studies the upstreaming of videos from a m...

  8. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Full Text Available Nowadays smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth to make them suitable for any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system works as follows: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, with the proper information about the place that is now in the camera view.

  9. Real-time quality control on a smart camera

    Science.gov (United States)

    Xiao, Chuanwei; Zhou, Huaide; Li, Guangze; Hao, Zhihang

    2006-01-01

    A smart camera combines video sensing, high-level video processing, communication and other functions within a single device. Such cameras are very important devices in quality control systems. This paper presents the prototype development of a smart camera for quality control. The smart camera is divided into four parts: a CMOS sensor, a digital signal processor (DSP), a CPLD and a display device. To improve processing speed, low-level and high-level video processing algorithms are adapted to the embedded DSP-based platform. The algorithms can quickly and automatically detect product quality defects. All algorithms are tested in a Matlab-based prototyping implementation and then migrated to the smart camera. The smart camera prototype automatically processes the video data and streams the results to the display and control devices. Control signals are sent to the production line to adjust the production state within the required real-time constraints.
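
    As a toy illustration of the kind of low-level defect check such a smart camera might run (the paper's actual algorithms are not specified), the sketch below compares each product image against a golden reference image and flags large difference blobs. The threshold and minimum blob area are illustrative values.

    import cv2

    def inspect(frame_bgr, reference_gray, diff_thresh=40, min_area=50):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, reference_gray)          # deviation from the reference
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Any sufficiently large difference region counts as a defect.
        defects = [c for c in contours if cv2.contourArea(c) >= min_area]
        return len(defects) == 0, defects   # (pass/fail, defect regions)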

  10. Video Conferencing for a Virtual Seminar Room

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Fosgerau, A.; Hansen, Peter Søren K.

    2002-01-01

    A PC-based video conferencing system for a virtual seminar room is presented. The platform is enhanced with DSPs for audio and video coding and processing. A microphone array is used to facilitate audio based speaker tracking, which is used for adaptive beam-forming and automatic camera-control...

  11. Face Recognition and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Swapnil Vitthal Tathe

    2017-07-01

    Full Text Available Advancement in computer vision technology and the availability of video capturing devices such as surveillance cameras have evoked new video processing applications. Research in video face recognition is mostly biased towards law enforcement applications. Applications involve human recognition based on face and iris, human-computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework that is capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance to changes due to illumination, environmental factors, scale, pose and orientation.
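
    A minimal sketch of the detect-then-track pattern named in the abstract, combining Haar-cascade face detection with Kalman-filter smoothing, is shown below using OpenCV. The Gabor-based recognition and correlation-matching stages are omitted, and the noise covariances are illustrative assumptions, not values from the paper.

    import cv2
    import numpy as np

    # Haar cascade face detector (the XML file ships with OpenCV).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    # Constant-velocity Kalman filter; state (x, y, vx, vy), measurement (x, y).
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                    [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2   # assumed noise levels
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.errorCovPost = np.eye(4, dtype=np.float32)

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        pred = kf.predict()   # predicted face centre for this frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            x, y, w, h = faces[0]   # track the first detected face only
            kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.circle(frame, (int(pred[0, 0]), int(pred[1, 0])), 4, (0, 0, 255), -1)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) == 27:   # Esc quits
            break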

  12. An Evaluation of Video-to-Video Face Verification

    NARCIS (Netherlands)

    Poh, N.; Chan, C.H.; Kittler, J.; Marcel, S.; Mc Cool, C.; Argones Rúa, E.; Alba Castro, J.L.; Villegas, M.; Paredes, R.; Štruc, V.; Pavešić, N.; Salah, A.A.; Fang, H.; Costen, N.

    2010-01-01

    Person recognition using facial features, e.g., mug-shot images, has long been used in identity documents. However, due to the widespread use of web-cams and mobile devices embedded with a camera, it is now possible to realize facial video recognition, rather than resorting to just still images. In

  13. VLSI-distributed architectures for smart cameras

    Science.gov (United States)

    Wolf, Wayne H.

    2001-03-01

    Smart cameras use video/image processing algorithms to capture images as objects, not as pixels. This paper describes architectures for smart cameras that take advantage of VLSI to improve the capabilities and performance of smart camera systems. Advances in VLSI technology aid in the development of smart cameras in two ways. First, VLSI allows us to integrate large amounts of processing power and memory along with image sensors. CMOS sensors are rapidly improving in performance, allowing us to integrate sensors, logic, and memory on the same chip. As we become able to build chips with hundreds of millions of transistors, we will be able to include powerful multiprocessors on the same chip as the image sensors. We call these image sensor/multiprocessor systems image processors. Second, VLSI allows us to put a large number of these powerful sensor/processor systems in a single scene. VLSI factories will produce large quantities of these image processors, making it cost-effective to use a large number of them in a single location. Image processors will be networked into distributed cameras that use many sensors as well as the full computational resources of all the available multiprocessors. Multiple cameras make a number of image recognition tasks easier: we can select the best view of an object, eliminate occlusions, and use 3D information to improve the accuracy of object recognition. This paper outlines approaches to distributed camera design: architectures for image processors and distributed cameras, algorithms to run on distributed smart cameras, and applications of VLSI distributed camera systems.

  14. Lights, Camera, AG-Tion: Promoting Agricultural and Environmental Education on Camera

    Science.gov (United States)

    Fuhrman, Nicholas E.

    2016-01-01

    Viewing of online videos and television segments has become a popular and efficient way for Extension audiences to acquire information. This article describes a unique approach to teaching on camera that may help Extension educators communicate their messages with comfort and personality. The S.A.L.A.D. approach emphasizes using relevant teaching…

  15. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available, and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  16. Research of Pedestrian Crossing Safety Facilities Based on the Video Detection

    Science.gov (United States)

    Li, Sheng-Zhen; Xie, Quan-Long; Zang, Xiao-Dong; Tang, Guo-Jun

    Since pedestrian crossing facilities are at present not perfect, pedestrian crossings are chaotic: pedestrians from opposite directions conflict and congest with each other, which severely reduces pedestrian traffic efficiency, obstructs vehicles and brings about potential safety problems. To solve these problems, a pedestrian crossing guidance system based on video identification was researched and designed. It uses a camera to monitor pedestrians in real time and counts the number of pedestrians through a video detection program; a pedestrian induction lamp array is installed at intervals along the crosswalk, which adjusts its color display according to the proportion of pedestrians on each side, guiding pedestrians from the two opposite directions to proceed separately. Simulation analysis with a cellular automaton model shows that the system reduces pedestrian crossing conflicts, shortens pedestrian crossing time and improves the safety of pedestrians crossing.
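
    The guidance rule described above can be illustrated with a toy sketch: given pedestrian counts detected on each side of the crossing, the lamp array is split in proportion so that each direction gets its own lane. The counts would come from the video detection stage; here they are plain integers, and the function name and lamp labels are hypothetical.

    def lamp_colors(count_a: int, count_b: int, n_lamps: int = 10) -> list[str]:
        total = count_a + count_b
        if total == 0:
            return ["off"] * n_lamps   # nobody waiting, lamps stay dark
        # Assign lamps to each direction in proportion to waiting pedestrians.
        n_a = round(n_lamps * count_a / total)
        return ["green_a"] * n_a + ["green_b"] * (n_lamps - n_a)

    # e.g. 12 pedestrians waiting on side A, 4 on side B:
    print(lamp_colors(12, 4))   # 8 lamps guide direction A, 2 guide direction B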

  17. Video essay

    DEFF Research Database (Denmark)

    2015-01-01

    Camera movement has a profound influence on the way films look and the way films are experienced by spectators. In this visual essay Jakob Isak Nielsen proposes six major functions of camera movement in narrative cinema. Individual camera movements may serve more of these functions at the same ti...

  18. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced Wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high-resolution video images up to 4 Mpixels @ 60 fps or high-frame-rate video images up to about 1000 fps @ 512x512 pixels.

  19. Intelligent Model for Video Surveillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

    Full Text Available A video surveillance system senses and tracks threatening events in a real-time environment. It protects against security threats with the help of visual devices which gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance systems have become key to addressing problems in public security. They are mostly deployed on IP-based networks, so all the security threats that exist for IP-based applications are also threats to a video surveillance application. As a result, cybercrime, illegal video access, mishandling of videos and so on may increase. Hence, in this paper an intelligent model is proposed to secure the video surveillance system, ensuring safety and providing secured access to video.

  20. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  1. Composing with Images: A Study of High School Video Producers.

    Science.gov (United States)

    Reilly, Brian

    At Bell High School (Los Angeles, California), students have been using video cameras, computers and editing machines to create videos in a variety of forms and on a variety of topics; in this setting, video is the textual medium of expression. A study was conducted using participant-observation and interviewing over the course of one school year…

  2. Teacher Self-Captured Video: Learning to See

    Science.gov (United States)

    Sherin, Miriam Gamoran; Dyer, Elizabeth B.

    2017-01-01

    Videos are often used for demonstration and evaluation, but a more productive approach would be using video to support teachers' ability to notice and interpret classroom interactions. That requires thinking carefully about the physical aspects of shooting video--where the camera is placed and how easily student interactions can be heard--as well…

  3. A Method for Estimating Surveillance Video Georeferences

    Directory of Open Access Journals (Sweden)

    Aleksandar Milosavljević

    2017-07-01

    Full Text Available The integration of a surveillance camera video with a three-dimensional (3D geographic information system (GIS requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., position and orientation of the camera.
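
    A hedged sketch of the estimation step follows: given matched 2D frame points and 3D geographic points (taken from orthophotos and a DEM), the camera position and orientation are recovered with a Levenberg-Marquardt least-squares fit. SciPy's least_squares with method="lm" stands in for the paper's optimizer, and the simple pinhole projection with a known focal length is an assumption.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def reproject(params, pts3d, focal):
        # params = [x, y, z, rx, ry, rz]: camera position and Euler angles.
        cam_pos, angles = params[:3], params[3:6]
        R = Rotation.from_euler("xyz", angles).as_matrix()
        pc = (pts3d - cam_pos) @ R.T            # world -> camera coordinates
        return focal * pc[:, :2] / pc[:, 2:3]   # pinhole projection to the frame

    def residuals(params, pts3d, pts2d, focal):
        return (reproject(params, pts3d, focal) - pts2d).ravel()

    def georeference(pts3d, pts2d, focal, x0):
        # x0 is an initial guess; Levenberg-Marquardt needs a reasonable start.
        sol = least_squares(residuals, x0, args=(pts3d, pts2d, focal), method="lm")
        return sol.x   # estimated camera position and orientation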

  4. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen...

  5. Digital Pinhole Camera

    Science.gov (United States)

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…

  6. Diversity-Aware Multi-Video Summarization

    Science.gov (United States)

    Panda, Rameswar; Mithun, Niluthpol Chowdhury; Roy-Chowdhury, Amit K.

    2017-10-01

    Most video summarization approaches have focused on extracting a summary from a single video; we propose an unsupervised framework for summarizing a collection of videos. We observe that each video in the collection may contain some information that other videos do not have, and thus exploring the underlying complementarity could be beneficial in creating a diverse informative summary. We develop a novel diversity-aware sparse optimization method for multi-video summarization by exploring the complementarity within the videos. Our approach extracts a multi-video summary which is both interesting and representative in describing the whole video collection. To efficiently solve our optimization problem, we develop an alternating minimization algorithm that minimizes the overall objective function with respect to one video at a time while fixing the other videos. Moreover, we introduce a new benchmark dataset, Tour20, that contains 140 videos with multiple human-created summaries, which were acquired in a controlled experiment. Finally, by extensive experiments on the new Tour20 dataset and several other multi-view datasets, we show that the proposed approach clearly outperforms the state-of-the-art methods on two problems: topic-oriented video summarization and multi-view video summarization in a camera network.

  7. Delta FUSE Fairing Installation at Launch Complex 17A

    Science.gov (United States)

    1999-01-01

    This NASA Kennedy Space Center (KSC) video release presents footage of the June 19, 1999 installation of the fairing around the Far Ultraviolet Spectroscopic Explorer (FUSE) spacecraft. The spacecraft was previously mated to the Boeing Delta II rocket. Installation took place on Pad A of Launch Complex 17.

  8. Situational Awareness from a Low-Cost Camera System

    Science.gov (United States)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  9. The Use of Camera Traps in Wildlife

    Directory of Open Access Journals (Sweden)

    Yasin Uçarlı

    2013-11-01

    Full Text Available Camera traps are increasingly used in abundance and density estimates of wildlife species. Camera traps are a very good alternative to direct observation, particularly in steep terrain, in areas covered by dense vegetation, or for nocturnal species. The main reason for using camera traps is that they eliminate economic, personnel and time costs by operating continuously and simultaneously at different points. Camera traps are motion- and heat-sensitive and, depending on the model, can take photos or video. Crossover points and feeding or mating areas of the focal species are addressed as priority camera trap locations. The population size can be found from the images combined with capture-recapture methods, and the population density is obtained by dividing the population size by the effective sampling area. Mating and breeding seasons, habitat choice, group structures and survival rates of the focal species can also be derived from the images. Camera traps are thus very useful for economically obtaining the necessary data about particularly elusive species in planning and conservation efforts.
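
    As a worked illustration of the size and density calculations mentioned above, one common capture-recapture estimator (the Lincoln-Petersen estimator, named here as an assumption since the abstract does not specify which method was used) combines with the stated density formula as:

        \hat{N} = \frac{n_1 n_2}{m_2}, \qquad \hat{D} = \frac{\hat{N}}{A_{\mathrm{eff}}}

    where n_1 individuals are photographed in the first session, n_2 in the second, m_2 of those are recaptures, and A_eff is the effective sampling area. For example, with 20 individuals identified in the first session, 15 in the second, and 6 recaptures, \hat{N} = 20 x 15 / 6 = 50 animals; over an effective sampling area of 25 km^2 this gives \hat{D} = 2 animals/km^2.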

  10. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market, and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary a lot. Presence capture cameras still have the same quality issues to tackle as previous generations of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range create the basis of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors. New quality features must also be validated: for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? The work describes the quality factors which remain valid in presence capture cameras and assesses their importance. Moreover, the new challenges of presence capture cameras are investigated from an image and video quality point of view. The work considers how well current measurement methods can be used with presence capture cameras.

  11. Production of 360° video : Introduction to 360° video and production guidelines

    OpenAIRE

    Ghimire, Sujan

    2016-01-01

    The main goal of this thesis project is to introduce the latest media technology and provide a complete guideline. The project is based on the production of 360° video using multiple GoPro cameras, and it was the first 360° video project at Helsinki Metropolia University of Applied Sciences. 360° video offers a totally different viewing experience with features incomparable to conventional video. 360° x 180° video coverage and active participation from viewers are the best part of this vid...

  12. The Video Mesh: A Data Structure for Image-based Three-dimensional Video Editing

    OpenAIRE

    Chen, Jiawen; Paris, Sylvain; Wang, Jue; Matusik, Wojciech; Cohen, Michael; Durand, Fredo

    2011-01-01

    This paper introduces the video mesh, a data structure for representing video as 2.5D “paper cutouts.” The video mesh allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. The video mesh sparsely encodes optical flow as well as depth, and handles occlusion using local layering and alpha mattes. Motion is described by a sparse set of points tracked over time. Each point also stores a depth value. The video mesh is a trian...

  13. Comprehensive Analysis and Evaluation of Background Subtraction Algorithms for Surveillance Video

    National Research Council Canada - National Science Library

    Yan Feng; Shengmei Luo; Yumin Tian; Shuo Deng; Haihong Zheng

    2014-01-01

    .... Then, the algorithms were implemented and tested using different videos with ground truth, such as baseline, dynamic background, camera jitter, and intermittent object motion and shadow scenarios...

  14. MEDIUM-FORMAT CAMERAS AND THEIR USE IN TOPOGRAPHIC MAPPING

    Directory of Open Access Journals (Sweden)

    J. Höhle

    2012-07-01

    Full Text Available Based on practical experiences with large-format aerial cameras the impact of new medium-format digital cameras on topographic mapping tasks is discussed. Two new medium-format cameras are investigated with respect to elevation accuracy, area coverage and image quality. The produced graphs and tables show the potential of these cameras for general mapping tasks. Special attention is given to the image quality of the selected cameras. Applications for the medium-format cameras are discussed. The necessary tools for selected applications are described. The impact of sensors for georeferencing, multi-spectral images, and new matching algorithms is also dealt with. Practical investigations are carried out for the production of digital elevation models. A comparison with large-format frame cameras is carried out. It is concluded that the medium-format cameras have a potential for mapping of smaller areas and will be used in future in true orthoimage production, corridor mapping, and updating of maps. Their small dimensions and low weight allow installation in small airplanes, helicopters, and high-end UAVs. The two investigated medium-format cameras are low-cost alternatives for standard mapping tasks and special applications. The detection of changes in topographic databases and DTMs can be carried out by means of those medium-format cameras which can image the same area in four bands of the visible and invisible spectrum of light. Medium-format cameras will play an important role in future mapping tasks.

  15. Simulating low-cost cameras for augmented reality compositing.

    Science.gov (United States)

    Klein, Georg; Murray, David W

    2010-01-01

    Video see-through Augmented Reality adds computer graphics to the real world in real time by overlaying graphics onto a live video feed. To achieve a realistic integration of the virtual and real imagery, the rendered images should have a similar appearance and quality to those produced by the video camera. This paper describes a compositing method which models the artifacts produced by a small low-cost camera, and adds these effects to an ideal pinhole image produced by conventional rendering methods. We attempt to model and simulate each step of the imaging process, including distortions, chromatic aberrations, blur, Bayer masking, noise, sharpening, and color-space compression, all while requiring only an RGBA image and an estimate of camera velocity as inputs.
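
    A reduced sketch of the compositing idea follows: an ideal rendered image is degraded with a few of the artifacts listed above (optical blur, Bayer masking with demosaicing, sensor noise) so that it blends with low-cost camera footage. This covers only part of the paper's artifact chain, the parameters are illustrative, and the Bayer pattern constant is chosen for demonstration rather than taken from the paper.

    import cv2
    import numpy as np

    def degrade(ideal_bgr: np.ndarray, noise_sigma: float = 6.0) -> np.ndarray:
        img = cv2.GaussianBlur(ideal_bgr, (5, 5), 1.0)      # optical blur
        # Bayer masking: keep one colour sample per pixel, then demosaic,
        # which reintroduces the characteristic colour artifacts.
        h, w = img.shape[:2]
        mosaic = np.zeros((h, w), img.dtype)
        mosaic[0::2, 0::2] = img[0::2, 0::2, 0]   # B
        mosaic[0::2, 1::2] = img[0::2, 1::2, 1]   # G
        mosaic[1::2, 0::2] = img[1::2, 0::2, 1]   # G
        mosaic[1::2, 1::2] = img[1::2, 1::2, 2]   # R
        img = cv2.cvtColor(mosaic, cv2.COLOR_BayerBG2BGR)
        noise = np.random.normal(0.0, noise_sigma, img.shape)   # sensor noise
        return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)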

  16. Control of the movement of a ROV camera; Controle de posicionamento da camera de um ROV

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alexandre S. de; Dutra, Max Suell [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE); Reis, Ney Robinson S. dos [PETROBRAS, Rio de Janeiro, RJ (Brazil). Centro de Pesquisas; Santos, Auderi V. dos [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil)

    2004-07-01

    The ROV's (Remotely Operated Vehicles) are used for installation and maintenance of underwater exploration systems in the oil industry. These systems are operated in distant areas, so the use of cameras for visualization of the work area is essential. For the operator, synchronizing the movement of the manipulator and the movement of the camera while performing tasks is complex. To accomplish this synchronization, this work presents an analysis of the interconnection of the systems. The systems are coupled by interconnecting the electrical signals of the proportional valves of the manipulator's actuators with the signals of the proportional valves of the camera's actuators. With this interconnection, the camera approximately follows the movement of the manipulator, keeping the object of interest within the operator's field of vision. (author)

  17. Evaluating and Implementing JPEG XR Optimized for Video Surveillance

    OpenAIRE

    Yu, Lang

    2010-01-01

    This report describes both evaluation and implementation of the new coming image compression standard JPEG XR. The intention is to determine if JPEG XR is an appropriate standard for IP based video surveillance purposes. Video surveillance, especially IP based video surveillance, currently has an increasing role in the security market. To be a good standard for surveillance, the video stream generated by the camera is required to be low bit-rate, low latency on the network and at the same tim...

  18. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  19. Intelligent video surveillance systems and technology

    CERN Document Server

    Ma, Yunqian

    2009-01-01

    From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated

  20. The VISTA infrared camera

    Science.gov (United States)

    Dalton, G. B.; Caldwell, M.; Ward, A. K.; Whalley, M. S.; Woodhouse, G.; Edeson, R. L.; Clark, P.; Beard, S. M.; Gallie, A. M.; Todd, S. P.; Strachan, J. M. D.; Bezawada, N. N.; Sutherland, W. J.; Emerson, J. P.

    2006-06-01

    We describe the integration and test phase of the construction of the VISTA Infrared Camera, a 64 Megapixel, 1.65 degree field of view 0.9-2.4 micron camera which will soon be operating at the cassegrain focus of the 4m VISTA telescope. The camera incorporates sixteen IR detectors and six CCD detectors which are used to provide autoguiding and wavefront sensing information to the VISTA telescope control system.

  1. Streak camera meeting summary

    Energy Technology Data Exchange (ETDEWEB)

    Dolan, Daniel H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bliss, David E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    Streak cameras are important for high-speed data acquisition in single event experiments, where the total recorded information (I) is shared between the number of measurements (M) and the number of samples (S). Topics of this meeting included: streak camera use at the national laboratories; current streak camera production; new tube developments and alternative technologies; and future planning. Each topic is summarized in the following sections.

  2. Airborne Network Camera Standard

    Science.gov (United States)

    2015-06-01

    Optical Systems Group Document 466-15, Airborne Network Camera Standard. Distribution A: approved for public release. ... without the focus of standardization for interoperable command and control, storage, and data streaming has been the airborne network camera systems used

  3. Installation report - Lidar

    DEFF Research Database (Denmark)

    Georgieva Yankova, Ginka; Villanueva, Héctor

    The report describes the installation, configuration and data transfer for the ground-based lidar. The unit is provided by a customer but is installed and operated by DTU while in this project.

  4. Camera traps as sensor networks for monitoring animal communities

    OpenAIRE

    Kays, R.W.; Kranstauber, B.; Jansen, P.A.; C. Carbone; Rowcliffe, M.; Fountain, T; Tilak, S.

    2009-01-01

    Studying animal movement and distribution is of critical importance to addressing environmental challenges including invasive species, infectious diseases, climate and land-use change. Motion sensitive camera traps offer a visual sensor to record the presence of a species at a location, recording their movement in the Eulerian sense. Modern digital camera traps that record video present new analytical opportunities, but also new data management challenges. This paper describes our experience ...

  5. Camera Traps as Sensor Networks for Monitoring Animal Communities

    OpenAIRE

    Kays, R.W.; Tilak, S.; Kranstauber, B.; Jansen, P.A.; Carbone, C.; Rowcliff, M.J.; Fountain, T.; Eggert, J.; He, Z.

    2011-01-01

    Studying animal movement and distribution is of critical importance to addressing environmental challenges including invasive species, infectious diseases, climate and land-use change. Motion sensitive camera traps offer a visual sensor to record the presence of a broad range of species providing location – specific information on movement and behavior. Modern digital camera traps that record video present not only new analytical opportunities, but also new data management challenges. This pa...

  6. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. In the first place it analyzes the current academic discussion on this subject and confronts the different opinions of both supporters and opponents of the idea that video games can be a full-fledged art form. The second point of this paper is to analyze the properties that are inherent to video games, in order to find the reason why the cultural elite considers video games as i...

  7. NOAA Polyline Shapefile - Drop Camera transects, US Caribbean - Western Puerto Rico - Project NF-07-06-USVI-HAB - (2007), UTM 19N NAD83 (NCEI Accession 0131853)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a drop camera system. Photos and/or video were...

  8. NOAA Polyline Shapefile - Drop Camera transects, US Caribbean - Western Puerto Rico - Project NF-07-06-USVI-HAB - (2007), UTM 19N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a drop camera system. Photos and/or video were...

  9. NOAA Point Shapefile - Drop Camera transects, US Caribbean - Western Puerto Rico - Project NF-07-06-USVI-HAB - (2007), UTM 19N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a drop camera system. Photos and/or video were...

  10. NOAA Point Shapefile - Drop Camera transects, US Caribbean - Western Puerto Rico - Project NF-07-06-USVI-HAB - (2007), UTM 19N NAD83 (NCEI Accession 0131853)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a drop camera system. Photos and/or video were...

  11. NOAA Shapefile - Drop Camera Transects Lines, USVI 2011 , Seafloor Characterization of the US Caribbean - Nancy Foster - NF-11-1 (2011), UTM 20N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  12. NOAA Polyline Shapefile - Drop Camera transects, US Caribbean Virgin Passage and St. John Shelf - Project NF-03-10-USVI-HAB - (2010), UTM 20N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  13. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up...

  14. [Analog gamma camera digitalization computer system].

    Science.gov (United States)

    Rojas, G M; Quintana, J C; Jer, J; Astudillo, S; Arenas, L; Araya, H

    2004-01-01

    Digitalization of analogue gamma camera systems, using special acquisition boards in microcomputers and appropriate software for acquisition and processing of nuclear medicine images, is described in detail. Microcomputer integrated systems interconnected by means of a Local Area Network (LAN) and connected to several gamma cameras have been implemented using specialized acquisition boards. The PIP software (Portable Image Processing) was installed on each microcomputer to acquire and preprocess the nuclear medicine images. A specialized image processing software has been designed and developed for these purposes. This software allows processing of each nuclear medicine exam, in a semiautomatic procedure, and recording of the results on radiological films. A stable, flexible and inexpensive system which makes it possible to digitize, visualize, process, and print nuclear medicine images obtained from analogue gamma cameras was implemented in the Nuclear Medicine Division. Such a system yields higher quality images than those obtained with analogue cameras while keeping operating costs considerably lower (filming: 24.6%, fixing: 48.2%, and developing: 26%). Analogue gamma camera systems can be digitalized economically. This system makes it possible to obtain optimal clinical quality nuclear medicine images, to increase acquisition and processing efficiency, and to reduce the steps involved in each exam.

  15. Camera based low-cost system to monitor hydrological parameters in small catchments

    Science.gov (United States)

    Eltner, Anette; Sardemann, Hannes; Kröhnert, Melanie; Schwalbe, Ellen

    2017-04-01

    Gauging stations to measure hydrological parameters in small catchments are usually installed at only a few selected locations. Thus, extreme events that can evolve rapidly, particularly in small catchments (especially in mountainous areas), and potentially cause severe damage are insufficiently documented, which eventually leads to difficulties in modeling and forecasting these events. A conceptual approach using a low-cost camera-based alternative is introduced to measure water level, flow velocity and changing river cross sections. Synchronized cameras are used for 3D reconstruction of the water surface, enabling the location of flow velocity vectors measured in video sequences. Furthermore, water levels are measured automatically using an image-based approach originally developed for smartphone applications. Additional integration of a thermal sensor can increase the speed and reliability of the water level extraction. Finally, the reconstruction of the water surface as well as the surrounding topography allows for the detection of changing morphology. The introduced approach can help to increase the density of monitoring systems for hydrological parameters in (remote) small catchments and subsequently might be used as a warning system for extreme events.

  16. Videography-Based Unconstrained Video Analysis.

    Science.gov (United States)

    Li, Kang; Li, Sheng; Oh, Sangmin; Fu, Yun

    2017-05-01

    Video analysis and understanding play a central role in visual intelligence. In this paper, we aim to analyze unconstrained videos, by designing features and approaches to represent and analyze videography styles in the videos. Videography denotes the process of making videos. The unconstrained videos are defined as the long duration consumer videos that usually have diverse editing artifacts and significant complexity of contents. We propose to construct a videography dictionary, which can be utilized to represent every video clip as a sequence of videography words. In addition to semantic features, such as foreground object motion and camera motion, we also incorporate two novel interpretable features to characterize videography, including the scale information and the motion correlations. We then demonstrate that, by using statistical analysis methods, the unique videography signatures extracted from different events can be automatically identified. For real-world applications, we explore the use of videography analysis for three types of applications, including content-based video retrieval, video summarization (both visual and textual), and videography-based feature pooling. In the experiments, we evaluate the performance of our approach and other methods on a large-scale unconstrained video dataset, and show that the proposed approach significantly benefits video analysis in various ways.

  17. Improved Tracking of Targets by Cameras on a Mars Rover

    Science.gov (United States)

    Kim, Won; Ansar, Adnan; Steele, Robert

    2007-01-01

    A paper describes a method devised to increase the robustness and accuracy of tracking of targets by means of three stereoscopic pairs of video cameras on a Mars-rover-type exploratory robotic vehicle. Two of the camera pairs are mounted on a mast that can be adjusted in pan and tilt; the third camera pair is mounted on the main vehicle body. Elements of the method include a mast calibration, a camera-pointing algorithm, and a purely geometric technique for handing off tracking between different camera pairs at critical distances as the rover approaches a target of interest. The mast calibration is an extension of camera calibration in which the camera images of calibration targets at known positions are collected at various pan and tilt angles. In the camera-pointing algorithm, pan and tilt angles are computed by a closed-form, non-iterative solution of the inverse kinematics of the mast combined with mathematical models of the cameras. The purely geometric camera-handoff technique involves the use of stereoscopic views of a target of interest in conjunction with the mast calibration.
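
    The closed-form pointing computation can be illustrated with a small sketch: once the target position is expressed in the pan/tilt unit's own coordinate frame (here assumed to be x forward, y left, z up), the pan and tilt angles follow directly from the geometry. The real method composes this with the mast kinematics and camera models, which are omitted here.

    import math

    def point_at(x: float, y: float, z: float) -> tuple[float, float]:
        pan = math.atan2(y, x)                    # rotate about the vertical axis
        tilt = math.atan2(z, math.hypot(x, y))    # elevate toward the target
        return pan, tilt

    # Target 4 m ahead, 1 m left, 0.5 m above the unit:
    pan, tilt = point_at(4.0, 1.0, 0.5)
    print(math.degrees(pan), math.degrees(tilt))   # ~14.0 deg pan, ~6.9 deg tilt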

  18. Kitt Peak speckle camera.

    Science.gov (United States)

    Breckinridge, J B; McAlister, H A; Robinson, W G

    1979-04-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and to illustrate the isoplanatic patch of the atmosphere.

  19. Mars Observer Camera

    OpenAIRE

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; J. Veverka(Massachusetts Institute of Technology, Cambridge, U.S.A.); Ravine, M. A.; Soulanille, T.A.

    1992-01-01

    The Mars Observer camera (MOC) is a three-component system (one narrow-angle and two wide-angle cameras) designed to take high spatial resolution pictures of the surface of Mars and to obtain lower spatial resolution, synoptic coverage of the planet's surface and atmosphere. The cameras are based on the “push broom” technique; that is, they do not take “frames” but rather build pictures, one line at a time, as the spacecraft moves around the planet in its orbit. MOC is primarily a telescope f...

  20. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    What does the use of cameras entail for the production of cultural critique in anthropology? Visual anthropological analysis and cultural critique starts at the very moment a camera is brought into the field or existing visual images are engaged. The framing, distances, and interactions between...... researchers, cameras, and filmed subjects already inherently comprise analytical decisions. It is these ethnographic qualities inherent in audiovisual and photographic imagery that make it of particular value to a participatory anthropological enterprise that seeks to resist analytic closure and seeks instead...

  1. Digital Video Stabilization with Inertial Fusion

    OpenAIRE

    Freeman, William John

    2013-01-01

    As computing power becomes more and more available, robotic systems are moving away from active sensors for environmental awareness and transitioning into passive vision sensors.  With the advent of teleoperation and real-time video tracking of dynamic environments, the need to stabilize video onboard mobile robots has become more prevalent. This thesis presents a digital stabilization method that incorporates inertial fusion with a Kalman filter.  The camera motion is derived visually by tra...

  2. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    Directory of Open Access Journals (Sweden)

    Steven Nicholas Graves, MA

    2015-02-01

    Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  3. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    DEFF Research Database (Denmark)

    Allin, Thomas Højgaard; Neubert, Torsten; Laursen, Steen

    2003-01-01

    at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over...

  4. Electrical installations and regulations

    CERN Document Server

    Whitfield, J F

    1966-01-01

    Electrical Installations and Regulations focuses on the regulations that apply to electrical installations and the reasons for them. Topics covered range from electrical science to alternating and direct current supplies, as well as equipment for providing protection against excess current. Cables, wiring systems, and final subcircuits are also considered, along with earthing, discharge lighting, and testing and inspection.Comprised of 12 chapters, this book begins with an overview of electrical installation work, traits of a good electrician, and the regulations governing installations. The r

  5. Electrical installation calculations advanced

    CERN Document Server

    Kitcher, Christopher

    2013-01-01

    All the essential calculations required for advanced electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. For apprentices and electrical installatio

  6. Electrical installation calculations basic

    CERN Document Server

    Kitcher, Christopher

    2013-01-01

    All the essential calculations required for basic electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. Fo

  7. Interactive Video, The Next Step

    Science.gov (United States)

    Strong, L. R.; Wold-Brennon, R.; Cooper, S. K.; Brinkhuis, D.

    2012-12-01

    Video has the ingredients to reach us emotionally - with amazing images, enthusiastic interviews, music, and video game-like animations-- and it's emotion that motivates us to learn more about our new interest. However, watching video is usually passive. New web-based technology is expanding and enhancing the video experience, creating opportunities to use video with more direct interaction. This talk will look at an Education and Outreach team's experience producing video-centric curriculum using innovative interactive media tools from TED-Ed and FlixMaster. The Consortium for Ocean Leadership's Deep Earth Academy has partnered with the Center for Dark Energy Biosphere Investigations (C-DEBI) to send educators and a video producer aboard three deep sea research expeditions to the Juan de Fuca plate to install and service sub-seafloor observatories. This collaboration between teachers, students, scientists and media producers has proved a productive confluence, providing new ways of understanding both ground-breaking science and the process of science itself - by experimenting with new ways to use multimedia during ocean-going expeditions and developing curriculum and other projects post-cruise.

  8. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through...... YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  9. stil113_0401r -- Point coverage of locations of still frames extracted from video imagery which depict sediment types

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  10. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group, Defense Sciences Engineering Division, has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  11. TARGETLESS CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2012-09-01

    Full Text Available In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which does not require any longer the use of targets. A set of images depicting a scene with a good texture are sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The successive photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations and comparisons with real data and different case studies are illustrated to show the potentialities of the proposed methodology.

  12. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Robotic and Security Systems Dept.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  14. Neutron cameras for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P. [ITER San Diego Joint Work Site, La Jolla, CA (United States)] [and others]

    1998-12-31

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from {sup 16}N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with {sup 16}N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  15. Resolution of slit-lamp microscopy photography using various cameras.

    Science.gov (United States)

    Ye, Yufeng; Jiang, Hong; Zhang, Huicheng; Karp, Carol L; Zhong, Jianguang; Tao, Aizhu; Shao, Yilei; Wang, Jianhua

    2013-05-01

    To evaluate the resolutions of slit-lamp microscopy photography using various cameras. Evaluation of diagnostic test or technology. Healthy subjects were imaged with adapted cameras through slit-lamp microscopy. A total of 8 cameras, including 6 custom-mounted slit-lamp cameras and 2 commercial slit-lamp cameras, were tested with standard slit-lamp microscopy devices for imaging of the eye. Various magnifications were used during imaging, and a standard resolution test plate was used to test the resolutions at different magnifications. These outcomes were compared with commercial slit-lamp cameras. The main measurements included the display spatial resolutions, image spatial resolutions, and ocular resolutions, as well as the relationships between resolution and the pixel density of the displays and images. All cameras were successfully adapted to the slit-lamp microscopy, and high-quality ocular images were obtained. Differences in the display spatial resolutions were found among cameras (ANOVA), with higher resolutions in cameras using the high-definition multimedia interface (HDMI) compared with others, including cameras in smart phones. The display resolutions of smart phone displays were greater than those of cameras with video graphics array displays. The display spatial resolutions were found to be a function of display pixel density (r > 0.95), as were the image spatial resolutions (r > 0.85). One camera yielded the highest image spatial resolution; however, the ocular resolution through binocular viewing of the slit-lamp microscopy was found to be higher than the display and image spatial resolutions of all of the cameras. Several cameras can be adapted to slit-lamp microscopy for ophthalmic imaging, yielding various display and image spatial resolutions. However, the resolution appeared not to be as good as ocular viewing through the slit-lamp biomicroscope.

  16. Video surveillance using JPEG 2000

    Science.gov (United States)

    Dufaux, Frederic; Ebrahimi, Touradj

    2004-11-01

    This paper describes a video surveillance system which is composed of three key components, smart cameras, a server, and clients, connected through IP-networks in wired or wireless configurations. The system has been designed so as to protect the privacy of people under surveillance. Smart cameras are based on JPEG 2000 compression where an analysis module allows for events detection and regions of interest identification. The resulting regions of interest can then be encoded with better quality and scrambled. Compressed video streams are scrambled and signed for the purpose of privacy and data integrity verification using JPSEC compliant methods. The same bitstream may also be protected for robustness to transmission errors based on JPWL compliant methods. The server receives, stores, manages and transmits the video sequences on wired and wireless channels to a variety of clients and users with different device capabilities, channel characteristics and preferences. Use of seamless scalable coding of video sequences prevents any need for transcoding operations at any point in the system.
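
    The sketch below is a much-simplified stand-in for the pipeline described above: it scrambles a region of interest with a seeded, invertible permutation and then writes the frame as JPEG 2000 with OpenCV. The real system scrambles in the JPSEC-compliant wavelet domain; the ROI coordinates, seed, and quality parameter here are illustrative assumptions.

```python
# Toy sketch: scramble a privacy-sensitive ROI, then compress as JPEG 2000.
import cv2
import numpy as np

frame = cv2.imread("frame.png")          # placeholder input frame
x, y, w, h = 100, 80, 120, 160           # hypothetical ROI from analysis

# Invertible scrambling: permute ROI rows with a seeded generator, so an
# authorized client holding the seed could restore the region.
rng = np.random.default_rng(seed=42)
perm = rng.permutation(h)
roi = frame[y:y+h, x:x+w]
frame[y:y+h, x:x+w] = roi[perm]

# JPEG 2000 compression; the parameter scales the target quality (0-1000).
cv2.imwrite("frame.jp2", frame,
            [cv2.IMWRITE_JPEG2000_COMPRESSION_X1000, 500])
```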

  17. Development of a High Speed Camera Network to Monitor and Study Lightning (Project RAMMER)

    Science.gov (United States)

    Saraiva, A. V.; Pinto, O.; Santos, H. H.; Saba, M. M.

    2010-12-01

    This work proposes the development and application of a network of high-speed cameras for the observation and study of lightning flashes. Four high-speed cameras are being acquired to be part of the RAMMER network. They are capable of recording high-resolution video of up to 1632 x 1200 pixels at 1000 frames per second. A robust system is being assembled to ensure the safe operation of the cameras in adverse weather conditions and to enable the recording of a large number of lightning flashes per storm, larger than the values reported to date. As the amount of physical memory needed to record only 1 second of data is roughly 3-4 GBytes, long recordings of thunderstorms are not feasible, so a triggering system was conceived to address this problem and automatically record 2 seconds of data for each lightning flash. The triggering system is an optical/electromagnetic system that has been under testing since September 2010, and the whole system is still being tested. The lightning information from the video recordings will be correlated with data from the sensors of the Brazilian Lightning Detection Network (BrasilDAT), from a network of fast electric field antennas, slow electric field antennas, and field mills, as well as with data from the LMA (Lightning Mapping Array) to be installed in 2011 in the cities of Sao Paulo and Sao Jose dos Campos. The following objectives are envisaged: a) to make the first three-dimensional reconstructions of the lightning channel with high-speed cameras and verify their dependence on the physical conditions associated with each storm; b) to observe almost all CG lightning flashes of a single storm cloud in order to compare the physical characteristics of the CG lightning flashes for different storms and their dependence on the physical conditions associated with each storm; c) to evaluate the performance of the new sensors of the BrasilDAT network in different localities simultaneously. The schematics of the sensors will be shown here, with
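
    A minimal sketch of the triggered-recording logic described above, assuming a frame iterator, a trigger predicate, and a storage callback (all hypothetical names): a rolling buffer keeps roughly one second of pre-trigger history so that about two seconds around each flash can be saved without recording the whole storm.

```python
# Sketch: pre/post-trigger recording around each lightning flash.
from collections import deque

FPS = 1000                      # high-speed frame rate (frames per second)
PRE_FRAMES = FPS                # ~1 s of history before the trigger
POST_FRAMES = FPS               # ~1 s after the trigger

def record_flashes(frames, triggered, save_clip):
    """frames: iterator of camera frames; triggered(frame) implements the
    optical/electromagnetic trigger; save_clip stores one ~2 s recording."""
    ring = deque(maxlen=PRE_FRAMES)     # ring buffer drops oldest frames
    clip, post_left = None, 0
    for frame in frames:
        if post_left > 0:
            clip.append(frame)
            post_left -= 1
            if post_left == 0:
                save_clip(clip)         # ~2 s total around the flash
                clip = None
            continue
        ring.append(frame)
        if triggered(frame):
            clip = list(ring)           # the pre-trigger second
            ring.clear()
            post_left = POST_FRAMES     # then capture the post-trigger second
```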

  18. Application of video recording technology to improve husbandry and reproduction in the carmine bee-eater (Merops n. nubicus).

    Science.gov (United States)

    Ferrie, Gina M; Sky, Christy; Schutz, Paul J; Quinones, Glorieli; Breeding, Shawnlei; Plasse, Chelle; Leighty, Katherine A; Bettinger, Tammie L

    2016-01-01

    Incorporating technology with research is becoming increasingly important to enhance animal welfare in zoological settings. Video technology is used in the management of avian populations to facilitate efficient information collection on aspects of avian reproduction that are impractical or impossible to obtain through direct observation. Disney's Animal Kingdom® maintains a successful breeding colony of Northern carmine bee-eaters. This African species is a cavity nester, making their nesting behavior difficult to study and manage in an ex situ setting. After initial research focused on developing a suitable nesting environment, our goal was to continue developing methods to improve reproductive success and increase the likelihood of chicks fledging. We installed infrared bullet cameras in five nest boxes and connected them to a digital video recording system, with data recorded continuously through the breeding season. We then scored and summarized nesting behaviors. Using remote video methods of observation provided much insight into the behavior of the birds in the colony's nest boxes. We observed aggression between birds during the egg-laying period, and therefore immediately removed all of the eggs for artificial incubation, which completely eliminated egg breakage. We also used observations of adult feeding behavior to refine the chick hand-rearing diet and practices. Although many video recording configurations have been summarized and evaluated in various reviews, we found success with the digital video recorder and infrared cameras described here. Applying emerging technologies to cavity-nesting avian species is a necessary addition to improving the management and sustainability of zoo avian populations. © 2015 Wiley Periodicals, Inc.

  19. Limits on surveillance: frictions, fragilities and failures in the operation of camera surveillance.

    NARCIS (Netherlands)

    Dubbeld, L.

    2004-01-01

    Public video surveillance tends to be discussed in either utopian or dystopian terms: proponents maintain that camera surveillance is the perfect tool in the fight against crime, while critics argue that the use of security cameras is central to the development of a panoptic, Orwellian surveillance

  20. Declarative camera control for automatic cinematography

    Energy Technology Data Exchange (ETDEWEB)

    Christianson, D.B.; Anderson, S.E.; Li-wei He [Univ. of Washington, Seattle, WA (United States)] [and others]

    1996-12-31

    Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically-placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.
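
    The snippet below is not DCCL syntax; it is only a hedged illustration of the underlying idea, that a film idiom can be encoded declaratively as data (shot descriptions plus transition rules) interpreted by a camera planner, rather than hard-coded as camera mathematics.

```python
# Illustrative only: a two-person dialogue idiom encoded as declarative data.
TWO_PERSON_DIALOGUE = {
    "name": "two-person dialogue",
    "shots": [
        {"type": "external", "subject": "A", "over_shoulder_of": "B"},
        {"type": "external", "subject": "B", "over_shoulder_of": "A"},
    ],
    # Declarative transition rule: cut when the speaker changes.
    "cut_when": lambda state: state["speaker"] != state["shot_subject"],
}

def next_shot(idiom, state):
    """Pick the shot that frames whichever character is speaking."""
    for shot in idiom["shots"]:
        if shot["subject"] == state["speaker"]:
            return shot
    return idiom["shots"][0]
```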

  1. Video enhancement effectiveness for target detection

    Science.gov (United States)

    Simon, Michael; Fischer, Amber; Petrov, Plamen

    2011-05-01

    Unmanned aerial vehicles (UAVs) capture real-time video data of military targets while keeping the warfighter at a safe distance. This keeps soldiers out of harm's way while they perform intelligence, surveillance and reconnaissance (ISR) and close-air support troops in contact (CAS-TIC) situations. The military also wants to use UAV video to achieve force multiplication. One method of achieving effective force multiplication involves fielding numerous UAVs with cameras and having multiple videos processed simultaneously by a single operator. However, monitoring multiple video streams is difficult for operators when the videos are of low quality. To address this challenge, we researched several promising video enhancement algorithms that focus on improving video quality. In this paper, we discuss our video enhancement suite and provide examples of video enhancement capabilities, focusing on stabilization, dehazing, and denoising. We provide results that show the effects of our enhancement algorithms on target detection and tracking algorithms. These results indicate that there is potential to assist the operator in identifying and tracking relevant targets with aided target recognition even on difficult video, increasing the force multiplier effect of UAVs. This work also forms the basis for human factors research into the effects of enhancement algorithms on ISR missions.
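
    As a hedged sketch of two of the enhancement categories named above, the code below denoises each frame and applies a simple translational stabilization using phase correlation in OpenCV. The production suite in the paper is certainly more sophisticated; the file name and parameters are illustrative.

```python
# Sketch: per-frame denoising plus translational stabilization of UAV video.
import cv2
import numpy as np

cap = cv2.VideoCapture("uav_clip.mp4")   # placeholder file name
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY).astype(np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    denoised = cv2.fastNlMeansDenoisingColored(frame, None, 5, 5, 7, 21)
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Phase correlation estimates the dominant inter-frame translation.
    (dx, dy), _ = cv2.phaseCorrelate(prev_gray, gray)
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])
    stabilized = cv2.warpAffine(denoised, M, (frame.shape[1], frame.shape[0]))
    prev_gray = gray
```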

  2. Summarization of Surveillance Video Sequences Using Face Quality Assessment

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.; Rahmati, Mohammad

    2011-01-01

    Constantly operating surveillance cameras in public places, such as airports and banks, produce huge amounts of video data. Faces in such videos can be extracted in real time. However, most of these detected faces are either redundant or useless. Redundant information adds computational costs to facial...

  3. What Video Styles can do for User Research

    DEFF Research Database (Denmark)

    Blauhut, Daniela; Buur, Jacob

    2009-01-01

    the video camera actually plays in studying people and establishing design collaboration still exists. In this paper we argue that traditional documentary film approaches like Direct Cinema and Cinéma Vérité show that a purely observational approach may not be most valuable for user research and that video...

  4. Content Area Vocabulary Videos in Multiple Contexts: A Pedagogical Tool

    Science.gov (United States)

    Webb, C. Lorraine; Kapavik, Robin Robinson

    2015-01-01

    The authors challenged pre-service teachers to digitally define a social studies or mathematical vocabulary term in multiple contexts using a digital video camera. The researchers sought to answer the following questions: 1. How will creating a video for instruction affect pre-service teachers' attitudes about teaching with technology, if at all?…

  5. Cellphones in Classrooms Land Teachers on Online Video Sites

    Science.gov (United States)

    Honawar, Vaishali

    2007-01-01

    Videos of teachers that students taped in secrecy are all over online sites like YouTube and MySpace. Angry teachers, enthusiastic teachers, teachers clowning around, singing, and even dancing are captured, usually with camera phones, for the whole world to see. Some students go so far as to create elaborately edited videos, shot over several…

  6. Video laryngoscopy in paediatric anaesthesia in South Africa

    African Journals Online (AJOL)

    2011-01-18

    Jan 18, 2011 ... the CMOS active pixel sensor (CMOS APS) video camera, which is mounted on a laryngoscope blade to generate a view of the anatomical structures. Although video laryngoscopes are based on the same technique as direct laryngoscopy, their use requires a different skill set. The VL blade is inserted in ...

  7. Building 3D Event Logs for Video Investigation

    NARCIS (Netherlands)

    Dang, T.K.; Worring, M.; Bui, T.D.

    2015-01-01

    In scene investigation, creating a video log captured using a handheld camera is more convenient and more complete than taking photos and notes. By introducing video analysis and computer vision techniques, it is possible to build a spatio-temporal representation of the investigation. Such a

  8. Automated Video Quality Assessment for Deep-Sea Video

    Science.gov (United States)

    Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.

    2015-12-01

    Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: single, directional light sources produce uneven luminance distributions and shadows; remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source, and can also influence the camera's autofocus and auto white-balance routines. 3. Color distortion (low contrast): the rate of absorption of light in water varies by wavelength, and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating
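
    The sketch below computes two frame-level cues suggested by the challenges listed above, luminance non-uniformity (uneven lighting) and RMS contrast (wavelength-dependent attenuation). These are illustrative assumptions, not ONC's actual measures, and any thresholds would need tuning against labeled footage.

```python
# Sketch: simple quality cues for deep-sea video frames.
import cv2
import numpy as np

def quality_cues(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Uneven lighting: variation of the local mean luminance across the frame.
    local_mean = cv2.blur(gray, (64, 64))
    nonuniformity = local_mean.std() / (local_mean.mean() + 1e-6)
    # Low contrast from wavelength-dependent absorption: RMS contrast.
    rms_contrast = gray.std() / 255.0
    return nonuniformity, rms_contrast
```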

  9. Overview of SWIR detectors, cameras, and applications

    Science.gov (United States)

    Hansen, Marc P.; Malchow, Douglas S.

    2008-03-01

    Imaging in the short wave infrared (SWIR) can bring useful contrast to situations and applications where visible or thermal imaging cameras are ineffective. This paper will define the short wave infrared technology and discuss developing imaging applications; then describe newly available 2-D (area) and 1-D (linear) arrays made with indium-gallium-arsenide (InGaAs), while presenting the wide range of applications with images and videos. Applications mentioned will be web inspection of continuous processes such as high temperature manufacturing processes, agricultural raw material cleaning and sorting, plastics recycling of automotive and consumer products, and a growing biological imaging technique, Spectral-Domain Optical Coherence Tomography.

  10. Video Malware - Behavioral Analysis

    Directory of Open Access Journals (Sweden)

    Rajdeepsinh Dodia

    2015-04-01

    The number of malware attacks exploiting the Internet is increasing day by day and has become a serious threat. Recent malware spreads through media players, embedded in video clips of a humorous nature to lure end users. Once the file is executed and installed, the behavior of the malware is in the malware author's hands. The malware spreads through the Internet, USB drives, and the sharing of files and folders, keeping its presence concealed. The funny video analyzed here, named after a film celebrity, was a malware variant collected from the laptop of a terror-outfit organization. It runs in the background and contains malicious code that steals sensitive user information, such as banking credentials (username and password), and sends it to a remote host called command and control; the stolen data is directed to an email address encapsulated in the malicious code. The malware can spread further through USB and other devices. In summary, the analysis reveals the presence of malicious code in an executable video file and describes its behavior.

  11. Measurement of the nonuniformity of first responder thermal imaging cameras

    Science.gov (United States)

    Lock, Andrew; Amon, Francine

    2008-04-01

    Police, firefighters, and emergency medical personnel are examples of first responders that are utilizing thermal imaging cameras in a very practical way every day. However, few performance metrics have been developed to assist first responders in evaluating the performance of thermal imaging technology. This paper describes one possible metric for evaluating the nonuniformity of thermal imaging cameras. Several commercially available uncooled focal plane array cameras were examined. Because of proprietary issues, each camera was considered a 'black box'. In these experiments, an extended-area blackbody (18 cm square) was placed very close to the objective lens of the thermal imaging camera. The resultant video output from the camera was digitized at a resolution of 640x480 pixels and a grayscale depth of 10 bits. The nonuniformity was calculated using the standard deviation of the digitized image pixel intensities divided by the mean of those pixel intensities. This procedure was repeated for each camera at several blackbody temperatures in the range from 30 °C to 260 °C. It was observed that the nonuniformity initially increases with temperature, then asymptotically approaches a maximum value. Nonuniformity is also applied to the calculation of the spatial frequency response, as well as providing a noise floor. The testing procedures described herein are being developed as part of a suite of tests to be incorporated into a performance standard covering thermal imaging cameras for first responders.
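
    The metric itself is simple enough to state in a few lines. The sketch below implements the calculation exactly as described, the standard deviation of the digitized pixel intensities divided by their mean, assuming the 640x480, 10-bit frames are available as a NumPy array.

```python
# The paper's nonuniformity metric: std of pixel intensities / mean.
import numpy as np

def nonuniformity(frame):
    """frame: 2-D array of pixel intensities (0..1023 for 10-bit data)."""
    pixels = frame.astype(np.float64)
    return pixels.std() / pixels.mean()
```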

  12. The VISTA IR camera

    Science.gov (United States)

    Dalton, Gavin B.; Caldwell, Martin; Ward, Kim; Whalley, Martin S.; Burke, Kevin; Lucas, John M.; Richards, Tony; Ferlet, Marc; Edeson, Ruben L.; Tye, Daniel; Shaughnessy, Bryan M.; Strachan, Mel; Atad-Ettedgui, Eli; Leclerc, Melanie R.; Gallie, Angus; Bezawada, Nagaraja N.; Clark, Paul; Bissonauth, Nirmal; Luke, Peter; Dipper, Nigel A.; Berry, Paul; Sutherland, Will; Emerson, Jim

    2004-09-01

    The VISTA IR Camera has now completed its detailed design phase and is on schedule for delivery to ESO's Cerro Paranal Observatory in 2006. The camera consists of 16 Raytheon VIRGO 2048x2048 HgCdTe arrays in a sparse focal plane sampling a 1.65 degree field of view. A 1.4m diameter filter wheel provides slots for 7 distinct science filters, each comprising 16 individual filter panes. The camera also provides autoguiding and curvature sensing information for the VISTA telescope, and relies on tight tolerancing to meet the demanding requirements of the f/1 telescope design. The VISTA IR camera is unusual in that it contains no cold pupil-stop, but rather relies on a series of nested cold baffles to constrain the light reaching the focal plane to the science beam. In this paper we present a complete overview of the status of the final IR Camera design, its interaction with the VISTA telescope, and a summary of the predicted performance of the system.

  13. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation tolerant camera that tolerates a total dose of 10{sup 6} - 10{sup 8} rad was developed. In order to develop the radiation tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, pan/tilt controller) was designed on the concept of remote control. Two types of radiation tolerant camera were fabricated, for use in underwater or normal environments. (author)

  14. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    Science.gov (United States)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically-attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or model correlation and updating for larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing capacity without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high spatial resolution, simultaneous, measurements. Combined with vision based algorithms (e.g., image correlation, optical flow), video camera based measurements have been successfully used for vibration measurements and subsequent modal analysis, based on techniques such as the digital image correlation (DIC) and the point-tracking. However, they typically require speckle pattern or high-contrast markers to be placed on the surface of structures, which poses challenges when the measurement area is large or inaccessible. This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviate the need of structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little
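
    As a simplified sketch of output-only modal identification from video (not the phase-based motion-magnification method of the cited work, which is considerably more refined), the code below uses dense optical flow as a full-field displacement proxy and takes a per-pixel FFT over time; peaks in the spectra indicate candidate modal frequencies.

```python
# Sketch: full-field vibration spectra from a grayscale video sequence.
import cv2
import numpy as np

def pixel_spectra(frames, fps):
    """frames: list of 8-bit grayscale images; returns (freqs, spectra)."""
    prev = frames[0]
    flows = []
    for cur in frames[1:]:
        flow = cv2.calcOpticalFlowFarneback(prev, cur, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow[..., 1])      # vertical displacement component
        prev = cur
    series = np.stack(flows, axis=0)    # shape (time, height, width)
    spectra = np.abs(np.fft.rfft(series, axis=0))
    freqs = np.fft.rfftfreq(series.shape[0], d=1.0 / fps)
    return freqs, spectra               # spectral peaks ~ modal frequencies
```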

  15. Using Photogrammetry to Estimate Tank Waste Volumes from Video

    Energy Technology Data Exchange (ETDEWEB)

    Field, Jim G. [Washington River Protection Solutions, LLC, Richland, WA (United States)

    2013-03-27

    Washington River Protection Solutions (WRPS) contracted with HiLine Engineering & Fabrication, Inc. to assess the accuracy of photogrammetry tools as compared to video Camera/CAD Modeling System (CCMS) estimates. This test report documents the results of using photogrammetry to estimate the volume of waste in tank 241-C-104 from post-retrieval videos, and the results of using photogrammetry to estimate the volume of waste piles in the CCMS test video.

  16. A tiny VIS-NIR snapshot multispectral camera

    Science.gov (United States)

    Geelen, Bert; Blanch, Carolina; Gonzalez, Pilar; Tack, Nicolaas; Lambrechts, Andy

    2015-03-01

    Spectral imaging can reveal a lot of hidden details about the world around us, but is currently confined to laboratory environments due to the need for complex, costly and bulky cameras. Imec has developed a unique spectral sensor concept in which the spectral unit is monolithically integrated on top of a standard CMOS image sensor at wafer level, hence enabling the design of compact, low cost and high acquisition speed spectral cameras with a high design flexibility. This flexibility has previously been demonstrated by imec in the form of three spectral camera architectures: firstly a high spatial and spectral resolution scanning camera, secondly a multichannel snapshot multispectral camera and thirdly a per-pixel mosaic snapshot spectral camera. These snapshot spectral cameras sense an entire multispectral data cube at one discrete point in time, extending the domain of spectral imaging towards dynamic, video-rate applications. This paper describes the integration of our per-pixel mosaic snapshot spectral sensors inside a tiny, portable and extremely user-friendly camera. Our prototype demonstrator cameras can acquire multispectral image cubes, either of 272x512 pixels over 16 bands in the VIS (470-620nm) or of 217x409 pixels over 25 bands in the VNIR (600-900nm) at 170 cubes per second for normal machine vision illumination levels. The cameras themselves are extremely compact based on Ximea xiQ cameras, measuring only 26x26x30mm, and can be operated from a laptop-based USB3 connection, making them easily deployable in very diverse environments.
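
    Reading bands out of such a per-pixel mosaic is essentially strided slicing. The sketch below assumes a 4x4 filter layout for the 16-band VIS design; the exact pattern on imec's sensors is not specified here, so the layout is an illustrative assumption.

```python
# Sketch: split a mosaic snapshot frame into its spectral bands.
import numpy as np

def split_mosaic(raw, pattern=4):
    """raw: 2-D sensor frame; returns a (pattern**2, H/pattern, W/pattern) cube."""
    bands = [raw[r::pattern, c::pattern]
             for r in range(pattern) for c in range(pattern)]
    return np.stack(bands, axis=0)

# A hypothetical 1088x2048 raw frame would yield a 16-band cube of
# 272x512 pixels, matching the VIS resolution quoted above.
```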

  17. QLab 3 show control projects for live performances & installations

    CERN Document Server

    Hopgood, Jeromy

    2013-01-01

    Used from Broadway to Britain's West End, QLab software is the tool of choice for many of the world's most prominent sound, projection, and integrated media designers. QLab 3 Show Control: Projects for Live Performances & Installations is a project-based book on QLab software covering sound, video, and show control. With information on both sound and video system basics and the more advanced functions of QLab such as MIDI show control, new OSC capabilities, networking, video effects, and microphone integration, each chapter's specific projects will allow you to learn the software's capabilities

  18. HIGH SPEED CAMERA

    Science.gov (United States)

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth microseconds, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, and an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, making a camera with this short a resolution time possible.

  19. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This chapter focuses on methodological issues that arise when using (digital) video for research communication, not least online. Video has long been used in research for data collection and research communication. With digitization and the internet, however, new opportunities and challenges have arisen for communicating and distributing research results to different target groups via video. At the same time, classic methodological issues, such as the researcher's positioning in relation to the object of study, remain relevant. Both classic and new issues are discussed in the chapter, which frames the discussion around different possible positionings: communicator, storyteller, or dialogist. These positions relate to genres within 'academic video'. Finally, a methodological toolbox is presented with tools for planning...

  20. Conveyor installation tools

    National Research Council Canada - National Science Library

    1984-01-01

    The USBM in order to reduce accidents associated with transporting, installing or maintaining belt conveyors developed 3 devices for use in moving and positioning conveyor components (the saucer skid...

  1. 3D Projection Installations

    DEFF Research Database (Denmark)

    Halskov, Kim; Johansen, Stine Liv; Bach Mikkelsen, Michelle

    2014-01-01

    Three-dimensional projection installations are particular kinds of augmented spaces in which a digital 3-D model is projected onto a physical three-dimensional object, thereby fusing the digital content and the physical object. Based on interaction design research and media studies, this article...... contributes to the understanding of the distinctive characteristics of such a new medium, and identifies three strategies for designing 3-D projection installations: establishing space; interplay between the digital and the physical; and transformation of materiality. The principal empirical case, From...... Fingerplan to Loop City, is a 3-D projection installation presenting the history and future of city planning for the Copenhagen area in Denmark. The installation was presented as part of the 12th Architecture Biennale in Venice in 2010....

  2. Biggest semiconductor installed

    CERN Multimedia

    2008-01-01

    Scientists and technicians at the European Laboratory for Particle Physics, commonly known by its French acronym CERN (Centre Européen pour la Recherche Nucléaire), have completed the installation of the largest semiconductor silicon detector.

  3. Wide angle pinhole camera

    Science.gov (United States)

    Franke, J. M.

    1978-01-01

    Hemispherical refracting element gives pinhole camera 180 degree field-of-view without compromising its simplicity and depth-of-field. Refracting element, located just behind pinhole, bends light coming in from sides so that it falls within image area of film. In contrast to earlier pinhole cameras that used water or other transparent fluids to widen field, this model is not subject to leakage and is easily loaded and unloaded with film. Moreover, by selecting glass with different indices of refraction, field at film plane can be widened or reduced.

  4. Safety effects of fixed speed cameras - An empirical Bayes evaluation.

    Science.gov (United States)

    Høye, Alena

    2015-09-01

    The safety effects of 223 fixed speed cameras that were installed between 2000 and 2010 in Norway were investigated in a before-after empirical Bayes study with control for regression to the mean (RTM). Effects of trend, volumes, and speed limit changes are controlled for as well. On road sections between 100m upstream and 1km downstream of the speed cameras a statistically significant reduction of the number of injury crashes by 22% was found. For killed and severely injured (KSI) and on longer road sections none of the results are statistically significant. However, speed cameras that were installed in 2004 or later were found to reduce injury crashes and the number of KSI on road sections from 100m upstream to both 1km and 3km downstream of the speed cameras. Larger effects were found for KSI than for injury crashes and the effects decrease with increasing distance from the speed cameras. At the camera sites (100m up- and down-stream) crash reductions are smaller and non-significant, but highly uncertain and possibly underestimated. Copyright © 2015 Elsevier Ltd. All rights reserved.
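
    A minimal sketch of the empirical Bayes before-after logic used to control for regression to the mean: the expected crash count blends a safety performance function (SPF) prediction with the observed before-period count, weighted via the SPF's overdispersion parameter. The function name and example numbers are illustrative, not values from the study.

```python
# Sketch: empirical Bayes before-after effect estimate for one site.
def eb_effect(observed_before, spf_predicted_before, overdispersion,
              observed_after, ratio_after_before=1.0):
    # Weight w shrinks high observed counts toward the SPF prediction,
    # which is what controls for regression to the mean.
    w = 1.0 / (1.0 + overdispersion * spf_predicted_before)
    eb_before = w * spf_predicted_before + (1 - w) * observed_before
    expected_after = eb_before * ratio_after_before  # trend/volume adjustment
    return observed_after / expected_after           # <1 means fewer crashes

# e.g. eb_effect(12, 7.5, 0.2, 6) -> ~0.59, an index of effect at one site
```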

  5. Leadership for Sustainable Installations

    Science.gov (United States)

    2011-04-01

    Leadership for Sustainable Installations, by COL Charles Allen (Ret), U.S. Army War College, April 2011. ...of the Army civilian workforce. During that decade, we followed the mandate of A-76 Commercial Sourcing and focused on developing the Most Efficient

  6. Installation af opvaskemaskine

    DEFF Research Database (Denmark)

    Christiansen, J.; Skibstrup Eriksen, S.; Nielsen, F.

    This SBI guideline is part of a series on modernizing the installations in the older housing stock. It is aimed at residents, homeowners, plumbing installers, and other interested parties. The guideline contains general sections on the choice and placement of a dishwasher, the installation process, relations with the authorities, prices, financing options, and so on. It also contains technical sections on the water supply, drainage, and electrical installation involved in installing a dishwasher.

  7. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... World Videos. The workshops were run on December 4, 2016, in Cancun in Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers...

  8. From different angles : Exploring and applying the design potential of video

    NARCIS (Netherlands)

    Pasman, G.J.

    2012-01-01

    Recent developments in both hardware and software have brought video within the scope of design students as a new visual design tool. Being more and more equipped with cameras, for example in their smartphones, and video editing programs on their computers, they are increasingly using video to record

  9. Compact video synopsis via global spatiotemporal optimization.

    Science.gov (United States)

    Nie, Yongwei; Xiao, Chunxia; Sun, Hanqiu; Li, Ping

    2013-10-01

    Video synopsis aims at providing condensed representations of video data sets that can nowadays be easily captured with digital cameras, especially for daily surveillance videos. Previous work in video synopsis usually moves active objects along the time axis, which inevitably causes collisions among the moving objects if compressed too much. In this paper, we propose a novel approach for compact video synopsis using a unified spatiotemporal optimization. Our approach globally shifts moving objects in both the spatial and temporal domains, shifting objects temporally to reduce the length of the video and shifting colliding objects spatially to avoid visible collision artifacts. Furthermore, using a multilevel patch relocation (MPR) method, the moving space of the original video is expanded into a compact background based on environmental content to fit with the shifted objects. The shifted objects are finally composited with the expanded moving space to obtain the high-quality video synopsis, which is more condensed while remaining free of collision artifacts. Our experimental results have shown that the compact video synopsis we produced can be browsed quickly, preserves relative spatiotemporal relationships, and avoids motion collisions.
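
    The sketch below illustrates only the temporal-shift component with a greedy, toy placement of object "tubes"; the paper instead solves a unified spatiotemporal optimization and relocates background patches, so this is a hedged simplification with hypothetical interfaces.

```python
# Toy sketch of temporal compression in video synopsis: each object tube is
# shifted as early as possible without colliding with already-placed tubes.
def place_tubes(tubes, overlaps):
    """tubes: iterable of object tracks; overlaps(a, sa, b, sb) returns True
    if tube a started at frame sa collides with tube b started at sb."""
    placed = []                                  # (tube, start) pairs
    for tube in tubes:
        start = 0
        while any(overlaps(tube, start, other, s) for other, s in placed):
            start += 1                           # delay until collision-free
        placed.append((tube, start))
    return placed                                # compact, collision-free layout
```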

  10. Privacy-protecting video surveillance

    Science.gov (United States)

    Wickramasuriya, Jehan; Alhazzazi, Mohanned; Datt, Mahesh; Mehrotra, Sharad; Venkatasubramanian, Nalini

    2005-02-01

    Forms of surveillance are very quickly becoming an integral part of crime control policy, crisis management, social control theory and community consciousness. In turn, it has been used as a simple and effective solution to many of these problems. However, privacy-related concerns have been expressed over the development and deployment of this technology. Used properly, video cameras help expose wrongdoing but typically come at the cost of privacy to those not involved in any maleficent activity. This work describes the design and implementation of a real-time, privacy-protecting video surveillance infrastructure that fuses additional sensor information (e.g. Radio-frequency Identification) with video streams and an access control framework in order to make decisions about how and when to display the individuals under surveillance. This video surveillance system is a particular instance of a more general paradigm of privacy-protecting data collection. In this paper we describe in detail the video processing techniques used in order to achieve real-time tracking of users in pervasive spaces while utilizing the additional sensor data provided by various instrumented sensors. In particular, we discuss background modeling techniques, object tracking and implementation techniques that pertain to the overall development of this system.
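
    A minimal sketch of the background-modeling front end such a system needs, using OpenCV's MOG2 subtractor to find moving blobs and blurring them as a stand-in privacy action. The real system fuses RFID data and applies an access-control policy; the source name and thresholds here are illustrative.

```python
# Sketch: background subtraction, blob detection, and a blur-based redaction.
import cv2

cap = cv2.VideoCapture("surveillance.avi")      # placeholder source
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:            # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            # Privacy action: blur the tracked region unless policy allows it.
            frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w],
                                                   (31, 31), 0)
```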

  11. Online scene change detection of multicast (MBone) video

    Science.gov (United States)

    Zhou, Wensheng; Shen, Ye; Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

    Many multimedia applications, such as multimedia data management systems and communication systems, require efficient representation of multimedia content. Semantic interpretation of video content has thus been a popular research area. Currently, most content-based video representation involves the segmentation of video based on key frames, which are generated using scene change detection techniques as well as camera/object motion; video features can then be extracted from the key frames. However, most of this research performs off-line video processing, in which the whole video scope is known a priori, allowing multiple scans of the stored video files during processing. In comparison, relatively little research has been done in the area of on-line video processing, which is crucial in video communication applications such as on-line collaboration, news broadcasts, and so on. Our research investigates on-line real-time scene change detection of multicast video over the Internet. Our on-line processing system is designed to meet the requirements of real-time video multicasting over the Internet and to utilize the successful video parsing techniques available today. The proposed algorithms extract key frames from video bitstreams sent through the MBone network, and the extracted key frames are multicast as annotations or metadata over a separate channel to assist in content filtering such as that anticipated to be in use by on-line filtering proxies in the Internet. The performance of the proposed algorithms is demonstrated and discussed in this paper.
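
    As a hedged sketch of a single-pass key-frame test in this spirit (not the authors' exact algorithm), the function below declares a scene change when the HSV histogram of the current frame correlates poorly with the last key frame; the threshold is ad hoc.

```python
# Sketch: histogram-based on-line scene change test.
import cv2

def is_scene_change(frame, key_frame, threshold=0.5):
    h1 = cv2.calcHist([cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)],
                      [0, 1], None, [50, 60], [0, 180, 0, 256])
    h2 = cv2.calcHist([cv2.cvtColor(key_frame, cv2.COLOR_BGR2HSV)],
                      [0, 1], None, [50, 60], [0, 180, 0, 256])
    cv2.normalize(h1, h1)
    cv2.normalize(h2, h2)
    similarity = cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)
    return similarity < threshold   # low correlation -> new scene / key frame
```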

  12. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  13. The canopy camera

    Science.gov (United States)

    Harry E. Brown

    1962-01-01

    The canopy camera is a device of new design that takes wide-angle, overhead photographs of vegetation canopies, cloud cover, topographic horizons, and similar subjects. Since the entire hemisphere is photographed in a single exposure, the resulting photograph is circular, with the horizon forming the perimeter and the zenith the center. Photographs of this type provide...

  14. NIR Camera/spectrograph: TEQUILA

    Science.gov (United States)

    Ruiz, E.; Sohn, E.; Cruz-Gonzalez, I.; Salas, L.; Parraga, A.; Torres, R.; Perez, M.; Cobos, F.; Tejada, C.; Iriarte, A.

    1998-11-01

    We describe the configuration and operation modes of the IR camera/spectrograph called TEQUILA, based on a 1024x1024 HgCdTe FPA (HAWAII). The optical system will allow three possible modes of operation: direct imaging, low and medium resolution spectroscopy, and polarimetry. The basic system is designed to consist of the following: 1) An LN2 dewar that houses the FPA together with the preamplifiers and a 24-position filter cylinder. 2) Control and readout electronics based on DSP modules linked to a workstation through fiber optics. 3) An optomechanical assembly cooled to -30 °C that provides efficient operation of the instrument in its various modes. 4) A control module for the moving parts of the instrument. The opto-mechanical assembly will have the necessary provisions to install a scanning Fabry-Perot interferometer and an adaptive optics correction system. The final image acquisition and control of the whole instrument is carried out on a workstation to provide the observer with a friendly environment. The system will operate at the 2.1 m telescope at the Observatorio Astronomico Nacional in San Pedro Martir, B.C. (Mexico), and is intended to be a first-light instrument for the new 7.8 m Mexican Infrared-Optical Telescope (TIM).

  15. Differential geometry measures of nonlinearity for the video tracking problem

    Science.gov (United States)

    Mallick, Mahendra; La Scala, Barbara F.

    2006-05-01

    Tracking people and vehicles in an urban environment using video cameras onboard unmanned aerial vehicles has drawn a great deal of interest in recent years due to their low cost compared with expensive radar systems. Video cameras onboard a number of small UAVs can provide inexpensive, effective, and highly flexible airborne intelligence, surveillance and reconnaissance as well as situational awareness functions. The perspective transformation is a commonly used general measurement model for the video camera when the variation in terrain height in the object scene is not negligible and the distance between the camera and the scene is not large. The perspective transformation is a nonlinear function of the object position. Most video tracking applications use a nearly constant velocity model (NCVM) of the target in the local horizontal plane. The filtering problem is nonlinear due to nonlinearity in the measurement model. In this paper, we present algorithms for quantifying the degree of nonlinearity (DoN) by calculating the differential geometry based parameter-effects curvature and intrinsic curvature measures of nonlinearity for the video tracking problem. We use the constant velocity model (CVM) of a target in 2D with simulated video measurements in the image plane. We have presented preliminary results using 200 Monte Carlo simulations and future work will focus on detailed numerical results. Our results for the chosen video tracking problem indicate that the DoN is low and therefore, we expect the extended Kalman filter to be reasonable choice.
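
    The sketch below writes down a pinhole perspective measurement model of the kind discussed, a ground-plane target projected to pixels, together with a numerical Jacobian used for linearization in an extended Kalman filter. The focal length and camera position are illustrative assumptions; the paper's curvature measures would be built from higher-order derivatives of such a mapping.

```python
# Sketch: nonlinear perspective measurement model plus numerical Jacobian.
import numpy as np

def project(state, f=800.0, cam_pos=np.array([0.0, 0.0, 100.0])):
    """state: target position (x, y) on the ground plane; returns pixel (u, v)."""
    rel = np.array([state[0], state[1], 0.0]) - cam_pos
    return f * rel[:2] / rel[2]         # nonlinear in the state

def numerical_jacobian(h, x, eps=1e-6):
    """Central-difference Jacobian of a measurement function h at state x."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((2, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (h(x + dx) - h(x - dx)) / (2 * eps)
    return J                             # used to linearize the model in an EKF
```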

  17. CCD Camera Detection of HIV Infection.

    Science.gov (United States)

    Day, John R

    2017-01-01

    Rapid and precise quantification of the infectivity of HIV is important for molecular virologic studies, as well as for measuring the activities of antiviral drugs and neutralizing antibodies. An indicator cell line, a CCD camera, and image-analysis software are used to quantify HIV infectivity. The cells of the P4R5 line, which express the receptors for HIV infection as well as β-galactosidase under the control of the HIV-1 long terminal repeat, are infected with HIV and then incubated 2 days later with X-gal to stain the infected cells blue. Digital images of monolayers of the infected cells are captured using a high resolution CCD video camera and a macro video zoom lens. A software program is developed to process the images and to count the blue-stained foci of infection. The described method allows for the rapid quantification of the infected cells over a wide range of viral inocula with reproducibility, accuracy and at relatively low cost.
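
    A minimal sketch of the counting step, assuming OpenCV: threshold the blue-stained foci in HSV space and count connected components above a size cutoff. The color bounds and minimum area are illustrative; the published software will differ.

```python
# Sketch: count blue-stained foci of infection in a monolayer image.
import cv2
import numpy as np

def count_blue_foci(image_bgr, min_area=20):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (90, 50, 50), (130, 255, 255))  # rough blue range
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Component 0 is the background; filter the remainder by area.
    return int(np.sum(stats[1:, cv2.CC_STAT_AREA] >= min_area))
```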

  18. NOAA Polyline Shapefile - Drop Camera transects, US Caribbean – Virgin Passage and St. John Shelf - Project NF-03-10-USVI-HAB - (2010), UTM 20N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  19. NOAA Shapefile - Drop Camera Transects Lines, USVI 2011, Seafloor Characterization of the US Caribbean - Nancy Foster - NF-11-1 (2011), UTM 20N NAD83 (NCEI Accession 0131858)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  20. NOAA Polyline Shapefile - Drop Camera transects, US Caribbean – Virgin Passage and St. John Shelf - Project NF-03-10-USVI-HAB - (2010), UTM 20N NAD83 (NCEI Accession 0131854)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  1. MISR radiometric camera-by-camera Cloud Mask V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset. It is used to determine whether a scene is classified as clear or cloudy. A new parameter has...

  2. Analysis of Camera Arrays Applicable to the Internet of Things.

    Science.gov (United States)

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-03-22

    The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and helps with viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used in various applications and analyzed in research work, there has been little comparative discussion of them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold of shooting distance for converged cameras is 7 m. In addition, we design a camera array in our work that can be used as a parallel camera array as well as a converged camera array, and take some images and videos with it to identify the threshold.
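
    A back-of-envelope sketch of the geometry being compared: a parallel rig yields purely horizontal disparity (d = fB/Z), while a converged rig introduces a toe-in angle that produces vertical parallax for off-axis points. The numbers below are illustrative, not values from the paper.

```python
# Sketch: basic stereo-rig geometry for parallel vs. converged cameras.
import math

def horizontal_disparity_parallel(f_mm, baseline_m, depth_m, pixel_mm):
    """Disparity in pixels for a parallel stereo pair: d = f*B/Z."""
    return (f_mm * baseline_m / depth_m) / pixel_mm

def vergence_angle(baseline_m, convergence_dist_m):
    """Toe-in angle of each camera in a converged pair (radians)."""
    return math.atan((baseline_m / 2) / convergence_dist_m)

# e.g. an 8 mm lens, 65 mm baseline, 5 micron pixels, target at 7 m:
print(horizontal_disparity_parallel(8.0, 0.065, 7.0, 0.005))  # ~14.9 px
```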

  3. Capping off installation

    CERN Multimedia

    2006-01-01

    Installation of the cathode strip chambers for the muon system on the CMS positive endcap has been completed. Technicians install one of the last muon system cathode strip chambers on the CMS positive endcap. Like successfully putting together the pieces of a giant puzzle, installation of the muon system cathode strip chambers on one of the CMS endcaps has been completed. Total installation of the cathode strip chambers (CSC) is now 91 percent complete; only one ring of chambers needs to be mounted on the remaining endcap to finish installation of the entire system. To guarantee a good fit for the 468 total endcap muon system CSCs, physicists and engineers from the collaboration spent about 10 years carefully planning the design. The endcap muon system's cables, boxes, pipes and other parts were designed and integrated using a 3D computerized model. 'It took a long time to do all the computer modelling, but in the long run it saved us an enormous amount of time because it meant that everything fit together,...

  4. A multiple camera tongue switch for a child with severe spastic quadriplegic cerebral palsy.

    Science.gov (United States)

    Leung, Brian; Chau, Tom

    2010-01-01

    The present study proposed a video-based access technology that facilitated a non-contact tongue protrusion access modality for a 7-year-old boy with severe spastic quadriplegic cerebral palsy (GMFCS level 5). The proposed system featured a centre camera and two peripheral cameras to extend coverage of the frontal face view of this user for longer durations. The child participated in a descriptive case study. The participant underwent 3 months of tongue protrusion training while the multiple camera tongue switch prototype was being prepared. Later, the participant was brought back for five experiment sessions where he worked on a single-switch picture matching activity, using the multiple camera tongue switch prototype in a controlled environment. The multiple camera tongue switch achieved an average sensitivity of 82% and specificity of 80%. In three of the experiment sessions, the peripheral cameras were associated with most of the true positive switch activations. These activations would have been missed by a centre-camera-only setup. The study demonstrated proof-of-concept of a non-contact tongue access modality implemented by a video-based system involving three cameras and colour video processing.

  5. Goniometer to calibrate system cameras or amateur cameras

    Science.gov (United States)

    Hakkarainen, J.

    An accurate and rapid horizontal goniometer was developed to determine the optical properties of film cameras. Radial and decentering distortion, color defects, optical resolution, and small object transmission factors are measured according to light wavelengths and symmetry. The goniometer can be used to calibrate cameras for photogrammetry, to determine the effects of remoteness on image geometry, distortion symmetry, efficiency of lens lighting film systems, to develop quality criteria for lenses, and to test camera lens and camera defects after an incident.

  6. Basic electrical installation work level 2

    CERN Document Server

    Linsley, Trevor

    2015-01-01

    Everything needed to pass the first part of the City & Guilds 2365 Diploma in Electrical InstallationsUpdated in line with the 3rd Amendment of the 17th Edition IET Wiring Regulations, this new edition covers the City & Guilds 2365-02 course. Written in an accessible style with a chapter dedicated to each unit of the syllabus, this book helps you to master each topic before moving on to the next. End of chapter revision questions enable learners to check their understanding and consolidate key concepts learnt in each chapter. With a companion website containing videos, animations, worksheets a

  7. Tracking camera control in endoscopic dacryocystorhinostomy surgery.

    Science.gov (United States)

    Wawrzynski, J R; Smith, P; Tang, L; Hoare, T; Caputo, S; Siddiqui, A A; Tsatsos, M; Saleh, G M

    2015-12-01

    Poor camera control during endoscopic dacryocystorhinostomy (EnDCR) surgery can cause inadequate visualisation of the anatomy and suboptimal surgical outcomes. This study investigates the feasibility of using computer vision tracking in EnDCR surgery as a potential formative feedback tool for the quality of endoscope control. A prospective cohort analysis was undertaken in the operating theatre comparing junior (fewer than 100 completed procedures) versus senior surgeons performing routine EnDCR surgery. Computer vision tracking was applied to endoscopic video footage of the surgery: total number of movements, camera path length in pixels, and surgical time were determined for each procedure. A Mann-Whitney U-test was used to test for a significant difference between juniors and seniors (P < 0.05). The main outcome measures were the total number of movements of the endoscope per procedure and the path length of the endoscope per procedure. Twenty videos, 10 from junior surgeons and 10 from senior surgeons, were analysed. Feasibility of our tracking system was demonstrated. Mean camera path lengths were significantly different at 119,329 px (juniors) versus 43,697 px (seniors), P < 0.05. The mean number of movements was significantly different at 9134 (juniors) versus 3690 (seniors), P < 0.05. These quantifiable differences demonstrate construct validity for computer vision endoscope tracking as a measure of surgical experience. Computer vision tracking is a potentially useful structured and objective feedback tool to assist trainees in improving endoscope control. It enables juniors to examine how their pattern of endoscope control differs from that of seniors, focusing in particular on sections where they are most divergent. © 2015 John Wiley & Sons Ltd.
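
    A hedged sketch of how such motion metrics could be computed (the published tracker may differ): estimate a global inter-frame translation with phase correlation, then accumulate path length in pixels and count movements above a small dead-band.

```python
# Sketch: camera path length and movement count from endoscopic footage.
import cv2
import numpy as np

def motion_metrics(video_path, deadband_px=2.0):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY).astype(np.float32)
    path_len, moves = 0.0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        (dx, dy), _ = cv2.phaseCorrelate(prev, gray)  # global translation
        step = (dx * dx + dy * dy) ** 0.5
        if step > deadband_px:          # count only deliberate movements
            moves += 1
            path_len += step
        prev = gray
    return path_len, moves
```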

  8. Python Introduction and Installation

    Directory of Open Access Journals (Sweden)

    William J. Turkel

    2012-07-01

    Full Text Available This first lesson in our section on dealing with Online Sources is designed to get you and your computer set up to start programming. We will focus on installing the relevant software – all free and reputable – and finally we will help you to get your toes wet with some simple programming that provides immediate results. In this opening module you will install the Python programming language, the Beautiful Soup HTML/XML parser, and a text editor. Screencaps provided here come from Komodo Edit, but you can use any text editor capable of working with Python. Here’s a list of other options: Python Editors. Once everything is installed, you will write your first programs, “Hello World” in Python and HTML.
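
    As an illustration of the lesson's end point, the two first programs it names are one-liners; a minimal sketch (file names are our own, not the lesson's):

        # hello-world.py -- print a plain-text greeting
        print('Hello World')

        # hello-world-html.py -- the same greeting wrapped in minimal HTML,
        # as the lesson's HTML variant would produce for a browser
        print('<html><body><p>Hello World</p></body></html>')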

  9. Olympic Coast National Marine Sanctuary - stil120_0602a - Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during September 2006. Video data...

  10. still116_0501n-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  11. still116_0501d-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  12. still116_0501c-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  13. still116_0501s-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  14. still114_0402c-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  15. still115_0403-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  16. still114_0402b-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  17. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation must be realized for our retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ in Visual Studio 2010. Experimental results show that the system realizes acquisition and display for both cameras.
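
    The paper's transformation code is not reproduced in this record; the following is a minimal Python/OpenCV sketch of the general idea, assuming a log-polar pixel layout (a common retina-like distribution; the actual sensor layout and the authors' MIL/VC++ implementation may differ):

        import cv2
        import numpy as np

        def logpolar_to_cartesian(src, out_size, r_max):
            # src: retina-like image stored as (rings x sectors)
            rings, sectors = src.shape[:2]
            h, w = out_size
            cx, cy = w / 2.0, h / 2.0
            ys, xs = np.indices((h, w), dtype=np.float32)
            dx, dy = xs - cx, ys - cy
            r = np.sqrt(dx * dx + dy * dy)
            theta = (np.arctan2(dy, dx) + np.pi) / (2 * np.pi)  # angle in 0..1
            map_x = (theta * (sectors - 1)).astype(np.float32)  # fractional sector
            map_y = (np.log(np.maximum(r, 1.0)) / np.log(r_max)
                     * (rings - 1)).astype(np.float32)          # fractional ring
            # cv2.remap performs the sub-pixel (bilinear) interpolation
            return cv2.remap(src, map_x, map_y, interpolation=cv2.INTER_LINEAR)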

  18. Electrical installations technology

    CERN Document Server

    Whitfield, J F

    1968-01-01

    Electrical Installations Technology covers the syllabus of the City and Guilds of London Institute course No. 51, the "Electricians B Certificate". This book is composed of 15 chapters that deal with basic electrical science and electrical installations. The introductory chapters discuss the fundamentals and basic electrical principles, including the concepts of mechanics, heat, magnetic fields, electric currents, power, and energy. These chapters also explore the atomic theory of electric current and the electric circuit, conductors, and insulators. The subsequent chapter focuses on the chemis…

  19. Design of video interface conversion system based on FPGA

    Science.gov (United States)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by the video processing unit through a Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with minimal board size.

  20. Benchmarking the Optical Resolving Power of Uav Based Camera Systems

    Science.gov (United States)

    Meißner, H.; Cramer, M.; Piltz, B.

    2017-08-01

    UAV-based imaging and 3D object point generation is an established technology. Some UAV users try to address (very) high-accuracy applications, i.e. inspection or monitoring scenarios. In order to guarantee such a level of detail and accuracy, high-resolution imaging systems are mandatory. Furthermore, image quality considerably impacts photogrammetric processing, as the tie point transfer, mandatory for forming the block geometry, fully relies on the radiometric quality of the images. Thus, empirical testing of radiometric camera performance is an important issue, in addition to the standard (geometric) calibration that is normally covered first. Within this paper the resolving power of ten different camera/lens installations has been investigated. The selected systems represent different camera classes, like DSLRs, system cameras, larger-format cameras and proprietary systems. As the systems have been tested in well-controlled laboratory conditions and objective quality measures have been derived, individual performance can be compared directly, thus representing a first benchmark on the radiometric performance of UAV cameras. The results have shown that not only the selection of an appropriate lens and camera body has an impact; in addition, the image pre-processing, i.e. the use of a specific debayering method, significantly influences the final resolving power.
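
    The influence of the debayering method on resolving power can be illustrated with OpenCV's demosaicing variants; a hedged sketch (the file name, frame size and Bayer pattern below are placeholders, and the paper's own pre-processing chain is not specified in this record):

        import cv2
        import numpy as np

        # Placeholder raw Bayer frame; real sizes and patterns vary per camera
        raw = np.fromfile('frame.raw', dtype=np.uint8).reshape(4000, 6000)

        # Different demosaicing methods trade speed against resolving power
        bilinear = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)      # fast, softest
        vng      = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR_VNG)  # variable number of gradients
        edge     = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR_EA)   # edge-aware

    Measuring a resolution target (e.g. a Siemens star) on each output would then quantify the differences the paper reports.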

  1. Managed Video as a Service for a Video Surveillance Model

    Directory of Open Access Journals (Sweden)

    Dan Benta

    2009-01-01

    Full Text Available The increasing demand for security systems has resulted in rapid development of video surveillance, and video surveillance has turned into a major area of interest and a management challenge. Personal experience in specialized companies helped me to adapt the demands of users of video security systems to system performance. It is known that people wish to obtain maximum profit with minimum effort, but security is not neglected. Surveillance systems and video monitoring should provide only necessary information and record only when there is activity. IP video surveillance services provide more safety in this sector, being able to record information on servers located elsewhere than the IP cameras. Also, these systems allow real-time monitoring of goods or activities that take place in supervised perimeters. Live viewing and recording can be done via the Internet from any computer, using a web browser. Access to the surveillance system is granted after user and password authentication.
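
    As a hedged illustration of the kind of remote live viewing the record describes (the URL, credentials and stream path below are placeholders, not taken from the paper), an IP camera's RTSP stream can be pulled and displayed with OpenCV:

        import cv2

        # Placeholder RTSP URL: user, password, host and path are hypothetical
        url = 'rtsp://user:password@192.168.1.10:554/stream1'
        cap = cv2.VideoCapture(url)

        while True:
            ok, frame = cap.read()          # grab the next frame from the stream
            if not ok:
                break                       # stream ended or network error
            cv2.imshow('live view', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):   # press 'q' to quit
                break

        cap.release()
        cv2.destroyAllWindows()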

  2. Marcel Odenbach. Konzept, Performance, Video, Installation 1975 – 1998

    DEFF Research Database (Denmark)

    Kacunko, Slavko

    …pedagogy; between 1987 and 1993 Slavko Kacunko studied art history and philosophy at the Faculty of Philosophy of the University of Zagreb. Between 1994 and 1998 he then attended the doctoral programme of the Faculty of Philosophy at the Heinrich-Heine-Universität Düsseldorf. With numerous…

  3. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real… Topics include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition.

  5. Streak camera techniques

    Energy Technology Data Exchange (ETDEWEB)

    Avara, R.

    1977-06-01

    An introduction to streak camera geometry, experimental techniques, and limitations is presented. Equations, graphs and charts are included to provide useful data for optimizing the associated optics to suit each experiment. A simulated analysis is performed on simultaneity and velocity measurements. An error analysis is also performed for these measurements, utilizing the Monte Carlo method to simulate the distribution of uncertainties associated with simultaneity-time measurements.
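
    The report's own Monte Carlo treatment is not reproduced here, but the approach it names can be sketched: draw the individual timing error sources from assumed distributions and inspect the spread of the resulting simultaneity estimate (all magnitudes below are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000                     # Monte Carlo trials

        # Assumed (illustrative) 1-sigma uncertainties, in nanoseconds
        sigma_sweep = 0.05              # streak writing-rate jitter
        sigma_readout = 0.03            # film/readout position error

        # Each trial perturbs the two arrival-time readings independently
        t1 = rng.normal(0.0, sigma_sweep, n) + rng.normal(0.0, sigma_readout, n)
        t2 = rng.normal(0.0, sigma_sweep, n) + rng.normal(0.0, sigma_readout, n)
        dt = t1 - t2                    # simultaneity (time-difference) estimate

        print(f'simultaneity spread: {dt.std():.3f} ns (1 sigma)')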

  6. Visual odometry from omnidirectional camera

    OpenAIRE

    Jiří DIVIŠ

    2012-01-01

    We present a system that estimates the motion of a robot relying solely on images from an onboard omnidirectional camera (visual odometry). Compared to other visual odometry hardware, ours is unusual in utilizing a high-resolution, low frame-rate (1 to 3 Hz) omnidirectional camera mounted on a robot that is propelled using continuous tracks. We focus on high-precision estimates in scenes where objects are far away from the camera. This is achieved by utilizing an omnidirectional camera that is able ...

  7. Implementing Network Video for Traditional Security and Innovative Applications: Best Practices and Uses for Network Video in K-12 Schools

    Science.gov (United States)

    Wren, Andrew

    2008-01-01

    Administrators are constantly seeking ways to cost-effectively and adequately increase security and improve efficiency in K-12 schools. While video is not a new tool to schools, the shift from analog to network technology has increased the accessibility and usability in a variety of applications. Properly installed and used, video is a powerful…

  8. TEM Video Compressive Sensing

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-01

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since its publication, there has been enormous growth in the application of CS and the development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and to the reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic-level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental conditions.
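
    As a hedged sketch of the coded-aperture measurement model described above (dimensions, masks and data here are invented; the real system uses hardware masks and statistical CS solvers such as TwIST or GMM-based inversion):

        import numpy as np

        rng = np.random.default_rng(1)
        T, H, W = 8, 64, 64             # 8 sub-frames coded into one readout

        video = rng.random((T, H, W))   # stand-in for the fast dynamic scene
        masks = rng.integers(0, 2, (T, H, W)).astype(float)  # binary coded apertures

        # Forward model: each sub-frame is modulated by its mask, and all
        # sub-frames integrate on the sensor during a single exposure
        measurement = (masks * video).sum(axis=0)

        # A CS algorithm would invert this many-to-one mapping to recover the
        # T sub-frames; a trivial baseline is the mask-weighted estimate
        weights = masks.sum(axis=0).clip(1)
        estimate = masks * measurement / weights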

  9. Gamma ray camera

    Science.gov (United States)

    Perez-Mendez, Victor

    1997-01-01

    A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the p-type upper layer, the intermediate layer and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.

  10. The DRAGO gamma camera.

    Science.gov (United States)

    Fiorini, C; Gola, A; Peloso, R; Longoni, A; Lechner, P; Soltau, H; Strüder, L; Ottobrini, L; Martelli, C; Lui, R; Madaschi, L; Belloli, S

    2010-04-01

    In this work, we present the results of the experimental characterization of the DRAGO (DRift detector Array-based Gamma camera for Oncology), a detection system developed for high-spatial-resolution gamma-ray imaging. This camera is based on a monolithic array of 77 silicon drift detectors (SDDs), with a total active area of 6.7 cm², coupled to a single 5-mm-thick CsI(Tl) scintillator crystal. The use of an array of SDDs provides a high quantum efficiency for the detection of the scintillation light together with a very low electronics noise. A very compact detection module based on the use of integrated readout circuits was developed. The performances achieved in gamma-ray imaging using this camera are reported here. When imaging a 0.2 mm collimated ⁵⁷Co source (122 keV) over different points of the active area, a spatial resolution ranging from 0.25 to 0.5 mm was measured. The depth-of-interaction capability of the detector, thanks to the use of a maximum-likelihood reconstruction algorithm, was also investigated by imaging a collimated beam tilted to an angle of 45 degrees with respect to the scintillator surface. Finally, the imager was characterized with in vivo measurements on mice, in a real preclinical environment.

  11. Hemispherical Laue camera

    Science.gov (United States)

    Li, James C. M.; Chu, Sungnee G.

    1980-01-01

    A hemispherical Laue camera comprises a crystal sample mount for positioning a sample to be analyzed at the center of sphere of a hemispherical, X-radiation sensitive film cassette, a collimator, a stationary or rotating sample mount and a set of standard spherical projection spheres. X-radiation generated from an external source is directed through the collimator to impinge onto the single crystal sample on the stationary mount. The diffracted beam is recorded on the hemispherical X-radiation sensitive film mounted inside the hemispherical film cassette in either transmission or back-reflection geometry. The distances travelled by X-radiation diffracted from the crystal to the hemispherical film are the same for all crystal planes which satisfy Bragg's Law. The recorded diffraction spots or Laue spots on the film thereby preserve both the symmetry information of the crystal structure and the relative intensities which are directly related to the relative structure factors of the crystal orientations. The diffraction pattern on the exposed film is compared with the known diffraction pattern on one of the standard spherical projection spheres for a specific crystal structure to determine the orientation of the crystal sample. By replacing the stationary sample support with a rotating sample mount, the hemispherical Laue camera can be used for crystal structure determination in a manner previously provided in conventional Debye-Scherrer cameras.
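
    For reference, the diffraction condition the camera geometry exploits is Bragg's law,

        \[ n\,\lambda = 2d\sin\theta, \]

    where \(\lambda\) is the X-ray wavelength, \(d\) the lattice-plane spacing, \(\theta\) the glancing angle between beam and plane, and \(n\) the diffraction order.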

  12. Passive imaging of wind surface flow using an infrared camera

    Science.gov (United States)

    Hagen, Nathan

    2017-12-01

    We present a method for passive imaging of wind motion against surfaces in a scene using an infrared video camera. Because the method does not require the introduction of contrast agents for visualization, it is possible to obtain real-time surface flow measurements across large areas and in natural outdoor conditions, without prior preparation of surfaces. We show that this method can be used not just for obtaining single snapshot images but also for real-time flow video, and demonstrate that it is possible to measure under a wide range of conditions.

  13. NIGERIAN HOME VIDEO MOVIES AND ENHANCED MUSIC ...

    African Journals Online (AJOL)

    Precious

    presented within the boundaries of acceptable standards and philosophic thought. Nigerian video films have been used to address a myriad of existing and emergent problems. Because of their distinctness and popularity, they represent a veritable tool for the deflation of anti-social practices and the installation of approved ...

  14. Bring your own camera to the trap: An inexpensive, versatile, and portable triggering system tested on wild hummingbirds.

    Science.gov (United States)

    Rico-Guevara, Alejandro; Mickley, James

    2017-07-01

    The study of animals in the wild offers opportunities to collect relevant information on their natural behavior and abilities to perform ecologically relevant tasks. However, it also poses challenges such as accounting for observer effects, human sensory limitations, and the time intensiveness of this type of research. To meet these challenges, field biologists have deployed camera traps to remotely record animal behavior in the wild. Despite their ubiquity in research, many commercial camera traps have limitations, and the species and behavior of interest may present unique challenges. For example, no camera traps support high-speed video recording. We present a new and inexpensive camera trap system that increases versatility by separating the camera from the triggering mechanism. Our system design can pair with virtually any camera and allows for independent positioning of a variety of sensors, all while being low-cost, lightweight, weatherproof, and energy efficient. By using our specialized trigger and customized sensor configurations, many limitations of commercial camera traps can be overcome. We use this system to study hummingbird feeding behavior using high-speed video cameras to capture fast movements and multiple sensors placed away from the camera to detect small body sizes. While designed for hummingbirds, our application can be extended to any system where specialized camera or sensor features are required, or commercial camera traps are cost-prohibitive, allowing camera trap use in more research avenues and by more researchers.

  15. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested, to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Designer to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options for camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary, and these comparisons will be used as further progress is made on the overall suit design. This prototype will not be finished in time for the scheduled Z2 suit testing, so my time was…

  16. SHIP CLASSIFICATION FROM MULTISPECTRAL VIDEOS

    Directory of Open Access Journals (Sweden)

    Frederique Robert-Inacio

    2012-05-01

    Full Text Available Surveillance of a seaport can be achieved by different means: radar, sonar, cameras, radio communications and so on. Such surveillance aims, on the one hand, to manage cargo and tanker traffic, and, on the other hand, to prevent terrorist attacks in sensitive areas. In this paper an application to video surveillance of a seaport entrance is presented, and more particularly, the different steps enabling the classification of mobile shapes. This classification is based on a parameter measuring the degree of similarity between the shape under study and a set of reference shapes. The classification result describes the considered mobile in terms of shape and speed.
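
    The paper's similarity parameter is not given in this record; Hu-moment contour matching is one standard stand-in for comparing a detected mobile shape against reference templates, sketched here (class names are illustrative):

        import cv2

        def classify_shape(contour, references):
            # references: dict mapping class names (e.g. 'cargo', 'tanker',
            # 'small boat' -- hypothetical labels) to template contours
            best_label, best_score = None, float('inf')
            for label, ref in references.items():
                # Hu-moment based similarity: lower score = more similar
                score = cv2.matchShapes(contour, ref, cv2.CONTOURS_MATCH_I1, 0.0)
                if score < best_score:
                    best_label, best_score = label, score
            return best_label, best_score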

  17. From different angles: Exploring and applying the design potential of video

    OpenAIRE

    Pasman, G.J.

    2012-01-01

    Recent developments in both hardware and software have brought video within the scope of design students as a new visual design tool. Being more and more equipped with cameras, for example in their smartphones, and with video editing programs on their computers, they are increasingly using video to record their research activities or present their design ideas. In design education, however, the full potential of video as a rich and contextual design medium is yet to be explored and developed. This p...

  18. Defect visualization in FRP-bonded concrete by using high speed camera and motion magnification technique

    Science.gov (United States)

    Qiu, Qiwen; Lau, Denvid

    2017-04-01

    A high-speed camera has the unique capacity of recording fast-moving objects. By using video processing techniques (e.g. motion magnification), the small motions recorded by the high-speed camera can be visualized. The combined use of a video camera and the motion magnification technique is attractive for inspecting structures from a distant scene of interest, owing to its commonplace availability, operational convenience, and cost-efficiency. This paper presents a non-contact method to evaluate defects in FRP-bonded concrete structural elements based on the surface motion analysis of high-speed video. In this study, an instantaneous air pressure is used to initiate the vibration of the FRP-bonded concrete and cause a distinct vibration at interfacial defects. The entire structural surface under the air pressure is recorded by a high-speed camera and the surface motion in the video is amplified by the motion magnification processing technique. The experimental results demonstrate that motion in the interfacial defect region can be visualized in the high-speed video with motion magnification. This validates the effectiveness of the new NDT method for defect detection in the whole composite structural member. The use of a high-speed camera and the motion magnification technique has the advantages of remote detection, efficient inspection, and sensitive measurement, which would be beneficial to structural health monitoring.
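
    Motion magnification is a published family of algorithms rather than something specific to this paper; a much-simplified Eulerian-style sketch (the full methods work on spatial pyramids with carefully chosen bands; the cutoffs below are placeholders) is:

        import numpy as np
        from scipy.signal import butter, filtfilt

        def magnify_motion(frames, fs, lo, hi, alpha):
            # frames: video stack of shape (T, H, W); fs: frame rate in Hz
            # Band-pass each pixel's time series and add the amplified band back
            b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype='band')
            stack = frames.astype(np.float64)
            band = filtfilt(b, a, stack, axis=0)   # temporal filter, per pixel
            return stack + alpha * band            # alpha = magnification factor

    Regions over a debonded interface, vibrating differently from well-bonded ones, would then stand out visually in the magnified sequence.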

  19. Design and characterization of a prototype divertor viewing infrared video bolometer for NSTX-U

    Science.gov (United States)

    van Eden, G. G.; Reinke, M. L.; Peterson, B. J.; Gray, T. K.; Delgado-Aparicio, L. F.; Jaworski, M. A.; Lore, J.; Mukai, K.; Sano, R.; Pandya, S. N.; Morgan, T. W.

    2016-11-01

    The InfraRed Video Bolometer (IRVB) is a powerful tool to measure radiated power in magnetically confined plasmas due to its ability to obtain 2D images of plasma emission using a technique that is compatible with the fusion nuclear environment. A prototype IRVB has been developed and installed on NSTX-U to view the lower divertor. The IRVB is a pinhole camera which images radiation from the plasma onto a 2.5 μm thick, 9 × 7 cm2 Pt foil and monitors the resulting spatio-temporal temperature evolution using an IR camera. The power flux incident on the foil is calculated by solving the 2D+time heat diffusion equation, using the foil's calibrated thermal properties. An optimized, high frame rate IRVB, is quantitatively compared to results from a resistive bolometer on the bench using a modulated 405 nm laser beam with variable power density and square wave modulation from 0.2 Hz to 250 Hz. The design of the NSTX-U system and benchtop characterization are presented where signal-to-noise ratios are assessed using three different IR cameras: FLIR A655sc, FLIR A6751sc, and SBF-161. The sensitivity of the IRVB equipped with the SBF-161 camera is found to be high enough to measure radiation features in the NSTX-U lower divertor as estimated using SOLPS modeling. The optimized IRVB has a frame rate up to 50 Hz, high enough to distinguish radiation during edge-localized-modes (ELMs) from that between ELMs.
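
    For reference, the 2D+time foil equation referred to above is commonly written (neglecting radiative and other loss terms that implementations may add) as

        \[ q(x, y, t) = k\,t_f \left( \frac{1}{\kappa}\frac{\partial T}{\partial t} - \nabla^2 T \right), \]

    where \(q\) is the incident power flux, \(T(x,y,t)\) the measured foil temperature, \(t_f\) the foil thickness (here 2.5 μm), \(k\) the thermal conductivity, and \(\kappa = k/(\rho c_p)\) the thermal diffusivity taken from the foil's calibrated properties.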

  1. Multiple-camera tracking: UK government requirements

    Science.gov (United States)

    Hosmer, Paul

    2007-10-01

    The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's new standard for Video Based Detection Systems (VBDS). The standard was launched in November 2006 and evaluations against it began in July 2007. With the first four i-LIDS scenarios completed, the Home Office Scientific Development Branch (HOSDB) is looking toward the future of intelligent vision in the security surveillance market by adding a fifth scenario to the standard. The fifth i-LIDS scenario will concentrate on the development, testing and evaluation of systems for the tracking of people across multiple cameras. HOSDB and the Centre for the Protection of National Infrastructure (CPNI) identified a requirement to track targets across a network of CCTV cameras using both live and post-event imagery. The Detection and Vision Systems group at HOSDB was asked to determine the current state of the market and develop an in-depth Operational Requirement (OR) based on government end-user requirements. Using this OR the i-LIDS team will develop a full i-LIDS scenario to aid the machine vision community in its development of multi-camera tracking systems. By defining a requirement for multi-camera tracking and building this into the i-LIDS standard, the UK government will provide a widely available tool that developers can use to help them turn theory and conceptual demonstrators into front-line applications. This paper will briefly describe the i-LIDS project and then detail the work conducted in building the new tracking aspect of the standard.

  2. Physics Girl: Where Education meets Cat Videos

    Science.gov (United States)

    Cowern, Dianna

    YouTube is usually considered an entertainment medium for watching cat, gaming, and music videos. But educational channels have been gaining momentum on the platform, some garnering millions of subscribers and billions of views. The Physics Girl YouTube channel is an educational series with PBS Digital Studios created by Dianna Cowern. Using Physics Girl as an example, this talk will examine what it takes to start a short-form educational video series, including logistics and resources. One benefit of video is that every failure is documented on camera and can, and will, be used in this talk as a learning tool. We will look at the channel's demographic reach, discuss best practices for effective physics outreach, and survey how online media and technology can facilitate good and bad learning. The aim of this talk is to show how videos are a unique way to share science and enrich the learning experience, in and out of the classroom.

  3. Implementation of multistandard video signals integrator

    Science.gov (United States)

    Zabołotny, Wojciech M.; Pastuszak, Grzegorz; Sokół, Grzegorz; Borowik, Grzegorz; Gąska, Michał; Kasprowicz, Grzegorz H.; Poźniak, Krzysztof T.; Abramowski, Andrzej; Buchowicz, Andrzej; Trochimiuk, Maciej; Frasunek, Przemysław; Jurkiewicz, Rafał; Nalbach-Moszynska, Małgorzata; Wawrzusiak, Radosław; Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Paweł; Jewartowski, Błażej; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata

    2017-08-01

    The paper describes the prototype implementation of the Video Signals Integrator (VSI). The function of the system is to integrate video signals from many sources. The VSI is a complex hybrid system consisting of hardware, firmware and software components. Its creation requires the joint effort of experts from different areas. The VSI capture device is a portable hardware device responsible for capturing video signals from different sources and in various formats, and for transmitting them to the server. The NVR server aggregates video and control streams coming from different sources and multiplexes them into logical channels, with each channel representing a single source. From there each channel can be distributed further to the end clients (consoles) for live display via a number of RTSP servers. The end client can, at the same time, inject control messages into a given channel to control the movement of a CCTV camera.

  4. Temporal compressive imaging for video

    Science.gov (United States)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, for example in gunpowder blast analysis or the observation of high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm. Equivalently, the video frame rate is increased by 8 times. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on the reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.

  5. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frames independently, performing relatively simple operations. Therefore, with DVC the complexity is shifted from encoder to decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application of the DVC principles to camera networks. Thanks to its reversed coding paradigm, M-DVC enables the exploitation of inter-camera redundancy without inter-camera communication, because the frames are encoded independently. One of the key elements in DVC is the Side Information (SI), which is an estimation…

  6. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    Science.gov (United States)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed, a thermal image was at first presented to the observer in the eyepiece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, with digital video streaming nowadays being state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged over such a long time are: the very conservative view of the military community, the long planning and turn-around times of programs, and the slower growth in pixel count of TIs in comparison to consumer cameras. With megapixel detectors, the CCIR output format is no longer sufficient. The paper discusses the state-of-the-art compression and streaming solutions for TIs.

  7. Make your own video with ActivePresenter

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    A step-by-step video tutorial on how to use ActivePresenter, a screen recording tool for Windows and Mac. The installation step is not needed for CERN users, as the product is already made available. This tutorial explains how to install ActivePresenter, how to do a screen recording and edit a video using ActivePresenter, and finally how to export the end product. Tell us what you think about this or any other video in this category via e-learning.support at cern.ch. All info about the CERN rapid e-learning project is linked from http://twiki.cern.ch/ELearning

  8. Effects of red light camera enforcement on red light violations in Arlington County, Virginia.

    Science.gov (United States)

    McCartt, Anne T; Hu, Wen

    2014-02-01

    In June 2010, Arlington County, Virginia, installed red light cameras at four heavily traveled signalized intersections. Effects of camera enforcement on red light violations were examined. Traffic was videotaped during the 1-month warning period and 1 month and 1 year after ticketing began at the four camera intersections, four non-camera "spillover" intersections in Arlington County (two on travel corridors with camera intersections, two on different corridors), and four non-camera "control" intersections in adjacent Fairfax County. Logistic regression models estimated changes in the likelihood of violations at camera and spillover intersections, relative to the expected likelihood without cameras, based on changes at control intersections. At camera intersections, there were significant reductions 1 year after ticketing in the odds of violations occurring at least 0.5 s (39%) and at least 1.5 s (86%) after lights turned red, relative to the expected odds without cameras, and a marginally significant 48% reduction in violations occurring at least 1 s into red. At non-camera intersections on corridors with camera intersections, the odds of violations occurring at least 0.5 s (14%), 1 s (25%), and 1.5 s (63%) into the red phase declined compared with expected odds, but not significantly. Odds of violations increased at the non-camera intersections located on other Arlington County travel corridors. Consistent with prior research, red light violations at camera-enforced intersections declined significantly. Reductions were greater the longer after the light turned red, when violations are more likely to cause crashes. Spillover benefits were observed only for nearby intersections on travel corridors with cameras and were not always significant. This evaluation examined the first year of Arlington County's red light camera program, which was modest in scope and without ongoing publicity. A larger, more widely publicized program is likely needed to achieve community-wide effects.
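
    The study's exact model specification is not given in this record; a common way to estimate such relative odds changes is a logistic model with a period-by-site interaction, sketched here with statsmodels (file and column names are invented):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical per-vehicle observations: violation (0/1), period
        # ('before'/'after' ticketing) and site ('camera'/'control')
        df = pd.read_csv('violations.csv')

        # The period x site interaction captures the change at camera
        # intersections net of the background change seen at controls
        model = smf.logit('violation ~ period * site', data=df).fit()

        # exp(beta) for the interaction term is a ratio of odds ratios,
        # i.e. the camera effect relative to expected odds without cameras
        print(np.exp(model.params))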

  9. Distributed embedded smart cameras architectures, design and applications

    CERN Document Server

    Velipasalar, Senem

    2014-01-01

    This publication addresses distributed embedded smart cameras: cameras that perform onboard analysis and collaborate with other cameras. This book provides the material required to better understand the architectural design challenges of embedded smart camera systems, the hardware/software ecosystem, the design approach for, and applications of distributed smart cameras together with the state-of-the-art algorithms. The authors concentrate on the architecture, hardware/software design, and realization of smart camera networks from applications to architectures, in particular in the embedded and mobile domains. The book: examines energy issues related to wireless communication, such as decreasing energy consumption to increase battery life; discusses processing large volumes of video data in an embedded environment in real time; covers the design of realistic applications of distributed and embedded smart…

  10. Improving the Quality of Color Colonoscopy Videos

    Directory of Open Access Journals (Sweden)

    Dahyot Rozenn

    2008-01-01

    Full Text Available Colonoscopy is currently one of the best methods to detect colorectal cancer. Nowadays, one of the widely used colonoscopes has a monochrome chipset recording the color components successively at 60 Hz, which are merged into one color video stream. Misalignments of the channels occur each time the camera moves, and this artefact impedes both online visual inspection by doctors and offline computer analysis of the image data. We propose to correct this artefact by first equalizing the color channels and then performing a robust camera motion estimation and compensation.
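
    One way to realize the channel compensation described above is to register the red and blue channels onto the green one with a global motion model; a hedged sketch using OpenCV's ECC alignment (the paper's own robust estimator may differ):

        import cv2
        import numpy as np

        def align_channel(moving, fixed):
            # Estimate a translation registering one channel onto another by
            # maximizing the ECC criterion, then resample the moving channel
            warp = np.eye(2, 3, dtype=np.float32)
            criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
            _, warp = cv2.findTransformECC(fixed.astype(np.float32),
                                           moving.astype(np.float32),
                                           warp, cv2.MOTION_TRANSLATION, criteria)
            h, w = fixed.shape
            return cv2.warpAffine(moving, warp, (w, h),
                                  flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

        # b, g, r = cv2.split(frame); merging g with align_channel(r, g) and
        # align_channel(b, g) rebuilds an aligned color frame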

  11. Three-dimensional camera

    Science.gov (United States)

    Bothe, Thorsten; Gesierich, Achim; Legarda-Saenz, Ricardo; Jueptner, Werner P. O.

    2003-05-01

    Industrial and multimedia applications need cost-effective, compact and flexible 3D profiling instruments. In the talk we will show the principle of, applications for, and results from a new miniaturized 3D profiling system for macroscopic scenes. The system uses a compact housing and is usable like a camera with minimal stabilization such as a tripod. The system is based on the common fringe projection technique. Camera and projector are assembled with parallel optical axes having coplanar projection and imaging planes. The distance between their axes is comparable to the distance between the human eyes, giving altogether a complete system of 21×20×11 cm size that allows measuring high-gradient objects like the interior of tubes. The fringe projector uses an LCD, which enables fast and flexible pattern projection. Camera and projector have a short focal length and a high system aperture as well as a large depth of focus. Thus, objects can be measured from a shorter distance compared to common systems (e.g. 1 m sized objects at 80 cm distance). Actually, objects with diameters up to 4 m can be profiled, because the set-up allows working with a completely opened aperture combined with bright lamps, giving a large amount of available light and a high signal-to-noise ratio. Normally a small basis has the disadvantage of reduced sensitivity. We investigated methods to compensate for the reduced sensitivity via the setup and enhanced evaluation methods. For measurement we use synthetic wavelengths. The developed algorithms are completely adaptable concerning the user's needs for speed and accuracy. The 3D camera is built from low-cost components, is robust and nearly handheld, and delivers insights also into difficult technical objects like tubes and interior volumes. Besides the realized high-resolution phase measurement, the system calibration is an important task for usability. While calibrating with common photogrammetric models (which are typically used for actual fringe projection systems) problems were found that…
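
    The abstract does not spell out the fringe evaluation; the textbook 4-step phase-shifting formula (shifts of 90 degrees) commonly used with LCD fringe projectors gives the wrapped phase, and two fringe periods can be combined into the synthetic wavelength mentioned above:

        import numpy as np

        def wrapped_phase(i1, i2, i3, i4):
            # Four intensity images with 90-degree phase shifts
            return np.arctan2(i4 - i2, i1 - i3)       # wrapped to (-pi, pi]

        def synthetic_phase(phi_a, phi_b):
            # Beat phase of two wrapped phases measured at different fringe
            # periods; its effective (synthetic) wavelength is much longer,
            # easing unwrapping at the cost of sensitivity
            return np.mod(phi_a - phi_b, 2 * np.pi)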

  12. Digital camera in ophthalmology

    Directory of Open Access Journals (Sweden)

    Ashish Mitra

    2015-01-01

    Full Text Available Ophthalmology is an expensive field and imaging is an indispensable modality in it; in developing countries, including India, it is not possible for every ophthalmologist to afford a slit-lamp photography unit. We here present our experience of slit-lamp photography using a digital camera. Good-quality pictures of anterior and posterior segment disorders were captured using readily available devices. It can be used as a good teaching tool for residents learning ophthalmology and can also be a method to document lesions, which at many times is necessary for medicolegal purposes. It is a technique which is simple and inexpensive, and which has a short learning curve.

  13. REAL TIME SPEED ESTIMATION FROM MONOCULAR VIDEO

    Directory of Open Access Journals (Sweden)

    M. S. Temiz

    2012-07-01

    Full Text Available In this paper, detailed studies performed while developing a real-time system for surveillance of traffic flow, which uses monocular video cameras to find the speeds of vehicles for safe travelling, are presented. We assume that the studied road segment is planar and straight, the camera is tilted downward from a bridge, and the length of one line segment in the image is known. In order to estimate the speed of a moving vehicle from a video camera, rectification of the video images is performed to eliminate perspective effects, and then the region of interest (ROI) is determined for tracking the vehicles. Velocity vectors of a sufficient number of reference points are identified on the image of the vehicle in each video frame. For this purpose a sufficient number of points from the vehicle is selected, and these points must be accurately tracked on at least two successive video frames. In the second step, by using the displacement vectors of the tracked points and the elapsed time, the velocity vectors of those points are computed. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. Then the magnitudes of the computed vectors in image space are transformed to object space to find the absolute values of these magnitudes. The accuracy of the estimated speed is approximately ±1–2 km/h. In order to solve the real-time speed estimation problem, the authors have written a software system in the C++ programming language. This software system has been used for all of the computations and test applications.
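
    A hedged sketch of the rectification-plus-tracking arithmetic the abstract describes (the point coordinates, road dimensions and frame interval below are placeholders; the authors' C++ system is not reproduced here):

        import cv2
        import numpy as np

        # Four image corners of a road rectangle and their metric coordinates;
        # one known segment length fixes the scale, as in the paper
        img_pts = np.float32([[420, 710], [860, 705], [800, 420], [470, 425]])
        road_pts = np.float32([[0, 0], [3.5, 0], [3.5, 30.0], [0, 30.0]])  # metres
        H = cv2.getPerspectiveTransform(img_pts, road_pts)

        def speed_kmh(p_prev, p_curr, dt):
            # Map two tracked image positions of a vehicle point to road
            # coordinates and convert displacement over dt seconds to km/h
            pts = np.float32([[p_prev, p_curr]])
            ground = cv2.perspectiveTransform(pts, H)[0]
            metres = np.linalg.norm(ground[1] - ground[0])
            return metres / dt * 3.6

        print(speed_kmh((640, 600), (642, 560), 1 / 25))   # illustrative values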

  14. Installing the ALICE detector

    CERN Multimedia

    Maximilien Brice

    2006-01-01

    The huge iron yoke in the cavern at Point 2 in the LHC tunnel is prepared for the installation of the ALICE experiment. The yoke is being reused from the previous L3 experiment that was located at the same point during the LEP project from 1989 to 2000. ALICE will be inserted piece by piece into the cradle where it will be used to study collisions between two beams of lead ions.

  15. Installation Strategic Planning Guidebook

    Science.gov (United States)

    2012-05-01

    Scattered excerpts from the guidebook: it recommends a project plan, probably in the form of a Gantt chart, that planning team members can reference as needed, together with an organization schedule of events; its reading list includes Strategy Safari: A Guided Tour Through the Wilds of Strategic Management by Henry Mintzberg, Bruce Ahlstrand, and Joseph Lampel; and an appendix agenda covers the Gantt chart, resourcing the pre-work, a recap, and follow-up actions.

  16. First ALICE detectors installed!

    CERN Multimedia

    2006-01-01

    Detectors to track down penetrating muon particles are the first to be placed in their final position in the ALICE cavern. The ALICE muon spectrometer: in the foreground the trigger chamber is positioned in front of the muon wall, with the dipole magnet in the background. After the impressive transport of its dipole magnet, ALICE has begun to fill the spectrometer with detectors. In mid-July, the ALICE muon spectrometer team achieved important milestones with the installation of the trigger and the tracking chambers of the muon spectrometer. They are the first detectors to be installed in their final position in the cavern. All of the eight half-planes of the RPCs (resistive plate chambers) have been installed in their final position behind the muon filter. The role of the trigger detector is to select events containing a muon pair coming, for instance, from the decay of J/ψ or Υ resonances. The selection is made on the transverse momentum of the two individual muons. The internal parts of the RPCs, made o…

  17. Human recognition at a distance in video

    CERN Document Server

    Bhanu, Bir

    2010-01-01

    Most biometric systems employed for human recognition require physical contact with, or close proximity to, a cooperative subject. Far more challenging is the ability to reliably recognize individuals at a distance, when viewed from an arbitrary angle under real-world environmental conditions. Gait and face data are the two biometrics that can be most easily captured from a distance using a video camera. This comprehensive and logically organized text/reference addresses the fundamental problems associated with gait and face-based human recognition, from color and infrared video data that are

  18. A video rate laser scanning confocal microscope

    Science.gov (United States)

    Ma, Hongzhou; Jiang, James; Ren, Hongwu; Cable, Alex E.

    2008-02-01

    A video-rate laser scanning microscope was developed as an imaging engine to integrate with other photonic building blocks to fulfill various microscopic imaging applications. The system is equipped with a diode laser source, a resonant scanner, a galvo scanner, control electronics and a computer loaded with data acquisition boards and imaging software. Based on an open-frame design, the system can be combined with various optics to perform the functions of fluorescence confocal microscopy, multi-photon microscopy and backscattering confocal microscopy. Mounted to the camera port, it allows a traditional microscope to obtain confocal images at video rate. In this paper, we describe the design principle and demonstrate examples of applications.

  19. Compact 3D camera

    Science.gov (United States)

    Bothe, Thorsten; Osten, Wolfgang; Gesierich, Achim; Jueptner, Werner P. O.

    2002-06-01

    A new, miniaturized fringe projection system is presented which has a size and handling that approximate those of common 2D cameras. The system is based on the fringe projection technique. A miniaturized fringe projector and camera are assembled into a housing of 21×20×11 cm size with a triangulation basis of 10 cm. The advantage of the small triangulation basis is the possibility to measure difficult objects with high gradients. Normally a small basis has the disadvantage of reduced sensitivity. We investigated methods to compensate for the reduced sensitivity via the setup and enhanced evaluation methods. Special hardware issues are a high-quality, bright light source (and components to handle the high luminous flux), adapted optics to gain a large aperture angle, and a focus scan unit to increase the usable measurement volume. Adaptable synthetic wavelengths and integration times were used to increase the measurement quality and allow robust measurements that are adaptable to the desired speed and accuracy. Algorithms were developed to generate automatic focus positions to completely cover extended measurement volumes. Principles, setup, measurement examples and applications are shown.

  20. Mars Science Laboratory Engineering Cameras

    Science.gov (United States)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024×1024 imaging region and red/near-IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.
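
    A quick cross-check of the quoted Navcam pixel scale under a small-angle approximation (the flight value of 0.82 mrad/pixel also reflects optical design details not modeled here):

        import math

        fov_mrad = math.radians(45) * 1000.0   # 45-degree FOV in milliradians
        print(f'{fov_mrad / 1024:.2f} mrad/pixel')   # ~0.77 mrad/pixel across 1024 px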

  1. Impact of New Camera Technologies on Discoveries in Cell Biology.

    Science.gov (United States)

    Stuurman, Nico; Vale, Ronald D

    2016-08-01

    New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, when he figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day by a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts in light microscopy. © 2016 Marine Biological Laboratory.

  2. Classifying smoke in laparoscopic videos using SVM

    Directory of Open Access Journals (Sweden)

    Alshirbaji Tamer Abdulbaki

    2017-09-01

    Full Text Available Smoke in laparoscopic videos usually appears due to the use of electrocautery when cutting or coagulating tissues. Therefore, detecting smoke can be used for event-based annotation in laparoscopic surgeries by retrieving the events associated with the electrocauterization. Furthermore, smoke detection can also be used for automatic smoke removal. However, detecting smoke in laparoscopic video is challenging because of the changeability of smoke patterns, the moving camera and the varying lighting conditions. In this paper, we present a video-based smoke detection algorithm to detect smoke of different densities, from fog-like to low and high density, in laparoscopic videos. The proposed method extracts various visual features from the laparoscopic images and provides them to a support vector machine (SVM) classifier. The features are based on the motion, colour and texture patterns of the smoke. We validated our algorithm experimentally on four laparoscopic cholecystectomy videos. These four videos were manually annotated by labelling every frame as a smoke or non-smoke frame. The algorithm was applied to the videos using different feature combinations for classification. Experimental results show that the combination of all proposed features gives the best classification performance. The overall accuracy (i.e. correctly classified frames) is around 84%, while the sensitivity (i.e. correctly detected smoke frames) and the specificity (i.e. correctly detected non-smoke frames) are 89% and 80%, respectively.
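
    A minimal sketch of a frame-level pipeline in the spirit of this record: simple motion, colour and texture features computed per frame and fed to an SVM. The concrete features below are simplified stand-ins, not the authors' exact feature set:

    ```python
    import cv2
    import numpy as np
    from sklearn.svm import SVC

    def frame_features(prev_gray, gray, bgr):
        """Per-frame features: motion energy, mean saturation, texture sharpness."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        motion = float(np.mean(cv2.absdiff(prev_gray, gray)))      # smoke drifts diffusely
        saturation = float(np.mean(hsv[:, :, 1]))                  # smoke desaturates colours
        texture = float(np.var(cv2.Laplacian(gray, cv2.CV_64F)))   # smoke blurs fine detail
        return [motion, saturation, texture]

    # X: feature vectors from annotated frames, y: 1 = smoke, 0 = non-smoke
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    # clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
    ```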

  3. 13 point video tape quality guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Gaunt, R.

    1997-05-01

    Until high definition television (ATV) arrives, in the U.S. we must still contend with the National Television Systems Committee (NTSC) video standard (or PAL or SECAM, depending on your country). NTSC, a 40-year-old standard designed for transmission of color video camera images over a small bandwidth, is not well suited for the sharp, full-color images that today's computers are capable of producing. PAL and SECAM also suffer from many of NTSC's problems, but to varying degrees. Video professionals, when working with computer graphic (CG) images, use two monitors: a computer monitor for producing CGs and an NTSC monitor to view how a CG will look on video. More often than not, the NTSC image will differ significantly from the CG image, and outputting it to NTSC as an artist works enables him or her to see the image as others will see it. Below are thirteen guidelines designed to increase the quality of computer graphics recorded onto video tape. Viewing your work in NTSC and attempting to follow the tips below will enable you to create higher quality videos. No video is perfect, so don't expect to abide by every guideline every time.

  4. Fast-camera imaging on the W7-X stellarator

    Science.gov (United States)

    Ballinger, S. B.; Terry, J. L.; Baek, S. G.; Tang, K.; Grulke, O.

    2017-10-01

    Fast cameras recording in the visible range have been used to study filamentary ("blob") edge turbulence in tokamak plasmas, revealing that emissive filaments aligned with the magnetic field can propagate perpendicular to it at speeds on the order of 1 km/s in the SOL or private flux region. The motion of these filaments has been studied in several tokamaks, including MAST, NSTX, and Alcator C-Mod. Filaments were also observed in the W7-X Stellarator using fast cameras during its initial run campaign. For W7-X's upcoming 2017-18 run campaign, we have installed a Phantom V710 fast camera with a view of the machine cross section and part of a divertor module in order to continue studying edge and divertor filaments. The view is coupled to the camera via a coherent fiber bundle. The Phantom camera is able to record at up to 400,000 frames per second and has a spatial resolution of roughly 2 cm in the view. A beam-splitter is used to share the view with a slower machine-protection camera. Stepping-motor actuators tilt the beam-splitter about two orthogonal axes, making it possible to frame user-defined sub-regions anywhere within the view. The diagnostic has been prepared to be remotely controlled via MDSplus. The MIT portion of this work is supported by US DOE award DE-SC0014251.

  5. Making Sure What You See Is What You Get: Digital Video Technology and the Preparation of Teachers of Elementary Science

    Science.gov (United States)

    Bueno de Mesquita, Paul; Dean, Ross F.; Young, Betty J.

    2010-01-01

    Advances in digital video technology create opportunities for more detailed qualitative analyses of actual teaching practice in science and other subject areas. User-friendly digital cameras and highly developed, flexible video-analysis software programs have made the tasks of video capture, editing, transcription, and subsequent data analysis…

  6. Choreographing the Frame: A Critical Investigation into How Dance for the Camera Extends the Conceptual and Artistic Boundaries of Dance

    Science.gov (United States)

    Preston, Hilary

    2006-01-01

    This essay investigates the collaboration between dance and choreographic practice and film/video medium in a contemporary context. By looking specifically at dance made for the camera and the proliferation of dance-film/video, critical issues will be explored that have surfaced in response to this burgeoning form. Presenting a view of avant-garde…

  7. Feasibility Study of Utilization of Action Camera, GoPro Hero 4, Google Glass, and Panasonic HX-A100 in Spine Surgery.

    Science.gov (United States)

    Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong

    2017-02-15

    A study of the feasibility of commercially available action cameras for recording video of spine surgery. Recent innovations in wearable action cameras with high-definition video recording enable surgeons to use a camera during an operation with ease and without high costs. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, as well as the Panasonic HX-A100 action camera and GoPro; this study is the first such report for spine surgery. Three commercially available cameras were tested: GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery, posterior lumbar laminectomy and fusion, was selected for video recording. The three cameras were used by one surgeon and video was recorded throughout the operation. The comparison was made from the perspectives of human factors, specifications, and video quality. The most convenient and lightweight device to wear and hold throughout the long operation time was Google Glass. Regarding image quality, all devices except Google Glass supported HD format, and GoPro uniquely offers 2.7K or 4K resolution; the quality of video resolution was best with GoPro. Regarding field of view (FOV), GoPro can adjust the point of interest and FOV according to the surgery, and its narrow-FOV option was the best for recording video clips to share. Google Glass has further potential through application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming for an audience, but only Google Glass has a built-in two-way communication feature. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in the field of spinal surgery, and to broadcast surgery as the devices and associated programs develop in the future. Level of Evidence: N/A.

  8. Dual camera system for acquisition of high resolution images

    Science.gov (United States)

    Papon, Jeremie A.; Broussard, Randy P.; Ives, Robert W.

    2007-02-01

    Video surveillance is ubiquitous in modern society, but surveillance cameras are severely limited in utility by their low resolution. With this in mind, we have developed a system that can autonomously take high resolution still frame images of moving objects. To do this, we combine a low resolution video camera and a high resolution still frame camera mounted on a pan/tilt mount. To determine what should be photographed (objects of interest), we employ a hierarchical method which first separates foreground from background using a temporal median filtering technique. We then use a feed-forward neural network classifier on the foreground regions to determine whether the regions contain the objects of interest. This is done over several frames, and a motion vector is deduced for the object. The pan/tilt mount then focuses the high resolution camera on the next predicted location of the object, and an image is acquired. All components are controlled through a single MATLAB graphical user interface (GUI). The final system we present will be able to detect multiple moving objects simultaneously, track them, and acquire high resolution images of them. Results will demonstrate tracking and imaging of varying numbers of objects moving at different speeds.
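
    A minimal sketch of the temporal median-filtering step used here for foreground/background separation; the buffer length and threshold are illustrative assumptions, not the paper's values:

    ```python
    import numpy as np

    def foreground_mask(frames, threshold=25):
        """frames: (N, H, W) uint8 grayscale buffer; returns a mask for the newest frame."""
        stack = np.asarray(frames)
        background = np.median(stack, axis=0)                       # static-scene estimate
        diff = np.abs(stack[-1].astype(np.int16) - background.astype(np.int16))
        return diff > threshold                                     # True where motion occurred
    ```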

  9. ACCURACY EVALUATION OF STEREO CAMERA SYSTEMS WITH GENERIC CAMERA MODELS

    Directory of Open Access Journals (Sweden)

    D. Rueß

    2012-07-01

    Full Text Available In the last decades the consumer and industrial market for non-projective cameras has been growing notably. This has led to the development of camera description models other than the pinhole model and their employment in mostly homogeneous camera systems. Heterogeneous camera systems (for instance, combining Fisheye and Catadioptric cameras) can also easily be thought of for real applications. However, it has not been quite clear how accurate stereo vision with these cameras and models can be. In this paper, different accuracy aspects are addressed by analytical inspection, numerical simulation, as well as real image data evaluation. This analysis is generic for any camera projection model, although only polynomial and rational projection models are used for distortion-free, Catadioptric and Fisheye lenses. Note that this is different from polynomial and rational radial distortion models, which have been addressed extensively in the literature. For single camera analysis it turns out that point features towards the image sensor borders are significantly more accurate than in the center regions of the sensor. For heterogeneous two-camera systems it turns out that reconstruction accuracy decreases significantly towards the image borders, as different projective distortions occur.

  10. Status of the Dark Energy Survey Camera (DECam) Project

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, Brenna L.; Abbott, Timothy M.C.; Angstadt, Robert; Annis, Jim; Antonik, Michelle L.; Bailey, Jim; Ballester, Otger; Bernstein, Joseph P.; Bernstein, Rebecca; Bonati, Marco; Bremer, Gale; /Fermilab /Cerro-Tololo InterAmerican Obs. /ANL /Texas A-M /Michigan U. /Illinois U., Urbana /Ohio State U. /University Coll. London /LBNL /SLAC /IFAE

    2012-06-29

    The Dark Energy Survey Collaboration has completed construction of the Dark Energy Camera (DECam), a 3 square degree, 570 Megapixel CCD camera which will be mounted on the Blanco 4-meter telescope at CTIO. DECam will be used to perform the 5000 sq. deg. Dark Energy Survey with 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. All components of DECam have been shipped to Chile and post-shipping checkout finished in Jan. 2012. Installation is in progress. A summary of lessons learned and an update of the performance of DECam and the status of the DECam installation and commissioning will be presented.

  11. Status of the Dark Energy Survey Camera (DECam) project

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, Brenna L.; McLean, Ian S.; Ramsay, Suzanne K.; Abbott, Timothy M. C.; Angstadt, Robert; Takami, Hideki; Annis, Jim; Antonik, Michelle L.; Bailey, Jim; Ballester, Otger; Bernstein, Joseph P.; Bernstein, Rebecca A.; Bonati, Marco; Bremer, Gale; Briones, Jorge; Brooks, David; Buckley-Geer, Elizabeth J.; Campa, Juila; Cardiel-Sas, Laia; Castander, Francisco; Castilla, Javier; Cease, Herman; Chappa, Steve; Chi, Edward C.; da Costa, Luis; DePoy, Darren L.; Derylo, Gregory; de Vincente, Juan; Diehl, H. Thomas; Doel, Peter; Estrada, Juan; Eiting, Jacob; Elliott, Anne E.; Finley, David A.; Flores, Rolando; Frieman, Josh; Gaztanaga, Enrique; Gerdes, David; Gladders, Mike; Guarino, V.; Gutierrez, G.; Grudzinski, Jim; Hanlon, Bill; Hao, Jiangang; Holland, Steve; Honscheid, Klaus; Huffman, Dave; Jackson, Cheryl; Jonas, Michelle; Karliner, Inga; Kau, Daekwang; Kent, Steve; Kozlovsky, Mark; Krempetz, Kurt; Krider, John; Kubik, Donna; Kuehn, Kyler; Kuhlmann, Steve E.; Kuk, Kevin; Lahav, Ofer; Langellier, Nick; Lathrop, Andrew; Lewis, Peter M.; Lin, Huan; Lorenzon, Wolfgang; Martinez, Gustavo; McKay, Timothy; Merritt, Wyatt; Meyer, Mark; Miquel, Ramon; Morgan, Jim; Moore, Peter; Moore, Todd; Neilsen, Eric; Nord, Brian; Ogando, Ricardo; Olson, Jamieson; Patton, Kenneth; Peoples, John; Plazas, Andres; Qian, Tao; Roe, Natalie; Roodman, Aaron; Rossetto, B.; Sanchez, E.; Soares-Santos, Marcelle; Scarpine, Vic; Schalk, Terry; Schindler, Rafe; Schmidt, Ricardo; Schmitt, Richard; Schubnell, Mike; Schultz, Kenneth; Selen, M.; Serrano, Santiago; Shaw, Terri; Simaitis, Vaidas; Slaughter, Jean; Smith, R. Christopher; Spinka, Hal; Stefanik, Andy; Stuermer, Walter; Sypniewski, Adam; Talaga, R.; Tarle, Greg; Thaler, Jon; Tucker, Doug; Walker, Alistair R.; Weaverdyck, Curtis; Wester, William; Woods, Robert J.; Worswick, Sue; Zhao, Allen

    2012-09-24

    The Dark Energy Survey Collaboration has completed construction of the Dark Energy Camera (DECam), a 3 square degree, 570 Megapixel CCD camera which will be mounted on the Blanco 4-meter telescope at CTIO. DECam will be used to perform the 5000 sq. deg. Dark Energy Survey with 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. All components of DECam have been shipped to Chile and post-shipping checkout finished in Jan. 2012. Installation is in progress. A summary of lessons learned and an update of the performance of DECam and the status of the DECam installation and commissioning will be presented.

  12. Gyroscope and visual fusion solution for digital video stabilization

    Science.gov (United States)

    Wei, Shanshan; He, Zhiqiang; Xie, Wei

    2016-09-01

    A gyroscope and visual fusion solution for digital video stabilization (DVS) is presented. The solution classifies DVS-related motions into three types: the object motion (OM) in the world space, the camera motion in the camera space (CS), and the pixel motion in the image space (IS). The camera rotation is estimated by the gyroscope and smoothed in the CS, while the camera translation is compounded with the OM and smoothed together with it in the IS. The main contributions of this paper lie in two aspects: (1) an inertial and visual fusion method is proposed that stabilizes both rotational and translational jitter, and (2) the fusion method is simple and fast in computation, making it suitable for smart terminals. Experimental results show that the proposed solution performs well in video stabilization.
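
    A sketch of the rotation-smoothing idea: low-pass the gyroscope-derived camera angles and counter-rotate each frame by the jitter component (raw minus smoothed). This roll-only, small-angle 2D approximation is for illustration; the paper's full method also handles translation jointly with object motion in the image space:

    ```python
    import numpy as np
    import cv2

    def lowpass(angles, alpha=0.9):
        """Exponential smoothing of a 1D array of camera roll angles (radians)."""
        out = np.empty(len(angles), dtype=float)
        acc = float(angles[0])
        for i, a in enumerate(angles):
            acc = alpha * acc + (1 - alpha) * float(a)              # causal low-pass filter
            out[i] = acc
        return out

    def stabilize_roll(frame, raw_roll, smooth_roll):
        """Counter-rotate one frame by the jitter component (raw minus smoothed)."""
        h, w = frame.shape[:2]
        M = cv2.getRotationMatrix2D((w / 2, h / 2),
                                    np.degrees(raw_roll - smooth_roll), 1.0)
        return cv2.warpAffine(frame, M, (w, h))
    ```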

  13. Use and validation of a mirrorless digital single-lens reflex camera for recording of vitreoretinal surgeries in high definition

    Directory of Open Access Journals (Sweden)

    Sumeet Khanduja

    2018-01-01

    Full Text Available Purpose: The purpose of this study is to describe the use of a commercial digital single-lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing the mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos, except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching.

  14. Disembodied perspective: third-person images in GoPro videos

    National Research Council Canada - National Science Library

    Bédard, Philippe

    2015-01-01

    A technical analysis of GoPro videos, focusing on the production of a third-person perspective created when the camera is turned back on the user, and the sense of disorientation that results for the spectator...

  15. An Automatic Video Meteor Observation Using UFO Capture at the Showa Station

    Science.gov (United States)

    Fujiwara, Y.; Nakamura, T.; Ejiri, M.; Suzuki, H.

    2012-05-01

    The goal of our study is to clarify meteor activities in the southern hemisphere by continuous optical observations with video cameras with automatic meteor detection and recording at Syowa station, Antarctica.

  16. GPM GROUND VALIDATION TWO-DIMENSIONAL VIDEO DISDROMETER (2DVD) NSSTC V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The Two-dimensional Video Disdrometer (2DVD) uses two high speed line scan cameras which provide continuous measurements of size distribution, shape and fall...

  17. CARVE: In-flight Videos from the CARVE Aircraft, Alaska, 2012-2015

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset contains videos captured by a camera mounted on the CARVE aircraft during airborne campaigns over the Alaskan and Canadian Arctic for the Carbon in...

  18. Photorealistic image synthesis and camera validation from 2D images

    Science.gov (United States)

    Santos Ferrer, Juan C.; González Chévere, David; Manian, Vidya

    2014-06-01

    This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method presented here allows us to reconstruct the shape of simple objects and more complex ones from multiple 2D images, including infrared and digital images from indoor scenes and digital images only from outdoor scenes, and then add the reconstructed object to the simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method used different camera settings and explores different properties in the reconstructions of the scenes, including light, color, texture, shapes and different views. To achieve the highest possible resolution, it was necessary to extract partial textures from visible surfaces. To recover the 3D shapes and depth of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depth of more complex objects the triangulation method was used; for this, the intrinsic and extrinsic parameters were calculated using geometric camera calibration. The methods mentioned above were implemented with the Matlab tool. The technique presented here also lets us simulate short simple videos, by reconstructing a sequence of multiple scenes of the video separated by small margins of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used. Low-bandwidth perception-based features include edges and motion.
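
    A hedged sketch of the triangulation step: with intrinsic and extrinsic parameters from geometric camera calibration folded into two 3x4 projection matrices, matched pixels can be lifted to 3D points. OpenCV is used here as an illustrative stand-in for the paper's Matlab implementation:

    ```python
    import numpy as np
    import cv2

    def triangulate(P1, P2, pts1, pts2):
        """P1, P2: (3, 4) projection matrices; pts1, pts2: (2, N) matched pixels."""
        X_h = cv2.triangulatePoints(P1, P2, pts1.astype(np.float64),
                                    pts2.astype(np.float64))        # homogeneous 4xN
        return (X_h[:3] / X_h[3]).T                                 # (N, 3) Euclidean points
    ```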

  19. Virtual displays for 360-degree video

    Science.gov (United States)

    Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.

    2012-03-01

    In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360- degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.

  20. Microminiature thermocouple monitors own installation

    Science.gov (United States)

    Garrett, A. J.; Sellers, J. P., Jr.

    1966-01-01

    Microminiature thermocouple makes precision gas sidewall temperature readings inside large thrust chambers. It is installed by a technique whereby the sensor monitors its own installation to guard against thermal damage to the thermocouple and to ensure minimum disturbance to chamber surfaces.

  1. Humanizing the Installation of Microcomputers.

    Science.gov (United States)

    Ansfield, Paul J.

    1982-01-01

    This discussion of the installation of microcomputing tools in organizations and institutions describes the effects of change, precautions for management, procedures to support inexperienced employees, and directives for microcomputer installation. Nine references are listed. (EJS)

  2. Performance of buried pipe installation.

    Science.gov (United States)

    2010-05-01

    The purpose of this study is to determine the effects of geometric and mechanical parameters : characterizing the soil structure interaction developed in a buried pipe installation located under : roads/highways. The drainage pipes or culverts instal...

  3. Structure-From-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-Of-View in an Urban Environment

    Science.gov (United States)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and often no marked reference points are available in environments that are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between

  4. STRUCTURE-FROM-MOTION FOR CALIBRATION OF A VEHICLE CAMERA SYSTEM WITH NON-OVERLAPPING FIELDS-OF-VIEW IN AN URBAN ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    A. Hanel

    2017-05-01

    Full Text Available Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and often no marked reference points are available in environments that are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle
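
    A minimal stand-in for the per-frame pose estimation from ground control points, using OpenCV's solvePnP (the paper's actual software is not specified, and its bundle adjustment over all frames is omitted here; names are illustrative):

    ```python
    import numpy as np
    import cv2

    def camera_pose(gcp_xyz, gcp_px, K, dist=None):
        """gcp_xyz: (N, 3) control points in the vehicle frame; gcp_px: (N, 2) pixels."""
        ok, rvec, tvec = cv2.solvePnP(gcp_xyz.astype(np.float32),
                                      gcp_px.astype(np.float32), K, dist)
        R, _ = cv2.Rodrigues(rvec)                                  # world-to-camera rotation
        camera_center = (-R.T @ tvec).ravel()                       # camera position in world
        return R, camera_center
    ```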

  5. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  6. Playing with the Camera - Creating with Each Other

    DEFF Research Database (Denmark)

    Vestergaard, Vitus

    2015-01-01

    Many contemporary museums try to involve users as active participants in a range of new ways. One way to engage young people with visual culture is through exhibits where users produce their own videos. Since museum experiences are largely social in nature and based on the group as a social unit, it is imperative to investigate how museum users in a group create videos and engage with each other and the exhibits. Based on research on young users creating videos in the Media Mixer, this article explores what happens during the creative process in front of a camera. Drawing upon theories of museology, media, learning and creativity, the article discusses how to operationalize and make sense of seemingly chaotic or banal production processes in a museum.

  7. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    Science.gov (United States)

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  8. Three-dimensional video presentation of microsurgery by the cross-eyed viewing method using a high-definition video system.

    Science.gov (United States)

    Terakawa, Yuzo; Ishibashi, Kenichi; Goto, Takeo; Ohata, Kenji

    2011-01-01

    Three-dimensional (3-D) video recording of microsurgery is a more promising tool for presentation and education of microsurgery than conventional two-dimensional video systems, but has not been widely adopted partly because 3-D image processing of previous 3-D video systems is complicated and observers without optical devices cannot visualize the 3-D image. A new technical development for 3-D video presentation of microsurgery is described. Microsurgery is recorded with a microscope equipped with a single high-definition (HD) video camera. This 3-D video system records the right- and left-eye views of the microscope simultaneously as single HD data with the use of a 3-D camera adapter: the right- and left-eye views of the microscope are displayed separately on the right and left sides, respectively. The operation video is then edited with video editing software so that the right-eye view is displayed on the left side and left-eye view is displayed on the right side. Consequently, a 3-D video of microsurgery can be created by viewing the edited video by the cross-eyed stereogram viewing method without optical devices. The 3-D microsurgical video provides a more accurate view, especially with regard to depth, and a better understanding of microsurgical anatomy. Although several issues are yet to be addressed, this 3-D video system is a useful method of recording and presenting microsurgery for 3-D viewing with currently available equipment, without optical devices.
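
    The editing step described above amounts to swapping the two half-frames of the side-by-side recording so that the right-eye view sits on the left and vice versa, as the cross-eyed viewing method requires. A minimal sketch with NumPy on one frame (the authors used video editing software rather than code; names are illustrative):

    ```python
    import numpy as np

    def to_cross_eyed(frame):
        """frame: (H, W, 3) side-by-side stereo image with the left-eye view on the left."""
        h, w = frame.shape[:2]
        left, right = frame[:, :w // 2], frame[:, w // 2:]
        return np.hstack([right, left])                             # swap halves for cross-eyed viewing
    ```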

  9. Samus Toroid Installation Fixture

    Energy Technology Data Exchange (ETDEWEB)

    Stredde, H.; /Fermilab

    1990-06-27

    The SAMUS (Small Angle Muon System) toroids have been designed and fabricated in the USSR and delivered to D0 ready for installation into the D0 detector. These toroids will be installed into the aperture of the EF's (End Toroids). The aperture in the EF's is 72-inch vertically and 66-inch horizontally. The Samus toroid is 70-inch vertically by 64-inch horizontally by 66-inch long and weighs approximately 38 tons. The Samus toroid has a 20-inch by 20-inch aperture in the center and it is through this aperture that the lift fixture must fit. The toroid must be 'threaded' through the EF aperture. Further, the Samus toroid coils are wound about the vertical portion of the aperture and thus limit the area where a lift fixture can make contact and not damage the coils. The fixture is designed to lift along a surface adjacent to the coils, but with clearance to the coil and with contact to the upper steel block of the toroid. The lift and installation will be done with the 50 ton crane at D0. The fixture was tested by lifting the Samus Toroid 2-inch off the floor and holding the weight for 10 minutes. Deflection was as predicted by the design calculations. Enclosed are sketches of the fixture and its relation to both Toroids (Samus and EF), along with hand calculations and a Finite Element Analysis. The FEA work was done by Kay Weber of the Accelerator Engineering Department.

  10. Efficient height measurement method of surveillance camera image.

    Science.gov (United States)

    Lee, Joong; Lee, Eung-Dae; Tark, Hyun-Oh; Hwang, Jin-Woo; Yoon, Do-Young

    2008-05-02

    As surveillance cameras are increasingly installed, their footage is often submitted as evidence of crime, but it yields only scant detail, such as facial features and clothing, due to limited camera performance. Height, however, is relatively insensitive to camera performance. This paper studied a height measurement method using images from a CCTV. The height information was obtained via photogrammetry, including reference points in the photographed area and calculation of the relationship between 3D space and the 2D image through linear and nonlinear calibration. Using this correlation, this paper proposes a height measurement method that projects a 3D virtual ruler onto the image. This method has been proven to offer more stable values within the range of data convergence than existing methods.
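
    A hedged sketch of the projection at the core of the "3D virtual ruler": once calibration yields a 3x4 projection matrix P mapping world points to pixels, tick marks of a vertical ruler standing at the subject's floor position can be drawn into the image. Function names, the z-up world frame, and the ruler parameters are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np

    def project(P, X):
        """P: (3, 4) projection matrix; X: (N, 3) world points -> (N, 2) pixel coordinates."""
        Xh = np.hstack([X, np.ones((len(X), 1))])                   # homogeneous coordinates
        x = (P @ Xh.T).T
        return x[:, :2] / x[:, 2:3]

    def virtual_ruler(P, foot_xyz, height_m=2.0, step_m=0.1):
        """Pixel positions of tick marks on a vertical ruler standing at foot_xyz."""
        heights = np.arange(0.0, height_m + step_m, step_m)
        marks = np.array([[foot_xyz[0], foot_xyz[1], foot_xyz[2] + h] for h in heights])
        return project(P, marks)
    ```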

  11. Real-Time Acquisition and Display of Data and Video

    Science.gov (United States)

    Bachnak, Rafic; Chakinarapu, Ramya; Garcia, Mario; Kar, Dulal; Nguyen, Tien

    2007-01-01

    This paper describes the development of a prototype that takes in an analog National Television System Committee (NTSC) video signal generated by a video camera and data acquired by a microcontroller and displays them in real-time on a digital panel. An 8051 microcontroller is used to acquire the power dissipation of the display panel, the room temperature, and the camera zoom level. The paper describes the major hardware components and shows how they are interfaced into a functional prototype. Test data results are presented and discussed.

  12. Junocam: Juno's Outreach Camera

    Science.gov (United States)

    Hansen, C. J.; Caplinger, M. A.; Ingersoll, A.; Ravine, M. A.; Jensen, E.; Bolton, S.; Orton, G.

    2017-11-01

    Junocam is a wide-angle camera designed to capture the unique polar perspective of Jupiter offered by Juno's polar orbit. Junocam's four-color images include the best spatial resolution ever acquired of Jupiter's cloudtops. Junocam will look for convective clouds and lightning in thunderstorms and derive the heights of the clouds. Junocam will support Juno's radiometer experiment by identifying any unusual atmospheric conditions such as hotspots. Junocam is on the spacecraft explicitly to reach out to the public and share the excitement of space exploration. The public is an essential part of our virtual team: amateur astronomers will supply ground-based images for use in planning, the public will weigh in on which images to acquire, and the amateur image processing community will help process the data.

  13. 4D ANIMATION RECONSTRUCTION FROM MULTI-CAMERA COORDINATES TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    J. P. Jhan

    2016-06-01

    Full Text Available Reservoir dredging issues are important for extending the life of a reservoir. The most effective and cost-reducing approach is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to separate the water, construct the intake of the tunnel inside, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, photogrammetric techniques are adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct 4D animations. In this study, several Australis© coded targets are fixed on the surface of the ETSP for automatic recognition and measurement. The camera orientations are computed by space resection, where the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed: performing a 3D conformal transformation from the coordinates of the cameras, and relative orientation computation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, while the relative orientation computation offers flexibility for dynamic motion analysis, being easier and more efficient.
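
    The 3D conformal transformation mentioned above is a seven-parameter similarity transformation (scale, rotation, translation). A compact sketch estimating it from corresponding points with the SVD-based Kabsch/Umeyama approach, given here as an illustrative alternative to the photogrammetric adjustment in the paper:

    ```python
    import numpy as np

    def conformal_3d(src, dst):
        """src, dst: (N, 3) corresponding points; returns s, R, t with dst ~ s * R @ src + t."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        A, B = src - mu_s, dst - mu_d                               # centered point sets
        U, S, Vt = np.linalg.svd(A.T @ B)                           # cross-covariance SVD
        d = np.sign(np.linalg.det(Vt.T @ U.T))                      # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        s = (S * np.array([1.0, 1.0, d])).sum() / (A ** 2).sum()    # least-squares scale
        t = mu_d - s * (R @ mu_s)
        return s, R, t
    ```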

  14. Compressive Video Acquisition, Fusion and Processing

    Science.gov (United States)

    2010-12-14

    that we can explore in detail exploits the fact that even though each φm is testing a different 2D image slice, the image slices are often related...space-time cube. We related temporal bandwidth to the spatial resolution of the camera and the speed of objects in the scene. We applied our findings to...performed directly on the compressive measurements without requiring a potentially expensive video reconstruction. Accomplishments: In our work exploring

  15. High dynamic range (HDR) virtual bronchoscopy rendering for video tracking

    Science.gov (United States)

    Popa, Teo; Choi, Jae

    2007-03-01

    In this paper, we present the design and implementation of a new rendering method based on high dynamic range (HDR) lighting and exposure control. This rendering method is applied to create video images for a 3D virtual bronchoscopy system. One of the main optical parameters of a bronchoscope's camera is the sensor exposure. The exposure adjustment is needed since the dynamic range of most digital video cameras is narrower than the high dynamic range of real scenes. The dynamic range of a camera is defined as the ratio of the brightest point of an image to the darkest point of the same image where details are present. In a video camera, exposure is controlled by shutter speed and the lens aperture. To create the virtual bronchoscopic images, we first rendered a raw image in absolute units (luminance); then, we simulated exposure by mapping the computed values to the values appropriate for video-acquired images using a tone mapping operator. We generated several images with HDR and others with low dynamic range (LDR), and then compared their quality by applying them to a 2D/3D video-based tracking system. We conclude that images with HDR are closer to real bronchoscopy images than those with LDR, and thus, that HDR lighting can improve the accuracy of image-based tracking.
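
    A minimal sketch of the exposure-simulation step: scale absolute luminance by an exposure factor, compress with a global tone-mapping operator, and gamma-encode for display. The Reinhard-style operator below is an assumed, illustrative choice; the paper does not commit to this exact operator:

    ```python
    import numpy as np

    def tone_map(luminance, exposure=1.0, gamma=2.2):
        """luminance: float array in absolute units; returns an 8-bit display image."""
        L = exposure * luminance                                    # simulated sensor exposure
        L_display = L / (1.0 + L)                                   # compress high dynamic range
        return (255.0 * np.clip(L_display, 0.0, 1.0) ** (1.0 / gamma)).astype(np.uint8)
    ```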

  16. Monitoring wild animal communities with arrays of motion sensitive camera traps

    OpenAIRE

    Kays, Roland; Tilak, Sameer; Kranstauber, Bart; Jansen, Patrick A.; Carbone, Chris; Rowcliffe, Marcus J.; Fountain, Tony; Eggert, Jay; He, Zhihai

    2010-01-01

    Studying animal movement and distribution is of critical importance to addressing environmental challenges including invasive species, infectious diseases, climate and land-use change. Motion sensitive camera traps offer a visual sensor to record the presence of a broad range of species providing location-specific information on movement and behavior. Modern digital camera traps that record video present new analytical opportunities, but also new data management challenges. This paper descri...

  17. The Use of Smart Glasses for Surgical Video Streaming.

    Science.gov (United States)

    Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu

    2017-04-01

    Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.

  18. CNGS Reflector installed

    CERN Multimedia

    2006-01-01

    A major component that will help target the CNGS neutrino beam for its 732km journey through the earth's crust, from CERN to the Gran Sasso laboratory in Italy, has been installed in its final position. The transport of the huge magnetic horn reflector through the CNGS access gallery. A team from CNGS and TS/IC, and the contractors DBS, transported the magnetic horn reflector on 5th December, in a carefully conducted operation that took just under two hours. The reflector is 7m long, 1.6m in diameter and 1.6 tonnes in weight. With only a matter of centimetres to spare on either side, the reflector was transported through the CNGS access gallery, before being installed in the experiment's target chamber. The larger of two magnetic horns, the reflector will help refocus sprays of high energy pions and kaons emitted after a 0.5MW stream of protons from the Super Proton Synchrotron (SPS) strikes nucleons in a graphite target. The horns are toroidal magnetic lenses and work with high pulsed currents: 150 kA f...

  19. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available Embedded video for NEI YouTube Videos: Amblyopia.

  2. Smart sensing surveillance video system

    Science.gov (United States)

    Hsu, Charles; Szu, Harold

    2016-05-01

    An intelligent video surveillance system is able to detect and identify abnormal and alarming situations by analyzing object movement. The Smart Sensing Surveillance Video (S3V) System is proposed to minimize video processing and transmission, thus allowing a fixed number of cameras to be connected to the system, and making it suitable for remote battlefield, tactical, and civilian applications including border surveillance, special force operations, airfield protection, and perimeter and building protection. The S3V System would be more effective if equipped with visual understanding capabilities to detect, analyze, and recognize objects, track motions, and predict intentions. In addition, alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. The S3V System capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. It would be directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.

  3. Blood Pulsation Intensity Video Mapping

    CERN Document Server

    Borges, Pedro Henrique de M

    2016-01-01

    In this study, we make non-invasive, remote, passive measurements of the heart beat frequency and determine the map of blood pulsation intensity in a region of interest (ROI) of skin. The ROI used was the forearm of a volunteer. The method employs a regular video camera and visible light, and the video acquisition takes less than 1 minute. The mean cardiac frequency found in our volunteer was within 1 bpm of the ground-truth value simultaneously obtained via earlobe plethysmography. Using the signals extracted from the video images, we have determined an intensity map for the blood pulsation at the surface of the skin. In this paper we present the experimental and data processing details of the work as well as limitations of the technique.
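
    A sketch of the kind of signal extraction such a measurement typically involves: average the green channel over the skin ROI frame by frame, then take the dominant spectral peak in the plausible cardiac band as the heart rate. This is a common remote-photoplethysmography recipe and an assumption here, not the authors' exact pipeline:

    ```python
    import numpy as np

    def heart_rate_bpm(green_means, fps):
        """green_means: per-frame mean green value over the skin ROI; fps: frame rate."""
        x = np.asarray(green_means, dtype=float)
        x -= x.mean()                                               # remove the DC component
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
        band = (freqs > 0.7) & (freqs < 3.5)                        # ~42-210 beats per minute
        return 60.0 * freqs[band][np.argmax(spectrum[band])]
    ```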

  4. Decommissioning of offshore installations

    Energy Technology Data Exchange (ETDEWEB)

    Oeen, Sigrun; Iversen, Per Erik; Stokke, Reidunn; Nielsen, Frantz; Henriksen, Thor; Natvig, Henning; Dretvik, Oeystein; Martinsen, Finn; Bakke, Gunnstein

    2010-07-01

    New legislation on the handling and storage of radioactive substances came into force 1 January 2011. This version of the report is updated to reflect this new regulation and will therefore in some chapters differ from the Norwegian version (see NEI-NO--1660). The Ministry of the Environment commissioned the Climate and Pollution Agency to examine the environmental impacts associated with the decommissioning of offshore installations (demolition and recycling). This has involved an assessment of the volumes and types of waste material and of decommissioning capacity in Norway now and in the future. This report also presents proposals for measures and instruments to address environmental and other concerns that arise in connection with the decommissioning of offshore installations. At present, Norway has four decommissioning facilities for offshore installations, three of which are currently involved in decommissioning projects. Waste treatment plants of this kind are required to hold permits under the Pollution Control Act. The permit system allows the pollution control authority to tailor the requirements in a specific permit by evaluating conditions and limits for releases of pollutants on a case-to-case basis, and the Act also provides for requirements to be tightened up in line with the development of best available techniques (BAT). The environmental risks posed by decommissioning facilities are much the same as those from process industries and other waste treatment plants that are regulated by means of individual permits. Strict requirements are intended to ensure that environmental and health concerns are taken into account. The review of the four Norwegian decommissioning facilities in connection with this report shows that the degree to which requirements need to be tightened up varies from one facility to another. The permit for the Vats yard is newest and contains the strictest conditions. The Climate and Pollution Agency recommends a number of measures

  5. OPERA goes on camera

    CERN Multimedia

    2007-01-01

    OPERA, the experiment which uses the neutrino beam of CERN's CNGS facility, has delivered its first neutrino "photos". The core of the detector has been commissioned and has produced images of events resulting from neutrino collisions. The reconstruction of the core (a few cubic millimetres!) of a neutrino interaction at OPERA. The neutrino arriving from the left of the image has interacted with the lead of a brick, producing various particles identifiable by their tracks visible in the emulsion. The snapshot is tiny but it was greeted with enthusiasm by the physicists of OPERA. On 2 October, for the first time, the experiment at the Gran Sasso Laboratory in Italy "photographed" an event produced by the beam of neutrinos sent from CERN, 732 kilometres away. One of the 60,000 photosensitive bricks already installed at the heart of the experiment had produced its first particle track. The commissioning of the OPERA experiment began la...

  6. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available This series of five videos was designed ...

  7. Video Conference System that Keeps Mutual Eye Contact Among Participants

    Directory of Open Access Journals (Sweden)

    Masahiko Yahagi

    2011-10-01

    Full Text Available A novel video conference system is developed. Supposing that three people A, B, and C attend the video conference, the proposed system enables eye contact between every pair. Furthermore, when B and C chat, A feels as if B and C were facing each other (eye contact seems to be kept between B and C). In the case of a triangle video conference, each video system is composed of a half mirror, two video cameras, and two monitors. Each participant watches the other participants' images reflected by the half mirror. Cameras are set behind the half mirror. Since each participant's image (face) and the camera position are adjusted to the same direction, eye contact is kept and conversation becomes very natural compared with conventional video conference systems, where participants' eyes do not point at the other participant. When three participants sit at the vertices of an equilateral triangle, eye contact can be kept even for the situation mentioned above (eye contact between B and C from the perspective of A). Eye contact can be kept not only for 2 or 3 participants but for any number of participants as long as they sit at the vertices of a regular polygon.

  8. Multimodal Semantics Extraction from User-Generated Videos

    Directory of Open Access Journals (Sweden)

    Francesco Cricri

    2012-01-01

    Full Text Available User-generated video content has grown tremendously fast, to the point of outpacing professional content creation. In this work we develop methods that analyze contextual information of multiple user-generated videos in order to obtain semantic information about public happenings (e.g., sport and live music events) being recorded in these videos. One of the key contributions of this work is the joint utilization of different data modalities, including data captured by auxiliary sensors during the video recording performed by each user. In particular, we analyze GPS data, magnetometer data, accelerometer data, and video- and audio-content data. We use these data modalities to infer information about the event being recorded, in terms of layout (e.g., stadium), genre, indoor versus outdoor scene, and the main area of interest of the event. Furthermore we propose a method that automatically identifies the optimal set of cameras to be used in a multicamera video production. Finally, we detect the camera users who fall within the field of view of other cameras recording at the same public happening. We show that the proposed multimodal analysis methods perform well on various recordings obtained in real sport events and live music performances.

  9. Video Altimeter and Obstruction Detector for an Aircraft

    Science.gov (United States)

    Delgado, Frank J.; Abernathy, Michael F.; White, Janis; Dolson, William R.

    2013-01-01

    Video-based altimetric and obstruction detection systems for aircraft have been partially developed. The hardware of a system of this type includes a downward-looking video camera, a video digitizer, a Global Positioning System receiver or other means of measuring the aircraft velocity relative to the ground, a gyroscope-based or other attitude-determination subsystem, and a computer running altimetric and/or obstruction-detection software. From the digitized video data, the altimetric software computes the pixel velocity in an appropriate part of the video image and the corresponding angular relative motion of the ground within the field of view of the camera. Then, by use of trigonometric relationships among the aircraft velocity, the attitude of the camera, and the angular relative motion, the software computes the altitude. The obstruction-detection software performs somewhat similar calculations as part of a larger task in which it uses the pixel velocity data from the entire video image to compute a depth map, which can be correlated with a terrain map, showing locations of potential obstructions. The depth map can be used as a real-time hazard display and/or to update an obstruction database.
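
    For a nadir-pointing camera the trigonometric core reduces to a one-line relation: the ground's angular rate equals the image pixel velocity times the per-pixel angle (IFOV), and altitude is ground speed divided by that angular rate. A worked sketch (names illustrative; off-nadir attitudes require the attitude correction the text mentions):

    ```python
    def altitude_m(ground_speed_mps, pixel_velocity_pps, ifov_rad_per_px):
        """Altitude for a nadir-pointing camera: h = v / omega."""
        omega = pixel_velocity_pps * ifov_rad_per_px                # angular rate of the ground, rad/s
        return ground_speed_mps / omega

    # e.g. 50 m/s ground speed, 100 px/s image flow, 1 mrad/pixel -> 500 m altitude
    ```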

  10. Decentralized tracking of humans using a camera network

    Science.gov (United States)

    Gruenwedel, Sebastian; Jelaca, Vedran; Niño-Castañeda, Jorge Oswaldo; Van Hese, Peter; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2012-01-01

    Real-time tracking of people has many applications in computer vision and typically requires multiple cameras; for instance for surveillance, domotics, elderly-care and video conferencing. However, this problem is very challenging because of the need to deal with frequent occlusions and environmental changes. Another challenge is to develop solutions which scale well with the size of the camera network. Such solutions need to carefully restrict overall communication in the network and often involve distributed processing. In this paper we present a distributed person tracker, addressing the aforementioned issues. Real-time processing is achieved by distributing tasks between the cameras and a fusion node. The latter fuses only high level data based on low-bandwidth input streams from the cameras. This is achieved by performing tracking first on the image plane of each camera followed by sending only metadata to a local fusion node. We designed the proposed system with respect to a low communication load and towards robustness of the system. We evaluate the performance of the tracker in meeting scenarios where persons are often occluded by other persons and/or furniture. We present experimental results which show that our tracking approach is accurate even in cases of severe occlusions in some of the views.

  11. CCD camera for an autoguider

    Science.gov (United States)

    Schempp, William V.

    1991-06-01

    The requirements of a charge coupled device (CCD) autoguider camera and the specifications of a camera that we propose to build to meet those requirements will be discussed. The design goals of both the package and the electronics will be considered.

  12. Jailed - Video (https://jual.nipissingu.ca/wp-content/uploads/sites/25/2014/06/v61214.m4v)

    Directory of Open Access Journals (Sweden)

    Cameron CULBERT

    2012-07-01

    As the public education system in Northern Ontario continues to take a downward spiral, a plethora of secondary school students are being placed in an alternative educational environment. Juxtaposing the two educational settings reveals very similar methods and characteristics of educating our youth, as opposed to using a truly alternative approach to education. This video reviews the relationship between public education and alternative education in a remote Northern Ontario setting. It is my belief that the traditional methods of teaching are not appropriate for educating at-risk students in alternative schools. Paper-and-pencil worksheets do not motivate these students to learn and succeed. Alternative education should emphasize experiential learning, a just-in-time curriculum based on every unique individual, and the student's true passion for everyday life. Cameron Culbert was born on February 3rd, 1977 in North Bay, Ontario. His teenage years were split between attending public school and his willed curriculum on the ski hill. Culbert spent 10 years (1996-2002 & 2006-2010) competing for Canada as an alpine ski racer. His passion for teaching and coaching began as an athlete and has now transferred into the classroom and the community. As a graduate of Nipissing University (BA, BEd, MEd), Cameron's research interests are alternative education, physical education, and technology in the classroom. Currently Cameron is an active educator and coach in Northern Ontario.

  13. Digital camera self-calibration

    Science.gov (United States)

    Fraser, Clive S.

    Over the 25 years since the introduction of analytical camera self-calibration there has been a revolution in close-range photogrammetric image acquisition systems. High-resolution, large-area 'digital' CCD sensors have all but replaced film cameras. Throughout the period of this transition, self-calibration models have remained essentially unchanged. This paper reviews the application of analytical self-calibration to digital cameras. Computer vision perspectives are touched upon, the quality of self-calibration is discussed, and an overview is given of each of the four main sources of departures from collinearity in CCD cameras. Practical issues are also addressed and experimental results are used to highlight important characteristics of digital camera self-calibration.
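    The departures from collinearity referred to above are commonly parameterized with the Brown model: polynomial radial distortion plus decentering distortion. A sketch of the standard forward mapping, with coefficient names following the usual convention (the coefficient values themselves come out of the self-calibration adjustment):

```python
def brown_distort(x, y, k1, k2, k3, p1, p2):
    """Map ideal (undistorted) normalized image coordinates to their
    distorted locations with the Brown model: polynomial radial terms
    (k1, k2, k3) plus decentering terms (p1, p2)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```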

  14. Video Design Games

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer

    We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop...

  15. Enhanced Video Surveillance (EVS) with speckle imaging

    Energy Technology Data Exchange (ETDEWEB)

    Carrano, C J

    2004-01-13

    Enhanced Video Surveillance (EVS) with Speckle Imaging is a high-resolution imaging system that substantially improves resolution and contrast in images acquired over long distances. This technology will increase image resolution by up to an order of magnitude or more for video surveillance systems. The system's hardware components are all commercially available and consist of a telescope or large-aperture lens assembly, a high-performance digital camera, and a personal computer. The system's software, developed at LLNL, extends standard speckle-image-processing methods (used in the astronomical community) to solve the atmospheric blurring problem associated with imaging over medium to long distances (hundreds of meters to tens of kilometers) through horizontal or slant-path turbulence. This novel imaging technology will not only enhance national security but also will benefit law enforcement, security contractors, and any private or public entity that uses video surveillance to protect their assets.
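    The LLNL processing itself is not described in this abstract, but the core idea of recovering a sharp image from many short exposures can be illustrated with a much simpler relative, shift-and-add, which removes frame-to-frame image motion (the largest atmospheric term) before integration. A minimal sketch in plain NumPy:

```python
import numpy as np

def shift_and_add(frames):
    """Align a stack of short-exposure frames to the first frame by
    integer-pixel cross-correlation, then average.  A crude cousin of
    speckle processing: it removes frame-to-frame tilt (image motion)
    before integration, but not higher-order blurring."""
    ref = frames[0].astype(float)
    F_ref = np.fft.fft2(ref)
    acc = ref.copy()
    for frame in frames[1:]:
        f = frame.astype(float)
        # Cross-correlation via the Fourier domain; the peak location
        # gives the circular shift that best aligns f with ref.
        xc = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(f))).real
        dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
        acc += np.roll(f, (dy, dx), axis=(0, 1))
    return acc / len(frames)
```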

  16. Flight State Information Inference with Application to Helicopter Cockpit Video Data Analysis Using Data Mining Techniques

    Science.gov (United States)

    Shin, Sanghyun

    The National Transportation Safety Board (NTSB) has recently emphasized the importance of analyzing flight data as one of the most effective methods to improve efficiency and safety of helicopter operations. By analyzing flight data with Flight Data Monitoring (FDM) programs, the safety and performance of helicopter operations can be evaluated and improved. In spite of the NTSB's effort, the safety of helicopter operations has not improved at the same rate as the safety of worldwide airlines, and the accident rate of helicopters continues to be much higher than that of fixed-wing aircraft. One of the main reasons is that the participation rates of the rotorcraft industry in the FDM programs are low due to the high costs of the Flight Data Recorder (FDR), the need of a special readout device to decode the FDR, anxiety of punitive action, etc. Since a video camera is easily installed, accessible, and inexpensively maintained, cockpit video data could complement the FDR in the presence of the FDR or possibly replace the role of the FDR in the absence of the FDR. Cockpit video data is composed of image and audio data: image data contains outside views through cockpit windows and activities on the flight instrument panels, whereas audio data contains sounds of the alarms within the cockpit. The goal of this research is to develop, test, and demonstrate a cockpit video data analysis algorithm based on data mining and signal processing techniques that can help better understand situations in the cockpit and the state of a helicopter by efficiently and accurately inferring the useful flight information from cockpit video data. Image processing algorithms based on data mining techniques are proposed to estimate a helicopter's attitude such as the bank and pitch angles, identify indicators from a flight instrument panel, and read the gauges and the numbers in the analogue gauge indicators and digital displays from cockpit image data. In addition, an audio processing algorithm
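    As an illustration of the attitude-estimation part, a bank angle can be read from the tilt of the horizon line seen through the windscreen. The sketch below uses a Canny edge detector and a Hough line transform; it is a hedged stand-in for the dissertation's data-mining approach, and the thresholds are arbitrary:

```python
import cv2
import numpy as np

def bank_angle_from_horizon(frame_bgr):
    """Rough bank-angle estimate: find the strongest near-horizontal
    line (assumed to be the horizon seen through the windscreen) and
    return its tilt in degrees.  A sketch only -- real cockpit video
    needs masking of the instrument panel and temporal filtering."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180.0, threshold=120)
    if lines is None:
        return None
    for rho, theta in lines[:, 0]:
        tilt = np.degrees(theta) - 90.0   # 0 deg for a level horizon
        if abs(tilt) < 45.0:              # ignore near-vertical edges
            return -tilt                  # sign convention: positive = right bank
    return None
```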

  17. Characterization of social video

    Science.gov (United States)

    Ostrowski, Jeffrey R.; Sarhan, Nabil J.

    2009-01-01

    The popularity of social media has grown dramatically over the World Wide Web. In this paper, we analyze the video popularity distribution of well-known social video websites (YouTube, Google Video, and the AOL Truveo Video Search engine) and characterize their workload. We identify trends in the categories, lengths, and formats of those videos, as well as characterize the evolution of those videos over time. We further provide an extensive analysis and comparison of video content amongst the main regions of the world.

  18. SFDT-1 Camera Pointing and Sun-Exposure Analysis and Flight Performance

    Science.gov (United States)

    White, Joseph; Dutta, Soumyo; Striepe, Scott

    2015-01-01

    The Supersonic Flight Dynamics Test (SFDT) vehicle was developed to advance and test technologies of NASA's Low Density Supersonic Decelerator (LDSD) Technology Demonstration Mission. The first flight test (SFDT-1) occurred on June 28, 2014. To maximize the usefulness of the camera data, analysis was performed to optimize parachute visibility in the camera field of view during deployment and inflation, and to determine the probability of sun-exposure issues with the cameras given the vehicle heading and launch time. This paper documents the analysis, its results, and a comparison with flight video from SFDT-1.
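    The sun-exposure part of such an analysis reduces to a cone test: the sun is potentially in frame when the angle between the camera boresight and the sun direction is less than half the field of view. A minimal sketch (the vectors and angles are illustrative, not SFDT-1 values):

```python
import numpy as np

def sun_in_frame(boresight_unit, sun_unit, half_fov_deg):
    """True if the sun direction lies inside the camera's cone of view.
    Both inputs are unit vectors expressed in the same reference frame."""
    cos_angle = float(np.clip(np.dot(boresight_unit, sun_unit), -1.0, 1.0))
    return np.degrees(np.arccos(cos_angle)) <= half_fov_deg

# Example: sun 30 degrees off the camera boresight, 90-degree lens.
boresight = np.array([0.0, -0.5, 0.8660254])
sun = np.array([0.0, 0.0, 1.0])
print(sun_in_frame(boresight, sun, half_fov_deg=45.0))  # True
```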

  19. GPU-based View Interpolation for Smooth Camera Transitions in Soccer

    OpenAIRE

    GOORTS, Patrik; ROGMANS, Sammy; Bekaert, Philippe

    2013-01-01

    We present a system, capable of synthesizing free viewpoint video for smooth camera transitions in soccer scenes. The broadcaster can choose any camera viewpoint between the real, fixed cameras. This way, action can be followed across the field in a smooth manner, a frozen image or a replay can be viewed from multiple angles, and the broadcasted image can be transitioned from one to the other side of the field in a smooth manner to avoid orientation-related confusion of the viewers. We use a ...

  20. Development of SED Camera for Quasars in Early Universe (SQUEAN)

    Science.gov (United States)

    Kim, Sanghyuk; Jeon, Yiseul; Lee, Hye-In; Park, Woojin; Ji, Tae-Geun; Hyun, Minhee; Choi, Changsu; Im, Myungshin; Pak, Soojong

    2016-11-01

    We describe the characteristics and performance of a camera system, the Spectral energy distribution Camera for Quasars in Early Universe (SQUEAN). It was developed to measure SEDs of high-redshift quasar candidates (z ≳ 5) and other targets, e.g., young stellar objects, supernovae, and gamma-ray bursts, and to trace the time variability of SEDs of objects such as active galactic nuclei (AGNs). SQUEAN consists of an on-axis focal-plane camera module, an autoguiding system, and mechanical supporting structures. The science camera module is composed of a focal reducer, a customizable filter wheel, and a CCD camera on the focal plane. The filter wheel uses filter cartridges that can house filters with different shapes and sizes, enabling it to hold 20 filters of 50 mm × 50 mm size, 10 filters of 86 mm × 86 mm size, or many other combinations. An initial filter mask was applied to calibrate the filter wheel with high accuracy, and we verified that the filter position is repeatable to well within one pixel. We installed and tested 50 nm-bandwidth medium-band filters covering 600-1050 nm, together with other filters, during the commissioning observation in 2015 February. We found that SQUEAN can reach limiting magnitudes of 23.3-25.3 AB mag at 5σ in a one-hour total integration time.

  1. A Mobile Robot Localization via Indoor Fixed Remote Surveillance Cameras.

    Science.gov (United States)

    Shim, Jae Hong; Cho, Young Im

    2016-02-04

    Localization, a technique required by service robots to operate indoors, has been studied in various ways. Most localization techniques have the robot measure environmental information to obtain location information; however, this is a high-cost option because it uses extensive equipment and complicates robot development. If an external device is used to determine a robot's location and transmit this information to the robot, the cost of the equipment required on board for location recognition can be reduced, which simplifies robot development. Thus, this study presents an effective method to control robots by obtaining their location information from a map constructed with visual information from surveillance cameras installed indoors. With only a single image of an object, it is difficult to gauge its size due to occlusion. Therefore, we propose a localization method using several neighboring surveillance cameras. A two-dimensional map containing robot and object position information is constructed using the images from the cameras. The technique is based on modeling the four edges of the camera's projected field of coverage and on an image-processing algorithm that finds the object's center, enhancing the location estimation of objects of interest. We experimentally demonstrate the effectiveness of the proposed method by analyzing the resulting movement of a robot in response to the location information obtained from the two-dimensional map. The accuracy of the multi-camera setup was measured in advance.
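    Mapping what a fixed ceiling camera sees onto a two-dimensional floor map, as described above, is commonly done with a plane-to-plane homography estimated from a few floor landmarks. A hedged sketch of that step (the landmark coordinates below are invented for illustration):

```python
import cv2
import numpy as np

# Four floor landmarks: pixel coordinates in the surveillance image and
# their positions on the 2-D map (metres).  Values are illustrative.
img_pts = np.array([[102, 415], [538, 402], [471, 143], [160, 150]],
                   dtype=np.float32)
map_pts = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 6.0], [0.0, 6.0]],
                   dtype=np.float32)

H, _ = cv2.findHomography(img_pts, map_pts)

def image_to_map(u, v):
    """Project a pixel lying on the floor plane to map coordinates."""
    p = np.array([[[u, v]]], dtype=np.float32)
    return cv2.perspectiveTransform(p, H)[0, 0]   # (x, y) in metres

print(image_to_map(320, 280))
```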

  2. Development of a Wireless Video Transfer System for Remote Control of a Lightweight UAV

    OpenAIRE

    Tosteberg, Joakim; Axelsson, Thomas

    2012-01-01

    A team of developers from Epsilon AB has developed a lightweight remote controlled quadcopter named Crazyflie. The team wants to allow a pilot to navigate the quadcopter using video from an on-board camera as the only guidance. The master thesis evaluates the feasibility of mounting a camera module on the quadcopter and streaming images from the camera to a computer, using the existing quadcopter radio link. Using theoretical calculations and measurements, a set of requirements that must be fulfill...
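    The feasibility question comes down to arithmetic: whether the compressed image stream fits the radio link's payload capacity. A back-of-envelope sketch with assumed figures (the resolution, frame rate, compression ratio, and link rate below are illustrative, not the thesis's measurements):

```python
def required_kbps(width, height, bits_per_pixel, fps, compression_ratio):
    """Raw image bandwidth divided by an assumed compression ratio."""
    raw_bps = width * height * bits_per_pixel * fps
    return raw_bps / compression_ratio / 1000.0

# Assumed figures: QQVGA greyscale at 5 fps with 10:1 compression,
# against a radio link offering ~250 kbit/s of usable payload.
needed = required_kbps(160, 120, 8, 5, 10)
print(f"{needed:.0f} kbit/s needed vs 250 kbit/s available")  # 77 kbit/s
```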

  3. Color spaces in digital video

    Energy Technology Data Exchange (ETDEWEB)

    Gaunt, R.

    1997-05-01

    For example, humans `see` more white-to-black (luminance) detail than red, green, or blue color detail. Also, the eye is most sensitive to green colors. Taking advantage of this, both composite and component video allocate more bandwidth to the luma (Y`) signal than to the chroma signals. Y`601 is composed of 59% green`, 30% red`, and 11% blue` (the prime symbol denotes gamma-corrected colors). This luma signal also maintains compatibility with black-and-white television receivers. Component digital video converts R`G`B` signals (either from a camera or a computer) to a monochromatic brightness signal Y` (referred to here as luma to distinguish it from the CIE luminance linear-light quantity) and two color-difference signals, Cb and Cr. These last two are the blue and red signals with the luma component subtracted out. Computer graphic images are composed of red, green, and blue elements defined in a linear color space. Color monitors do not display RGB linearly: a linear RGB color space image must be gamma corrected to be displayed properly on a CRT. Gamma correction, which is approximately a 0.45 power function, must also be employed before converting an RGB image to video color space; it is defined for video in the international standard ITU-R Rec. BT.709-4, and the transform is the same for red, green, and blue. The color coding standard for component digital video and high-definition video symbolizes gamma-corrected luma by Y`, the blue difference signal by Cb (Cb = B` - Y`), and the red color difference signal by Cr (Cr = R` - Y`). Component analog HDTV uses Y`PbPr. To reduce conversion errors, clip in R`G`B`, not in Y`CbCr space. View video on a video monitor; computer-monitor phosphors are wrong. Use a large word size (double precision) to avoid wrap-around, then round the results to values between 0 and 255. And finally, recall that multiplying two 8-bit numbers results in a 16-bit number, so values need to be clipped to 8
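    The recipe above translates directly into code: gamma-correct linear RGB, form luma from the quoted 59/30/11 weights, derive the two colour-difference signals, then round and clip in 8 bits. A sketch under those assumptions (BT.709 transfer function, BT.601 luma weights and Cb/Cr scale factors, studio-range 8-bit quantization):

```python
import numpy as np

def rec709_oetf(L):
    """Gamma correction (ITU-R BT.709 transfer function, ~0.45 power)."""
    L = np.clip(L, 0.0, 1.0)
    return np.where(L < 0.018, 4.5 * L, 1.099 * L ** 0.45 - 0.099)

def rgb_linear_to_ycbcr8(r, g, b):
    """Linear RGB in [0,1] -> 8-bit studio-range Y'CbCr.
    Luma uses the 59/30/11 weights quoted above (BT.601)."""
    rp, gp, bp = rec709_oetf(r), rec709_oetf(g), rec709_oetf(b)
    y = 0.299 * rp + 0.587 * gp + 0.114 * bp          # luma Y'
    cb = 0.564 * (bp - y)                             # blue difference
    cr = 0.713 * (rp - y)                             # red difference
    # Scale to 8-bit studio range, then round and clip -- exactly the
    # "round, then clip to 0..255" advice in the abstract.
    y8  = np.clip(np.round(16.0  + 219.0 * y),  0, 255).astype(np.uint8)
    cb8 = np.clip(np.round(128.0 + 224.0 * cb), 0, 255).astype(np.uint8)
    cr8 = np.clip(np.round(128.0 + 224.0 * cr), 0, 255).astype(np.uint8)
    return y8, cb8, cr8
```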

  4. Video visual analytics

    OpenAIRE

    Höferlin, Markus Johannes

    2013-01-01

    The amount of video data recorded world-wide is growing tremendously and has already reached barely manageable dimensions. It originates from a wide range of application areas, such as surveillance, sports analysis, scientific video analysis, surgery documentation, and entertainment, and its analysis represents one of the challenges in computer science. The vast amount of video data renders manual analysis by watching it impractical. However, automatic evaluation of video material...

  5. Real-time object tracking for moving target auto-focus in digital camera

    Science.gov (United States)

    Guan, Haike; Niinami, Norikatsu; Liu, Tong

    2015-02-01

    Focusing accurately on a moving object is difficult but essential for photographing the target successfully with a digital camera. Because the object often moves randomly and changes its shape frequently, the position and distance of the target should be estimated in real time so that the camera can focus on the object precisely. We propose a new real-time object tracking method for auto-focus on moving targets in digital cameras. The video stream in the camera is used for tracking the moving target. A particle filter is used to deal with the target object's random movement and shape change. Color and edge features are used as measurements of the object's state. A parallel processing algorithm is developed to realize real-time particle-filter object tracking in the hardware environment of the digital camera. A movement prediction algorithm is also proposed to remove the focus error caused by the difference between the tracking result and the target object's real position at the moment the photo is taken. Simulation and experiment results in a digital camera demonstrate the effectiveness of the proposed method. We embedded the real-time object tracking algorithm in the digital camera. Position and distance of the moving target are obtained accurately by object tracking from the video stream. A SIMD processor is applied to provide parallel real-time processing. A processing time of less than 60 ms per frame is achieved in the digital camera, whose CPU runs at only 162 MHz.
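    A minimal sketch of the predict-weight-resample cycle such a tracker runs per frame, using colour histograms as the measurement, is given below. It illustrates the general particle-filter scheme, not the authors' camera firmware; all parameters and function names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_hist(patch, bins=8):
    """Joint RGB histogram of an HxWx3 uint8 patch, L1-normalized."""
    h, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    h = h.ravel()
    return h / (h.sum() + 1e-12)

def track_step(frame, particles, target_hist, patch=24, noise=8.0):
    """One predict/weight/resample cycle of a colour-based particle filter."""
    h, w = frame.shape[:2]
    # Predict: random-walk motion model for erratically moving targets.
    particles = particles + rng.normal(0.0, noise, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], patch, w - patch - 1)
    particles[:, 1] = np.clip(particles[:, 1], patch, h - patch - 1)
    # Weight: Bhattacharyya similarity between patch and target histograms.
    weights = np.empty(len(particles))
    for i, (x, y) in enumerate(particles.astype(int)):
        p = frame[y - patch//2:y + patch//2, x - patch//2:x + patch//2]
        weights[i] = np.sum(np.sqrt(color_hist(p) * target_hist))
    weights = (weights + 1e-12) / (weights + 1e-12).sum()
    # Resample: multinomial resampling keeps particles near good matches.
    particles = particles[rng.choice(len(particles), len(particles), p=weights)]
    estimate = particles.mean(axis=0)   # (x, y) used to drive the AF lens
    return particles, estimate
```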

  6. Citizen Camera-Witnessing: A Case Study of the Umbrella Movement

    Directory of Open Access Journals (Sweden)

    Wai Han Lo

    2016-08-01

    Citizen camera-witnessing is a new concept describing the use of a mobile camera phone to engage in civic expression. I argue that the meaning of this concept should not be limited to painful testimony; instead, it is a mode of civic, camera-mediated mass self-testimony to brutality. The use of mobile phone recordings in Hong Kong’s Umbrella Movement is examined to understand how mobile cameras are employed as personal witnessing devices that provide recordings to indict unjust events and engage others in the civic movement. This study examined the Facebook posts and YouTube videos of the Umbrella Movement between September 22, 2014 and December 22, 2014. The results suggest that the camera phone not only contributes to witnessing the brutal repression of the state, but also witnesses the beauty of the movement, and provides a testimony that allows rituals to develop and semi-codes to be transformed.

  7. NOAA Point Shapefile - Drop Camera transects, US Caribbean - Virgin Passage and St. John Shelf - Project NF-03-10-USVI-HAB - (2010), UTM 20N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. For more information about this...

  8. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.
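    Camera motion detection of the first kind can be illustrated by fitting a low-order global motion model to the decoded motion vector field by least squares. The sketch below fits a four-parameter pan/tilt/zoom/rotation model; it is a generic stand-in, not the paper's algorithm:

```python
import numpy as np

def fit_camera_motion(positions, vectors):
    """Least-squares fit of a 4-parameter global motion model to a block
    motion vector field:  u = tx + s*x - r*y,  v = ty + r*x + s*y,
    where (tx, ty) ~ pan/tilt, s ~ zoom, r ~ rotation.  Macroblocks
    following foreground objects show up as outliers and could be
    down-weighted by iterating with a robust loss."""
    x, y = positions[:, 0], positions[:, 1]
    u, v = vectors[:, 0], vectors[:, 1]
    zeros, ones = np.zeros_like(x), np.ones_like(x)
    A = np.block([[ones[:, None], zeros[:, None], x[:, None], -y[:, None]],
                  [zeros[:, None], ones[:, None], y[:, None],  x[:, None]]])
    b = np.concatenate([u, v])
    (tx, ty, s, r), *_ = np.linalg.lstsq(A, b, rcond=None)
    return tx, ty, s, r
```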

  9. INCREMENTAL REAL-TIME BUNDLE ADJUSTMENT FOR MULTI-CAMERA SYSTEMS WITH POINTS AT INFINITY

    Directory of Open Access Journals (Sweden)

    J. Schneider

    2013-08-01

    This paper presents a concept and first experiments on a keyframe-based incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. In order to avoid periodic batch steps, we use the software iSAM2 for sparse nonlinear incremental optimization, which is highly efficient through incremental variable reordering and fluid relinearization. We adapted the software to allow for (1) multi-view cameras, by taking the rigid transformation between the cameras into account, (2) omnidirectional cameras, as it can handle arbitrary bundles of rays, and (3) scene points at infinity, which improve the estimation of the camera orientation as points at the horizon can be observed over long periods of time. The real-time bundle adjustment refers to sets of keyframes, consisting of frames, one per camera, taken in a synchronized way, that are initiated if a minimal geometric distance to the last keyframe set is exceeded. It uses interest points in the keyframes as observations, which are tracked in the synchronized video streams of the individual cameras and matched across the cameras, if possible. First experiments show the potential of the incremental bundle adjustment w.r.t. time requirements. Our experiments are based on a multi-camera system with four fisheye cameras, which are mounted on a UAV as two stereo pairs, one looking ahead and one looking backwards, providing a large field of view.
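    The keyframe-initiation rule described above can be as simple as a distance gate on the estimated platform translation. A trivial sketch (the threshold value is illustrative, not the paper's):

```python
import numpy as np

def is_new_keyframe(current_pose_t, last_keyframe_t, min_distance=0.25):
    """Initiate a new keyframe set once the platform has moved far
    enough from the previous one; poses are 3-vector translations."""
    return np.linalg.norm(current_pose_t - last_keyframe_t) >= min_distance
```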

  10. Installing Piezometers in Deepwater Sediments

    Science.gov (United States)

    Lee, David R.; Harvey, F. Edwin

    1996-04-01

    A new method has been developed for installing piezometers in the sediments of deep harbors and lakes, where it has been difficult to measure hydrogeological parameters and collect pore waters for geochemical analyses. Using an underwater hammer operated from the surface, piezometers have been installed as much as 12 m below the water-sediment interface. The piezometer screen is held in place by barbs on the drive head and is connected to the water surface via flexible tubing. These piezometers have been installed from boats and from ice cover in water up to 30 m deep. However, installation in water more than 100 m deep should be possible.

  11. ROS Installation and Commissioning

    CERN Multimedia

    Gorini, B

    The ATLAS Readout group (a sub-group of TDAQ) has now completed the installation and commissioning of all of the Readout System (ROS) units. Event data from ATLAS is initially handled by detector specific hardware and software, but following a Level 1 Accept the data passes from the detector specific Readout Drivers (RODs) to the ROS, the first stage of the central ATLAS DAQ. Within the final ATLAS TDAQ system the ROS stores the data and on request makes it available to the Level 2 Trigger (L2) processors and to the Event Builder (EB) as required. The ROS is implemented as a large number of PCs housing custom built cards (ROBINs) and running custom multi-threaded software. Each ROBIN card (shown below) contains buffer memories to store the data, plus a field programmable gate array (FPGA) and an embedded PowerPC processor for management of the memories and data requests, and is implemented as a 64-bit 66 MHz PCI card. Both the software and the ROBIN cards have been designed and developed by the Readout g...

  12. A video of Mixed Interaction Space video

    DEFF Research Database (Denmark)

    Lykke, Olesen, Andreas; Hansen, Thomas Riisgaard; Eriksson, Eva

    Mixed Interaction Space is a new concept that uses the mobile phone to interact with either applications on the phone or in the environment by tracking position and rotation with the camera in four dimensions. Most mobile devices today have a camera onboard. In the project about Mixed Interaction...

  13. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano

    2014-04-01

    In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment by receiving images from multiple acquisition devices at video frequency. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must meet a set of requirements, such as time constraints, high availability, robustness, high processing speed, and re-configurability. We have built a system able to represent and analyze motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  14. Visual Acuity and Contrast Sensitivity with compressed motion video

    NARCIS (Netherlands)

    Bijl, P.; Vries, S.C. de

    2009-01-01

    Video of Visual Acuity (VA) and Contrast Sensitivity (CS) test charts in a complex background was recorded using a CCD camera mounted on a computer-controlled tripod and fed into real-time MPEG2 compression/decompression equipment. The test charts were based on the Triangle Orientation

  15. Interactive Visualization of Video Data for Fish Population Monitoring

    NARCIS (Netherlands)

    E.M.A.L. Beauxis-Aussalet (Emmanuelle); L. Hardman (Lynda); D. van Leeuwen (Dennis); P.J. Stappers; M. H. Lamers; M.J.M.R. Thissen

    2014-01-01

    The recent use of computer vision techniques for monitoring ecosystems has opened new perspectives for marine ecology research. These techniques can extract information about fish populations from in-situ cameras, without requiring ecologists to watch the videos. However, they

  16. Bodily Explorations in Space: Social Experience of a Multimodal Art Installation

    Science.gov (United States)

    Jacucci, Giulio; Spagnolli, Anna; Chalambalakis, Alessandro; Morrison, Ann; Liikkanen, Lassi; Roveda, Stefano; Bertoncini, Massimo

    We contribute an extensive field study of a public interactive art installation that applies multimodal interface technologies. The installation is part of a theater production on Galileo Galilei and includes projected galaxies that are generated and move according to the motion of visitors, changing colour depending on their voices, and projected stars that configure themselves around the shadows of visitors. In the study we employ emotion scales (PANAS), qualitative analysis of questionnaire answers, and video recordings. PANAS ratings indicate predominantly positive feelings, further described in the subjective verbalizations as gravitating around interest, ludic pleasure, and transport. Through the video analysis, we identified three phases in the interaction with the artwork (circumspection, testing, play) and two pervasive features of these phases (experience sharing and imitation), which were also found in the verbalizations. Both video and verbalisations suggest that visitors’ experience and ludic pleasure are rooted in the embodied, performative interaction with the installation and are negotiated with the other visitors.

  17. Creating small-format camera coordinates: 3 techniques for interior orientation

    Science.gov (United States)

    Warner, William S.

    1993-10-01

    Interior orientation (the initial step in comparator-based photogrammetry) requires a set of known photo-coordinates to act as reference points. Fiducials serve this purpose on metric survey cameras as do reseau targets on small-format, semi-metric cameras. Standard small cameras, however, are equipped with neither fiducials nor a reseau plate. To overcome this problem three attempts were made to create small-format camera coordinates. In the first method a clear template with four calibrated cross-hair targets was superimposed on a 35 mm diapositive (after the film was exposed). The second technique installed fiducials in the 35 mm camera body. In the third technique, original 645 imagery was custom enlarged so as to expose the entire frame edge on the print.
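    Once reference coordinates exist by any of these techniques, interior orientation reduces to estimating a plane transformation, typically a six-parameter affine, from comparator measurements of the marks to their calibrated photo coordinates. A least-squares sketch of that step (function names are illustrative):

```python
import numpy as np

def fit_affine(measured_xy, calibrated_xy):
    """Six-parameter affine transform mapping comparator measurements
    of the reference marks to their calibrated photo coordinates.
    Needs at least three marks; four (as with the cross-hair template)
    gives a least-squares solution with redundancy."""
    n = len(measured_xy)
    A = np.zeros((2 * n, 6))
    b = np.asarray(calibrated_xy, dtype=float).ravel()
    for i, (x, y) in enumerate(measured_xy):
        A[2 * i]     = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)   # [[a, b, c], [d, e, f]]

def apply_affine(T, x, y):
    """Transform a comparator measurement into photo coordinates."""
    return (T[0, 0] * x + T[0, 1] * y + T[0, 2],
            T[1, 0] * x + T[1, 1] * y + T[1, 2])
```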

  18. Sending Safety Video over WiMAX in Vehicle Communications

    Directory of Open Access Journals (Sweden)

    Jun Steed Huang

    2013-10-01

    This paper reports on the design of an OPNET simulation platform to test the performance of sending real-time safety video over VANET (Vehicular Adhoc NETwork) using the WiMAX technology. To provide a more realistic environment for streaming real-time video, a video model was created based on the study of video traffic traces captured from a realistic vehicular camera, and different design considerations were taken into account. A practical controller over real-time streaming protocol is implemented to control data traffic congestion for future road safety development. Our driving video model was then integrated with the WiMAX OPNET model along with a mobility model based on real road maps. Using this simulation platform, different mobility cases have been studied and the performance evaluated in terms of end-to-end delay, jitter and visual experience.

  19. Secure and Efficient Reactive Video Surveillance for Patient Monitoring

    Directory of Open Access Journals (Sweden)

    An Braeken

    2016-01-01

    Video surveillance is widely deployed for many kinds of monitoring applications in healthcare and assisted living systems. Security and privacy are two key factors that align the quality and validity of video surveillance systems with the demands of patient monitoring applications. In this paper, we propose a symmetric key-based security framework for the reactive video surveillance of patients, based on inputs coming from data measured by a wireless body area network attached to the human body. Only authenticated patients are able to activate the video cameras, whereas the patient and authorized people can consult the video data. User and location privacy are guaranteed for the patient at every moment. A tradeoff between security and quality of service is defined in order to ensure that the surveillance system gets activated even in emergency situations. In addition, the solution includes resistance against tampering with the device on the patient’s side.
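    The paper's protocol is not reproduced in this abstract, but the symmetric-key flavour of such a framework can be illustrated with a message-authentication sketch: the patient's device proves possession of a shared key when requesting camera activation. Everything below (the token format and the 30-second freshness window) is an assumption for illustration, not the authors' scheme:

```python
import hashlib
import hmac
import os
import time

SHARED_KEY = os.urandom(32)   # provisioned to patient device and server

def make_activation_token(patient_id: str, camera_id: str) -> bytes:
    """Authenticate a camera-activation request with an HMAC over the
    identities and a coarse timestamp (for freshness).  Illustrative
    only -- the real framework must also handle key management,
    privacy, and the emergency-override trade-off described above."""
    msg = f"{patient_id}|{camera_id}|{int(time.time()) // 30}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

def verify(token: bytes, patient_id: str, camera_id: str) -> bool:
    """Constant-time check against a freshly computed token."""
    expected = make_activation_token(patient_id, camera_id)
    return hmac.compare_digest(token, expected)
```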

  20. Eye-Movement Tracking Using Compressed Video Images

    Science.gov (United States)

    Mulligan, Jeffrey B.; Beutter, Brent R.; Hull, Cynthia H. (Technical Monitor)

    1994-01-01

    Infrared video cameras offer a simple noninvasive way to measure the position of the eyes using relatively inexpensive equipment. Several commercial systems are available which use special hardware to localize features in the image in real time, but the constraint of real-time performance limits the complexity of the applicable algorithms. In order to get better resolution and accuracy, we have used off-line processing to apply more sophisticated algorithms to the images. In this case, a major technical challenge is the real-time acquisition and storage of the video images. This has been solved using a strictly digital approach, exploiting the burgeoning field of hardware video compression. In this paper we describe the algorithms we have developed for tracking the movements of the eyes in video images, and present experimental results showing how the accuracy is affected by the degree of video compression.
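    As an illustration of the image-processing end of such a system, a dark-pupil tracker can start from a simple threshold-and-centroid step. A hedged first-pass sketch (the threshold is arbitrary; an offline system of the kind described above would refine this with ellipse fitting and sub-pixel interpolation, and must also tolerate compression artefacts):

```python
import numpy as np

def pupil_center(gray_frame, threshold=40):
    """Locate the pupil in an infrared eye image as the centroid of the
    darkest pixels (the pupil appears dark under off-axis IR lighting).
    Returns (x, y) in pixels, or None if no dark region is found."""
    mask = gray_frame < threshold
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()
```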