WorldWideScience

Sample records for video camera mounted

  1. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    Science.gov (United States)

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department. This is meant to be an attempt to record the real point of view of the surgeon's magnified vision, so as to make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with the GoPro® 4 Session action cam (commercially available) and ten with our new prototype of head-mounted video camera. Settings were selected before surgery for both cameras. Recording time is approximately 1-2 h for the GoPro® and 3-5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functions is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees better results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon, and accurately shows the surgeon's magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might serve as a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  2. [Electro-mechanic steering device for head-lamp mounted miniature video cameras].

    Science.gov (United States)

    Ilgner, J; Westhofen, M

    2003-05-01

    Endoscopic or microscopic video recordings set a widely established standard for medico-legal documentation of operative procedures. In addition, they are an essential part of undergraduate as well as postgraduate medical education. Macroscopic operations in the head and neck can be recorded by miniaturised video cameras attached to the surgeon's head lamp. The authors present an electro-mechanic steering device which has been designed to overcome the parallax error created with a head-mounted video camera, especially as the distance of the camera to the operative field varies. The device can be operated by the theatre staff, while the sterility of the operative field is maintained and the surgeon's physical working range remains unrestricted. As the video image is reliably centred to the operative field throughout the procedure, a better orientation and understanding for spectators who are unfamiliar with the surgical steps is obtained. While other adverse factors to macroscopic head-mounted video recordings, such as involuntary head movements of the surgeon, remain unchanged, the device adds to a higher quality of video documentation as it relieves the surgeon from adjusting the image field to the regions of interest. Additional benefit could be derived from an auto-focus feature or from image stabilising devices.

  3. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    Science.gov (United States)

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be of high quality, which allowed for zooming and clear visualization of the surgical anatomy. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its light weight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high-quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  4. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Directory of Open Access Journals (Sweden)

    Akshay Gopinathan Nair

    2015-01-01

    Full Text Available Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be of high quality, which allowed for zooming and clear visualization of the surgical anatomy. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its light weight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high-quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  5. Studying complex decision making in natural settings: using a head-mounted video camera to study competitive orienteering.

    Science.gov (United States)

    Omodei, M M; McLennan, J

    1994-12-01

    Head-mounted video recording is described as a potentially powerful method for studying decision making in natural settings. Most alternative data-collection procedures are intrusive and disruptive of the decision-making processes involved, while conventional video-recording procedures are either impractical or impossible. As a severe test of the robustness of the methodology, we studied the decision making of 6 experienced orienteers who carried a head-mounted lightweight video camera as they navigated, running as fast as possible, around a set of control points in a forest. Use of the Wilcoxon matched-pairs signed-ranks test indicated that compared with free recall, video-assisted recall evoked (a) significantly greater experiential immersion in the recall, (b) significantly more specific recollections of navigation-related thoughts and feelings, (c) significantly more realizations of map and terrain features and aspects of running speed which were not noticed at the time of actual competition, and (d) significantly greater insight into specific navigational errors and the intrusion of distracting thoughts into the decision-making process. Potential applications of the technique in (a) the environments of emergency services, (b) therapeutic contexts, (c) education and training, and (d) sports psychology are discussed.

  6. Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Science.gov (United States)

    Interest in the use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...

  7. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  8. Scanning laser video camera/ microscope

    Science.gov (United States)

    Wang, C. P.; Bow, R. T.

    1984-10-01

    A laser scanning system capable of scanning at standard video rate has been developed. The scanning mirrors, circuit design and system performance, as well as its applications to video cameras and ultra-violet microscopes, are discussed.

  9. Use of an unmanned aerial vehicle-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Science.gov (United States)

    We determined the feasibility of using unmanned aerial vehicle (UAV) video monitoring to predict intake of discrete food items of rangeland-raised Raramuri Criollo non-nursing beef cows. Thirty-five cows were released into a 405-m² rectangular dry lot, either in pairs (pilot tests) or individually (...

  10. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    Science.gov (United States)

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  11. Controlled Impact Demonstration (CID) tail camera video

    Science.gov (United States)

    1984-01-01

    The Controlled Impact Demonstration (CID) was a joint research project by NASA and the FAA to test a survivable aircraft impact using a remotely piloted Boeing 720 aircraft. The tail camera movie is one shot running 27 seconds. It shows the impact from the perspective of a camera mounted high on the vertical stabilizer, looking forward over the fuselage and wings.

  12. Face identification in videos from mobile cameras

    OpenAIRE

    Mu, Meiru; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2014-01-01

    It is still challenging to recognize faces reliably in videos from a mobile camera, although mature automatic face recognition technology for still images has been available for quite some time. Suppose we want to be alerted when suspects appear in the recording of a police Body-Cam; even a good face matcher on still images would give many false alarms due to the uncontrolled conditions. This paper presents an approach to identify faces in videos from mobile cameras. A commercial face matcher F...

  13. Using a Head-Mounted Camera to Infer Attention Direction

    Science.gov (United States)

    Schmitow, Clara; Stenberg, Gunilla; Billard, Aude; von Hofsten, Claes

    2013-01-01

    A head-mounted camera was used to measure head direction. The camera was mounted to the forehead of 20 6- and 20 12-month-old infants while they watched an object held at 11 horizontal (-80° to + 80°) and 9 vertical (-48° to + 50°) positions. The results showed that the head always moved less than required to be on target. Below 30° in the…

  14. Face identification in videos from mobile cameras

    NARCIS (Netherlands)

    Mu, Meiru; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2014-01-01

    It is still challenging to recognize faces reliably in videos from a mobile camera, although mature automatic face recognition technology for still images has been available for quite some time. Suppose we want to be alerted when suspects appear in the recording of a police Body-Cam, even a good face

  15. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    Science.gov (United States)

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part, of the surgery, leaving out other crucial parts including the opening, approach, and closing of the surgical site. In addition, many other procedures, including complex spine, trauma, and intensive care unit procedures, are rarely recorded at all. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system used to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded over 50 cranial and spinal surgeries in stereoscopic 3D and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset that supplements 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  16. Performance Test of the First Prototype of 2 Ways Video Camera for the Muon Barrel Position Monitor

    CERN Document Server

    Brunel, Laurent; Bondar, Tamas; Bencze, Gyorgy; Raics, Peter; Szabó, Jozsef

    1998-01-01

    The CMS Barrel Position Monitor is based on 360 video cameras mounted on 36 very stable mechanical structures. One type of camera is used to observe optical sources mounted on the muon chambers. A first prototype was produced to test the main performances. This report gives the experimental results about stability, linearity and temperature effects.

  17. Precision analysis of triangulations using forward-facing vehicle-mounted cameras for augmented reality applications

    Science.gov (United States)

    Schmid, Stephan; Fritsch, Dieter

    2017-06-01

    One crucial ingredient for augmented reality applications is information about the environment. In this paper, we examine the case of an augmented video application for forward-facing vehicle-mounted cameras. In particular, we examine the method of obtaining geometry information of the environment via stereo computation / structure from motion. A detailed analysis of the geometry of the problem is provided, in particular of the singularity in front of the vehicle. For typical scenes, we compare monocular configurations with stereo configurations subject to the packaging constraints of forward-facing cameras in consumer vehicles.

  18. A method to synchronise video cameras using the audio band.

    Science.gov (United States)

    Leite de Barros, Ricardo Machado; Guedes Russomanno, Tiago; Brenzikofer, René; Jovino Figueroa, Pascual

    2006-01-01

    This paper proposes and evaluates a novel method for synchronisation of video cameras using the audio band. The method consists in generating and transmitting an audio signal through radio frequency to receivers connected to the microphone input of the cameras, inserting the signal into the audio band. In a software environment, the phase differences among the video signals are calculated and used to interpolate the synchronous 2D projections of the trajectories. The validation of the method was based on: (1) analysis of the phase difference between two video signals as a function of time; (2) comparison between the values measured with an oscilloscope and by the proposed method; (3) estimation of the improvement in accuracy in measuring the distance between two markers mounted on a rigid body during movement when applying the method. The results showed that the phase difference changes slowly (0.150 ms/min) and linearly over time, even when cameras of the same model are used. The values measured by the proposed method and by oscilloscope were equivalent (R² = 0.998); the root mean square of the difference between the measurements was 0.10 ms and the maximum difference found was 0.31 ms. Applying the new method, the accuracy of the 3D reconstruction showed a statistically significant improvement. The accuracy, simplicity and wide applicability of the proposed method constitute the main contributions of this work.
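
    As a rough illustration of the synchronisation idea, the sketch below (Python; the function names and array layouts are our own, not from the paper) estimates the time offset between two cameras by cross-correlating the shared audio-band sync signal, then interpolates one camera's 2D marker trajectory onto the other's time base:

    ```python
    import numpy as np

    def phase_offset(audio_a, audio_b, fs):
        """Estimate the time offset (s) between two cameras from the shared
        audio-band sync signal via cross-correlation."""
        corr = np.correlate(audio_a - audio_a.mean(),
                            audio_b - audio_b.mean(), mode="full")
        lag = np.argmax(corr) - (len(audio_b) - 1)   # lag in samples
        return lag / fs

    def align_trajectory(t, xy, offset):
        """Interpolate a camera's 2D marker trajectory onto a common time
        base shifted by the measured offset (linear interpolation)."""
        x = np.interp(t, t + offset, xy[:, 0])
        y = np.interp(t, t + offset, xy[:, 1])
        return np.column_stack([x, y])
    ```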

  19. Video inpainting under constrained camera motion.

    Science.gov (United States)

    Patwardhan, Kedar A; Sapiro, Guillermo; Bertalmío, Marcelo

    2007-02-01

    A framework for inpainting missing parts of a video sequence recorded with a moving or stationary camera is presented in this work. The region to be inpainted is general: it may be still or moving, in the background or in the foreground, it may occlude one object and be occluded by another. The algorithm consists of a simple preprocessing stage and two steps of video inpainting. In the preprocessing stage, we roughly segment each frame into foreground and background. We use this segmentation to build three image mosaics that help to produce time-consistent results and also improve the performance of the algorithm by reducing the search space. In the first video inpainting step, we reconstruct moving objects in the foreground that are "occluded" by the region to be inpainted. To this end, we fill the gap as much as possible by copying information from the moving foreground in other frames, using a priority-based scheme. In the second step, we inpaint the remaining hole with the background. To accomplish this, we first align the frames and directly copy when possible. The remaining pixels are filled in by extending spatial texture synthesis techniques to the spatiotemporal domain. The proposed framework has several advantages over state-of-the-art algorithms that deal with similar types of data and constraints. It permits some camera motion, is simple to implement, fast, does not require statistical models of the background or foreground, and works well in the presence of rich and cluttered backgrounds; the results show no visible blurring or motion artifacts. A number of real examples taken with a consumer hand-held camera are shown supporting these findings.
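
    The "align the frames and directly copy" background step could look roughly like the following sketch (Python with OpenCV; `fill_hole_from_frame` and its interface are hypothetical, and the paper's priority-based foreground step and spatiotemporal texture synthesis are not shown):

    ```python
    import cv2
    import numpy as np

    def fill_hole_from_frame(target, source, hole_mask):
        """Align a neighbouring frame to the current one with a feature-based
        homography and copy aligned background pixels into the masked hole.
        Inputs are 8-bit grayscale frames; hole_mask is nonzero in the hole."""
        orb = cv2.ORB_create()
        k1, d1 = orb.detectAndCompute(target, None)
        k2, d2 = orb.detectAndCompute(source, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # source -> target
        warped = cv2.warpPerspective(source, H,
                                     (target.shape[1], target.shape[0]))
        out = target.copy()
        out[hole_mask > 0] = warped[hole_mask > 0]            # direct-copy step
        return out
    ```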

  20. A single pixel camera video ophthalmoscope

    Science.gov (United States)

    Lochocki, B.; Gambin, A.; Manzanera, S.; Irles, E.; Tajahuerce, E.; Lancis, J.; Artal, P.

    2017-02-01

    There are several ophthalmic devices for imaging the retina, from fundus cameras capable of imaging the whole fundus to scanning ophthalmoscopes with photoreceptor resolution. Unfortunately, these devices are prone to a variety of ocular conditions, like defocus and media opacities, which usually degrade the quality of the image. Here, we demonstrate a novel approach to image the retina in real time using a single pixel camera, which has the potential to circumvent those optical restrictions. The imaging procedure is as follows: a set of spatially coded patterns is projected rapidly onto the retina using a digital micromirror device. At the same time, the intensity of the inner product is measured for each pattern with a photomultiplier module. Subsequently, an image of the retina is reconstructed computationally. The obtained image resolution is up to 128 x 128 px, with a real-time video frame rate of up to 11 fps. Experimental results obtained in an artificial eye confirm the tolerance against defocus compared to a conventional multi-pixel array based system. Furthermore, the use of multiplexed illumination offers an SNR improvement, leading to lower illumination of the eye and hence an increase in patient comfort. In addition, the proposed system could enable imaging in wavelength ranges where cameras are not available.
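
    The reconstruction principle can be sketched in a few lines (Python with NumPy/SciPy; this simulates the measurement rather than driving real hardware, and the image size is an arbitrary choice):

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    N = 64               # reconstruct an N x N image; N*N must be a power of 2
    H = hadamard(N * N)  # one +/-1 pattern per row, mutually orthogonal

    def measure(scene):
        """Simulated photomultiplier output: one inner product per pattern."""
        return H @ scene.ravel()

    def reconstruct(y):
        """Orthogonality of Hadamard rows (H @ H.T = N*N * I) gives a
        closed-form inverse of the measurement."""
        return (H.T @ y / (N * N)).reshape(N, N)
    ```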

  1. VIDEO-BASED POINT CLOUD GENERATION USING MULTIPLE ACTION CAMERAS

    Directory of Open Access Journals (Sweden)

    T. Teo

    2015-05-01

    Full Text Available Due to the development of action cameras, the use of video technology for collecting geo-spatial data has become an important trend. The objective of this study is to compare the image mode and video mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations while videos are taken along continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As action cameras usually have a large FOV in wide viewing mode, camera calibration plays an important role in correcting the lens distortion before image matching. Once the cameras have been calibrated, the authors use them to take video in an indoor environment. The videos are converted into multiple frame images based on the frame rate. In order to overcome time synchronization issues between videos from different viewpoints, an additional timer app is used to determine the time-shift factor between cameras for time alignment. A structure from motion (SfM) technique is utilized to obtain the image orientations. Then, the semi-global matching (SGM) algorithm is adopted to obtain dense 3D point clouds. The preliminary results indicate that the 3D points from 4K video are similar to those from 12MP images, but the data acquisition performance of 4K video is more efficient than that of 12MP digital images.
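
    Step (2), video conversion and time alignment, might be sketched as follows (Python with OpenCV; `time_shift` stands in for the offset measured with the timer app, and the function name is ours):

    ```python
    import cv2

    def video_to_frames(path, step=1, time_shift=0.0):
        """Convert an action-camera video into frame images, skipping the
        first `time_shift` seconds so that multiple cameras share a common
        time origin."""
        cap = cv2.VideoCapture(path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        start = int(round(time_shift * fps))
        frames, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx >= start and (idx - start) % step == 0:
                frames.append(frame)
            idx += 1
        cap.release()
        return frames, fps
    ```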

  2. Helmet-mounted uncooled FPA camera for buried object detection

    Science.gov (United States)

    Miller, John L.; Duvoisin, Herbert A., III; Wiltsey, George

    1997-08-01

    Software neural nets hosted on a parallel processor can analyze input from an IR imager to evaluate the likelihood of a buried object. However, it is only recently that low-weight, staring LWIR sensors have become available in uncooled formats at sensitivities that provide enough information for useful man-portable, helmet-mounted applications. The images from the IR sensor are presented to the user through a see-through display after processing and highlighting by a neural net housed in a fanny-pack. This paper describes the phenomenology of buried object detection in the infrared, the neural-net-based image processing, the helmet-mounted IR sensor, and the ergonomics of mounting a sensor to headgear. The maturing and commercialization of uncooled focal plane arrays and high-density electronics enable lightweight, low-cost, small camera packages that can be integrated with hard hats and military helmets. The headgear described has a noise equivalent delta temperature (NEDT) of less than 50 millikelvin, consumes less than 10 watts, and weighs about 1.5 kilograms.

  3. Initial laboratory evaluation of color video cameras: Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  4. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.

  5. Use of infrared TV cameras built into head-mounted display to measure torsional eye movements.

    Science.gov (United States)

    Ukai, K; Saida, S; Ishikawa, N

    2001-01-01

    The head-mounted display (HMD) produces a conflict between visual and vestibular stimuli because the HMD image does not move with the head motion of the wearer. The HMD can show binocular-parallax three-dimensional (3D) images, in which vergence and accommodation conflict. Thus, the HMD may affect normal visual/vestibular functions. We attempted to develop a system that makes possible the measurement of torsional eye movements, vergence eye movements, and pupillary responses of the HMD wearer. Our apparatus is composed of two infrared CCD cameras installed in the HMD. Iris images produced by these cameras are analyzed by a personal computer using free software. Further, a third camera fixed on the HMD projects an image of the view as the subject sees it, via video tape recorder or frame memory, to the HMD. Images can be stored, replayed, or frozen. Our system can measure torsional eye movement with 0.20 degree resolution every 1/30 (or 1/60) of a second, even when pupil size changes during measurement. Binocular eye movement and pupillary response are also measured. A system was developed which can be used for assessment of the effect of 3D HMDs on the visual system. A third camera coupled with the HMD can control the visual stimulus independently of head motion (the vestibular stimulus).

  6. Camcorder 101: Buying and Using Video Cameras.

    Science.gov (United States)

    Catron, Louis E.

    1991-01-01

    Lists nine practical applications of camcorders to theater companies and programs. Discusses the purchase of video gear, camcorder features, accessories, the use of the camcorder in the classroom, theater management, student uses, and video production. (PRA)

  7. Analysis of unstructured video based on camera motion

    Science.gov (United States)

    Abdollahian, Golnaz; Delp, Edward J.

    2007-01-01

    Although considerable work has been done in the management of "structured" video, such as movies, sports, and television programs that have known scene structures, "unstructured" video analysis is still a challenging problem due to its unrestricted nature. The purpose of this paper is to address issues in the analysis of unstructured video, in particular video shot by a typical unprofessional user (i.e., home video). We describe how one can make use of camera motion information for unstructured video analysis. A new concept, "camera viewing direction," is introduced as the building block of home video analysis. Motion displacement vectors are employed to temporally segment the video based on this concept. We then find the correspondence between the camera behavior and the subjective importance of the information in each segment, and describe how different patterns in the camera motion can indicate levels of interest in a particular object or scene. By extracting these patterns, the most representative frames (keyframes) for the scenes are determined and aggregated to summarize the video sequence.
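
    A minimal version of the motion-displacement measurement driving such segmentation could be (Python with OpenCV; dense Farneback flow is our stand-in for the paper's motion displacement vectors):

    ```python
    import cv2

    def mean_motion(prev_gray, gray):
        """Global camera-motion displacement (dx, dy) between consecutive
        grayscale frames, computed from dense optical flow."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        return flow.reshape(-1, 2).mean(axis=0)
    ```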

  8. Demonstrations of Optical Spectra with a Video Camera

    Science.gov (United States)

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…

  9. Surgical video recording with a modified GoPro Hero 4 camera

    Directory of Open Access Journals (Sweden)

    Lin LK

    2016-01-01

    Full Text Available Lily Koo Lin Department of Ophthalmology and Vision Science, University of California, Davis Eye Center, Sacramento, CA, USA Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method: The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results: Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion: The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. Keywords: teaching, oculoplastic, strabismus

  10. Surgical video recording with a modified GoPro Hero 4 camera.

    Science.gov (United States)

    Lin, Lily Koo

    2016-01-01

    Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination.

  11. Automatic Person Identification in Camera Video by Motion Correlation

    Directory of Open Access Journals (Sweden)

    Dingbo Duan

    2014-01-01

    Full Text Available Person identification plays an important role in semantic analysis of video content. This paper presents a novel method to automatically label persons in video sequences captured by a fixed camera. Instead of leveraging traditional face recognition approaches, we deal with the task of person identification by fusing information from motion sensor platforms, such as smartphones carried on human bodies, with information extracted from the camera video. More specifically, a sequence of motion features extracted from the camera video is compared with those collected from the accelerometers of smartphones. When strong correlation is detected, identity information transmitted from the corresponding smartphone is used to identify the phone wearer. To test the feasibility and efficiency of the proposed method, extensive experiments were conducted, achieving impressive performance.
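
    The core correlation test can be sketched as follows (Python with NumPy; feature extraction from the video and the accelerometers is assumed done, and the 0.7 threshold is an arbitrary placeholder, not from the paper):

    ```python
    import numpy as np

    def motion_correlation(video_feat, accel_feat):
        """Peak normalized cross-correlation between a motion-energy series
        from camera video and an accelerometer-magnitude series."""
        a = (video_feat - video_feat.mean()) / video_feat.std()
        b = (accel_feat - accel_feat.mean()) / accel_feat.std()
        return (np.correlate(a, b, mode="full") / len(a)).max()

    def identify(video_feat, phone_streams, threshold=0.7):
        """Label the tracked person with the phone whose motion correlates
        best, if the correlation is strong enough."""
        scores = {pid: motion_correlation(video_feat, s)
                  for pid, s in phone_streams.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] >= threshold else None
    ```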

  12. Improving photometric calibration of meteor video camera systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-09-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼ 0.20 mag , and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼ 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
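
    The zero-point determination described amounts to comparing linearity-corrected instrumental magnitudes against the synthetic EX-bandpass catalog magnitudes; a minimal sketch (Python; `linearity` is a user-supplied correction curve and the function is ours, not MEO code):

    ```python
    import numpy as np

    def zero_point(instr_counts, catalog_mag, linearity):
        """Photometric zero-point from reference stars:
        m = -2.5 log10(C_lin) + ZP, with C_lin the linearity-corrected counts."""
        instr_mag = -2.5 * np.log10(linearity(instr_counts))
        residuals = catalog_mag - instr_mag
        return np.median(residuals), np.std(residuals)  # ZP and its scatter
    ```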

  13. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.

  14. Geometrical modelling and calibration of video cameras for underwater navigation

    Energy Technology Data Exchange (ETDEWEB)

    Melen, T.

    1994-11-01

    Video cameras and other visual sensors can provide valuable navigation information for underwater remotely operated vehicles. The thesis relates to the geometric modelling and calibration of video cameras. To exploit the accuracy potential of a video camera, all systematic errors must be modelled and compensated for. This dissertation proposes a new geometric camera model, where linear image plane distortion (difference in scale and lack of orthogonality between the image axes) is compensated for after, and separately from, lens distortion. The new model can be viewed as an extension of the linear or DLT (Direct Linear Transformation) model and as a modification of the model traditionally used in photogrammetry. The new model can be calibrated from both planar and nonplanar calibration objects. The feasibility of the model is demonstrated in a typical camera calibration experiment, which indicates that the new model is more accurate than the traditional one. It also gives a simple solution to the problem of computing undistorted image coordinates from distorted ones. Further, the dissertation suggests how to get initial estimates for all the camera model parameters, how to select the number of parameters modelling lens distortion, and how to reduce the dimension of the search space in the nonlinear optimization. There is also a discussion on the use of analytical partial derivatives. The new model is particularly well suited for video images with non-square pixels, but it may also be used to advantage with professional photogrammetric equipment. 63 refs., 11 figs., 6 tabs.
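
    A first-order sketch of the model's two-stage structure (Python; real calibration estimates these parameters and inverts the lens distortion iteratively, so this is illustrative only and the names are ours):

    ```python
    def correct_point(xd, yd, k1, k2, scale=1.0, shear=0.0):
        """Compensate radial lens distortion first, then apply the separate
        linear image-plane correction (scale difference and axis shear)."""
        r2 = xd ** 2 + yd ** 2
        f = 1.0 + k1 * r2 + k2 * r2 ** 2    # radial distortion factor
        xu, yu = xd * f, yd * f             # lens distortion compensated
        return xu + shear * yu, scale * yu  # linear image-plane correction
    ```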

  15. Court Reconstruction for Camera Calibration in Broadcast Basketball Videos.

    Science.gov (United States)

    Wen, Pei-Chih; Cheng, Wei-Chih; Wang, Yu-Shuen; Chu, Hung-Kuo; Tang, Nick C; Liao, Hong-Yuan Mark

    2016-05-01

    We introduce a technique for calibrating camera motion in basketball videos. Our method transforms player positions to standard basketball court coordinates and enables applications such as tactical analysis and semantic basketball video retrieval. To achieve a robust calibration, we reconstruct the panoramic basketball court from a video, followed by warping the panoramic court to a standard one. As opposed to previous approaches, which individually detect the court lines and corners of each video frame, our technique considers all video frames simultaneously to achieve calibration; hence, it is robust to illumination changes and player occlusions. To demonstrate the feasibility of our technique, we present a stroke-based system that allows users to retrieve basketball videos. Our system tracks player trajectories from broadcast basketball videos. It then rectifies the trajectories to a standard basketball court by using our camera calibration method. Consequently, users can apply stroke queries to indicate how the players move in gameplay during retrieval. The main advantage of this interface is an explicit query of basketball videos, so that unwanted outcomes can be prevented. We show the results in Figs. 1, 7, 9, 10 and our accompanying video to exhibit the feasibility of our technique.
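
    Once a calibration homography from image to standard-court coordinates is available, rectifying tracked player positions is a one-liner (Python with OpenCV; obtaining H via the panoramic court reconstruction is the paper's contribution and is assumed given here):

    ```python
    import cv2
    import numpy as np

    def to_court_coords(points_px, H):
        """Map tracked player positions (pixels) to standard basketball-court
        coordinates with the 3x3 calibration homography H."""
        pts = np.float32(points_px).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    ```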

  16. Ball lightning observation: an objective video-camera analysis report

    OpenAIRE

    Sello, Stefano; Viviani, Paolo; Paganini, Enrico

    2011-01-01

    In this paper we describe a video-camera recording of a (probable) ball lightning event and both the related image and signal analyses for its photometric and dynamical characterization. The results strongly support the BL nature of the recorded luminous ball object and allow the researchers to have an objective and unique video document of a possible BL event for further analyses. Some general evaluations of the obtained results considering the proposed ball lightning models conclude the paper.

  17. Real-Time Facial Expression Transfer with Single Video Camera

    OpenAIRE

    Liu, S.; Yang, Xiaosong; Wang, Z.; Xiao, Zhidong; Zhang, J.

    2016-01-01

    Facial expression transfer is currently an active research field. However, 2D image warping based methods suffer from depth ambiguity, and specific hardware is required for depth-based methods to work. We present a novel markerless, real-time online facial transfer method that requires only a single video camera. Our method adapts a model to user-specific facial data, computes expression variances in real time and rapidly transfers them to another target. Our method can be applied to videos w...

  18. Solid-State Video Camera for the Accelerator Environment

    Energy Technology Data Exchange (ETDEWEB)

    Brown, R

    2004-05-27

    Solid-state video cameras employing CMOS technology have been developed and tested for several years in the SLAC accelerator, notably in the PEPII (BaBar) injection lines. They have proven much more robust than their CCD counterparts in radiation areas. Repair is simple, inexpensive, and generates very little radioactive waste.

  19. Markerless Augmented Reality via Stereo Video See-Through Head-Mounted Display Device

    Directory of Open Access Journals (Sweden)

    Chung-Hung Hsieh

    2015-01-01

    Full Text Available Conventionally, camera localization for augmented reality (AR) relies on detecting a known pattern within the captured images. In this study, a markerless AR scheme has been designed based on a Stereo Video See-Through Head-Mounted Display (HMD) device. The proposed markerless AR scheme can be utilized for medical applications such as training, telementoring, or preoperative explanation. Firstly, a virtual model for AR visualization is aligned to the target in physical space by an improved Iterative Closest Point (ICP) based surface registration algorithm, with the target surface structure reconstructed by a stereo camera pair; then, a markerless AR camera localization method is designed based on the Kanade-Lucas-Tomasi (KLT) feature tracking algorithm and the Random Sample Consensus (RANSAC) correction algorithm. Our AR camera localization method is shown to be better than traditional marker-based and sensor-based AR environments. The demonstration system was evaluated with a plastic dummy head and the display result is satisfactory for multiple-view observation.

  20. CameraCast: flexible access to remote video sensors

    Science.gov (United States)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.

  1. Teacher training for using digital video camera in primary education

    Directory of Open Access Journals (Sweden)

    Pablo García Sempere

    2011-12-01

    Full Text Available This paper shows the partial results of a research project carried out in primary schools, which evaluates the ability of teachers to use a digital video camera. The study took place in the province of Granada, Spain. Our purpose was to determine teachers' level of knowledge, interest, difficulties, and training needs, so as to improve teaching practice. The work has been done from a descriptive and eclectic approach. Quantitative (questionnaire) and qualitative (focus group) techniques have been used in this research. The information obtained shows that most of the teachers lack knowledge in the use of the video camera and digital editing. On the other hand, the majority agree to include initial and permanent training on this subject. Finally, the most important conclusions are presented.

  2. Machine vision: recent advances in CCD video camera technology

    Science.gov (United States)

    Easton, Richard A.; Hamilton, Ronald J.

    1997-09-01

    This paper describes four state-of-the-art digital video cameras, which provide advanced features that benefit computer image enhancement, manipulation, and analysis. These cameras were designed to reduce the complexity of imaging systems while increasing the accuracy, dynamic range, and detail enhancement of product inspections. Two cameras utilize progressive scan CCD sensors, enabling the capture of high-resolution images of moving objects without the need for strobe lights or mechanical shutters. The second progressive scan camera has an unusually high resolution of 1280 by 1024 and a choice of serial or parallel digital interface for data and control. The other two cameras incorporate digital signal processing (DSP) technology for improved dynamic range, more accurate determination of color, white balance stability, and enhanced contrast of part features against the background. Successful applications and future product development trends are discussed. A brief description of analog and digital image capture devices addresses the most common questions regarding interface requirements within a typical machine vision system overview.

  3. Passive millimeter-wave video camera for aviation applications

    Science.gov (United States)

    Fornaca, Steven W.; Shoucri, Merit; Yujiri, Larry

    1998-07-01

    Passive Millimeter Wave (PMMW) imaging technology offers significant safety benefits to world aviation. Made possible by recent technological breakthroughs, PMMW imaging sensors provide visual-like images of objects under low visibility conditions (e.g., fog, clouds, snow, sandstorms, and smoke) which blind visual and infrared sensors. TRW has developed an advanced, demonstrator version of a PMMW imaging camera that, when front-mounted on an aircraft, gives images of the forward scene at a rate and quality sufficient to enhance aircrew vision and situational awareness under low visibility conditions. Potential aviation uses for a PMMW camera are numerous and include: (1) Enhanced vision for autonomous take-off, landing, and surface operations in Category III weather on Category I and non-precision runways; (2) Enhanced situational awareness during initial and final approach, including Controlled Flight Into Terrain (CFIT) mitigation; (3) Ground traffic control in low visibility; (4) Enhanced airport security. TRW leads a consortium which began flight tests with the demonstration PMMW camera in September 1997. Flight testing will continue in 1998. We discuss the characteristics of PMMW images, the current state of the technology, the integration of the camera with other flight avionics to form an enhanced vision system, and other aviation applications.

  4. Outdoor Markerless Motion Capture With Sparse Handheld Video Cameras.

    Science.gov (United States)

    Wang, Yangang; Liu, Yebin; Tong, Xin; Dai, Qionghai; Tan, Ping

    2017-04-12

    We present a method for outdoor markerless motion capture with sparse handheld video cameras. In the simplest setting, it involves only two mobile phone cameras following the character. This setup maximizes the flexibility of data capture and broadens the applications of motion capture. To solve for the character pose under such challenging settings, we exploit generative motion capture methods and propose a novel model-view consistency that considers both foreground and background in the tracking stage. The background is modeled as a deformable 2D grid, which allows us to compute the background-view consistency for sparse moving cameras. The 3D character pose is tracked with a global-local optimization that minimizes our consistency cost. A novel L1 motion regularizer is also used in the optimization to constrain the solution pose space. The whole process of the proposed method is simple, as frame-by-frame video segmentation is not required. Our method outperforms several alternative methods on various examples demonstrated in the paper.

  5. Simultaneous Camera Path Optimization and Distraction Removal for Improving Amateur Video.

    Science.gov (United States)

    Zhang, Fang-Lue; Wang, Jue; Zhao, Han; Martin, Ralph R; Hu, Shi-Min

    2015-12-01

    A major difference between amateur and professional video lies in the quality of camera paths. Previous work on video stabilization has considered how to improve amateur video by smoothing the camera path. In this paper, we show that additional changes to the camera path can further improve video aesthetics. Our new optimization method achieves multiple simultaneous goals: 1) stabilizing video content over short time scales; 2) ensuring simple and consistent camera paths over longer time scales; and 3) improving scene composition by automatically removing distractions, a common occurrence in amateur video. Our approach uses an L1 camera path optimization framework, extended to handle multiple constraints. Two passes of optimization are used to address both low-level and high-level constraints on the camera path. The experimental and user study results show that our approach outputs video that is perceptually better than the input, or the results of using stabilization only.
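
    An L1 path optimization of this flavour can be prototyped with a convex solver (Python with cvxpy; the weights, the proximity radius, and the single-axis formulation are simplifications of ours, and the paper's distraction-removal constraints are not modelled):

    ```python
    import cvxpy as cp

    def smooth_path(p_orig, radius=20.0, w=(10.0, 1.0, 100.0)):
        """Penalize first/second/third differences of one camera-path
        coordinate (L1 norms favour piecewise-constant/linear/parabolic
        segments) while keeping the path within `radius` px of the original."""
        p = cp.Variable(len(p_orig))
        cost = (w[0] * cp.norm1(cp.diff(p, 1)) +
                w[1] * cp.norm1(cp.diff(p, 2)) +
                w[2] * cp.norm1(cp.diff(p, 3)))
        cp.Problem(cp.Minimize(cost), [cp.abs(p - p_orig) <= radius]).solve()
        return p.value
    ```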

  6. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    Science.gov (United States)

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  7. Scientists Behind the Camera - Increasing Video Documentation in the Field

    Science.gov (United States)

    Thomson, S.; Wolfe, J.

    2013-12-01

    Over the last two years, Skypunch Creative has designed and implemented a number of pilot projects to increase the amount of video captured by scientists in the field. The major barrier to success that we tackled with the pilot projects was the conflict between the time, space, and storage constraints of scientists in the field and the demands of shooting high-quality video. Our pilots involved providing scientists with equipment, varying levels of instruction on shooting in the field, and post-production resources (editing and motion graphics). In each project, the scientific team was provided with cameras (or additional equipment if they owned their own), tripods, and sometimes sound equipment, as well as an external hard drive to return the footage to us. Upon receiving the footage we professionally filmed follow-up interviews and created animations and motion graphics to illustrate their points. We also helped with the distribution of the final product (http://climatescience.tv/2012/05/the-story-of-a-flying-hippo-the-hiaper-pole-to-pole-observation-project/ and http://climatescience.tv/2013/01/bogged-down-in-alaska/). The pilot projects were a success. Most of the scientists returned asking for additional gear and support for future field work. Moving out of the pilot phase, to continue the project, we have produced a 14-page guide for scientists shooting in the field based on lessons learned; it contains key tips and best practice techniques for shooting high-quality footage in the field. We have also expanded the project and are now testing the use of video cameras that can be synced with sensors so that the footage is useful both scientifically and artistically. Extract from A Scientist's Guide to Shooting Video in the Field

  8. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    Science.gov (United States)

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children have been viewed as particularly suited to this aim because cameras have been considered easy and…

  9. Video astronomy on the go using video cameras with small telescopes

    CERN Document Server

    Ashley, Joseph

    2017-01-01

    Author Joseph Ashley explains video astronomy's many benefits in this comprehensive reference guide for amateurs. Video astronomy offers a wonderful way to see objects in far greater detail than is possible through an eyepiece, and the ability to use the modern, entry-level video camera to image deep space objects is a wonderful development for urban astronomers in particular, as it helps sidestep the issue of light pollution. The author addresses both the positive attributes of these cameras for deep space imaging as well as the limitations, such as amp glow. The equipment needed for imaging as well as how it is configured is identified with hook-up diagrams and photographs. Imaging techniques are discussed together with image processing (stacking and image enhancement). Video astronomy has evolved to offer great results and great ease of use, and both novices and more experienced amateurs can use this book to find the set-up that works best for them. Flexible and portable, they open up a whole new way...

  10. Underwater video enhancement using multi-camera super-resolution

    Science.gov (United States)

    Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.

    2017-12-01

Image spatial resolution is critical in several fields, such as medicine, communications, satellite imaging, and underwater applications. While a large variety of techniques for image restoration and enhancement have been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that enhances the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
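Both metrics named in the abstract are standard and easy to reproduce; a minimal sketch, assuming 8-bit grayscale frames and scikit-image for SSIM (the frames below are synthetic placeholders, not the paper's sequences).

```python
import numpy as np
from skimage.metrics import structural_similarity  # assumes scikit-image is installed

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two equally sized frames."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

# Example: compare an "enhanced" frame against its reference frame.
ref = np.random.randint(0, 256, (480, 640), dtype=np.uint8)               # placeholder
enh = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print("PSNR:", psnr(ref, enh))
print("SSIM:", structural_similarity(ref, enh))
```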

  11. Estimation of Rice Crop Quality and Harvest Amount from Helicopter Mounted NIR Camera Data and Remote Sensing Satellite Data

    OpenAIRE

    Kohei Arai; Masanoori Sakashita; Osamu Shigetomi; Yuko Miura

    2015-01-01

Estimation of rice crop quality and harvest amount in paddy fields with different rice stump densities is made using a helicopter-mounted NIR camera and remote sensing satellite data. Using an intensively managed study site of rice paddy fields, estimation of protein content in the rice crop and nitrogen content in rice leaves through regression analysis with the Normalized Difference Vegetation Index (NDVI) derived from a camera mounted on a radio-control helicopter is made together with ha...
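The regression described here can be sketched in a few lines; the NDVI definition (NIR - Red)/(NIR + Red) is standard, while the band arrays and protein values below are hypothetical placeholders rather than the study's data.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, computed per pixel."""
    nir, red = nir.astype(np.float64), red.astype(np.float64)
    return (nir - red) / np.maximum(nir + red, 1e-9)  # guard against division by zero

# Hypothetical per-plot mean NDVI values and measured protein contents (%):
mean_ndvi = np.array([0.62, 0.68, 0.71, 0.74, 0.80])
protein = np.array([6.1, 6.4, 6.8, 7.0, 7.5])

# Simple linear regression of the kind the abstract describes.
slope, intercept = np.polyfit(mean_ndvi, protein, 1)
print(f"protein ~= {slope:.2f} * NDVI + {intercept:.2f}")
```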

  12. Frequency Identification of Vibration Signals Using Video Camera Image Data

    Directory of Open Access Journals (Sweden)

    Chia-Hung Wu

    2012-10-01

This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture the most dominant modes of a vibration signal, but may also record non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are predicted and excluded. Two experimental designs, involving an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera (for instance, 256 levels) was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibrating system to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera induces false modes above a critical frequency of 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were shown to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes lie below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
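A minimal sketch of the two steps described: summing a small pixel region to extend the effective gray-level resolution, then locating the dominant frequency; `frames`, the window size, and the frame rate are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def roi_signal(frames, center, half=5):
    """Sum gray levels in a small window around the point of interest, frame by
    frame; summing N pixels extends the effective gray-level resolution beyond
    the camera's native 8 bits."""
    r, c = center
    return np.array([f[r - half:r + half + 1, c - half:c + half + 1].sum()
                     for f in frames], dtype=np.float64)

def dominant_frequency(signal, fps):
    """Peak of the FFT magnitude spectrum; frequencies above fps/2 fold back,
    producing the non-physical modes the paper predicts and excludes."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    return freqs[spectrum.argmax()]

# Usage (placeholder data): frames = list of 2-D grayscale arrays at 60 fps
frames = [np.random.randint(0, 256, (120, 160)) for _ in range(256)]
print(dominant_frequency(roi_signal(frames, center=(60, 80)), fps=60.0))
```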

  13. A comparison of camera trap and permanent recording video camera efficiency in wildlife underpasses.

    Science.gov (United States)

    Jumeau, Jonathan; Petrod, Lana; Handrich, Yves

    2017-09-01

    In the current context of biodiversity loss through habitat fragmentation, the effectiveness of wildlife crossings, installed at great expense as compensatory measures, is of vital importance for ecological and socio-economic actors. The evaluation of these structures is directly impacted by the efficiency of monitoring tools (camera traps…), which are used to assess the effectiveness of these crossings by observing the animals that use them. The aim of this study was to quantify the efficiency of camera traps in a wildlife crossing evaluation. Six permanent recording video systems sharing the same field of view as six Reconyx HC600 camera traps installed in three wildlife underpasses were used to assess the exact proportion of missed events (event being the presence of an animal within the field of view), and the error rate concerning underpass crossing behavior (defined as either Entry or Refusal). A sequence of photographs was triggered by either animals (true trigger) or artefacts (false trigger). We quantified the number of false triggers that had actually been caused by animals that were not visible on the images ("false" false triggers). Camera traps failed to record 43.6% of small mammal events (voles, mice, shrews, etc.) and 17% of medium-sized mammal events. The type of crossing behavior (Entry or Refusal) was incorrectly assessed in 40.1% of events, with a higher error rate for entries than for refusals. Among the 3.8% of false triggers, 85% of them were "false" false triggers. This study indicates a global underestimation of the effectiveness of wildlife crossings for small mammals. Means to improve the efficiency are discussed.

  14. QHY (5L-II-M) CCD camera for video meteor observation

    Science.gov (United States)

    Korec, M.

    2015-01-01

A new digital camera and lens have been tested for video meteor observation. A Tamron M13VG308 lens combined with a QHY 5L-II-M digital camera proved to be the best combination. Test observations have shown this setup to be superior to the best analog camera, the Watec 902H2 Ultimate.

  15. Contact freezing observed with a high speed video camera

    Science.gov (United States)

    Hoffmann, Nadine; Koch, Michael; Kiselev, Alexei; Leisner, Thomas

    2017-04-01

Freezing of supercooled cloud droplets on collision with an ice nucleating particle (INP) has been considered one of the most effective heterogeneous freezing mechanisms. Potentially, it could play an important role in the rapid glaciation of a mixed-phase cloud, especially if coupled with an ice multiplication mechanism active at moderate subzero temperatures. The necessary condition for such coupling would be, among others, the presence of very efficient INPs capable of inducing ice nucleation of supercooled drizzle droplets in the temperature range of -5°C to -20°C. Some mineral dust particles (K-feldspar) and biogenic INPs (pseudomonas bacteria, birch pollen) have recently been identified as such very efficient INPs. However, when observed with a high-speed video (HSV) camera, the contact nucleation induced by these two classes of INPs exhibits very different behavior. Whereas bacterial INPs can induce freezing within a millisecond of initial contact with supercooled water, birch pollen needs much more time to initiate freezing. The mineral dust particles seem to induce ice nucleation faster than birch pollen but slower than bacterial INPs. In this contribution we show HSV records of individual supercooled droplets suspended in an electrodynamic balance and colliding with airborne INPs of various types. The HSV camera is coupled with a long-working-distance microscope, allowing us to observe the contact nucleation of ice at very high spatial and temporal resolution. The average time needed to initiate freezing has been measured for each INP species. This time does not necessarily correlate with the contact freezing efficiency of the ice nucleating particles. We discuss possible mechanisms explaining this behavior and potential implications for future ice nucleation research.

  16. Simultaneous monitoring of a collapsing landslide with video cameras

    Directory of Open Access Journals (Sweden)

    K. Fujisawa

    2008-01-01

Effective countermeasures and risk management to reduce landslide hazards require a full understanding of the processes of collapsing landslides. While the processes are generally estimated from the features of debris deposits after collapse, simultaneous monitoring during collapse provides more insights into the processes. Such monitoring, however, is usually very difficult, because it is rarely possible to predict when a collapse will occur. This study introduces a rare case in which a collapsing landslide (150 m in width and 135 m in height) was filmed with three video cameras in Higashi-Yokoyama, Gifu Prefecture, Japan. The cameras were set up in the front and on the right and left sides of the slide in May 2006, one month after a series of small slope failures in the toe and the formation of cracks on the head indicated that a collapse was imminent.

    The filmed images showed that the landslide collapse started from rock falls and slope failures occurring mainly around the margin, that is, the head, sides and toe. These rock falls and slope failures, which were individually counted on the screen, increased with time. Analyzing the images, five of the failures were estimated to have each produced more than 1000 m3 of debris, and the landslide collapsed with several surface failures accompanied by a toppling movement. The manner of the collapse suggested that the slip surface initially remained on the upper slope, and then extended down the slope as the excessive internal stress shifted downwards. Image analysis, together with field measurements using a ground-based laser scanner after the collapse, indicated that the landslide produced a total of 50 000 m3 of debris.

    As described above, simultaneous monitoring provides valuable information about landslide processes. Further development of monitoring techniques will help clarify landslide processes qualitatively as well as quantitatively.

  17. Automatic Level Control for Video Cameras towards HDR Techniques

    Directory of Open Access Journals (Sweden)

de With, Peter H. N.

    2010-01-01

We give a comprehensive overview of the complete exposure processing chain for video cameras. For each step of the automatic exposure algorithm we discuss some classical solutions and propose improvements or new alternatives. We start by explaining exposure metering methods, describing the types of signals that are used as scene content descriptors as well as means to utilize these descriptors. We also discuss different exposure control types used for the control of the lens, the integration time of the sensor, and the gain, such as PID control and precalculated control based on the camera response function, and propose a new recursive control type that matches the underlying image formation model. Then, a description of the commonly used serial control strategy for lens, sensor exposure time, and gain is presented, followed by a proposal of a new parallel control solution that integrates well with the tone mapping and enhancement part of the image pipeline. The parallel control strategy enables faster and smoother control and facilitates optimally filling the dynamic range of the sensor to improve the SNR and image contrast, while avoiding signal clipping. This is achieved by the proposed special control modes used for better display and correct exposure of both low-dynamic-range and high-dynamic-range images. To overcome the inherent problems of the limited dynamic range of capturing devices we discuss a paradigm of multiple exposure techniques. Using these techniques we can enable a correct rendering of a difficult class of high-dynamic-range input scenes. However, multiple exposure techniques bring several challenges, especially in the presence of motion and artificial light sources such as fluorescent lights. In particular, false colors and light-flickering problems are described. After briefly discussing some known possible solutions for the motion problem, we focus on solving the fluorescence-light problem. Thereby, we propose an algorithm for
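The abstract breaks off mid-sentence, but the PID-style exposure control it surveys can be illustrated with a short sketch; the gains, target luminance, and exposure limits below are placeholders, not values from the paper.

```python
class PIDExposure:
    """Drive integration time so the mean frame luminance approaches a target."""

    def __init__(self, kp=0.02, ki=0.005, kd=0.0, target=128.0):
        self.kp, self.ki, self.kd, self.target = kp, ki, kd, target
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, frame_mean: float, exposure_ms: float) -> float:
        error = self.target - frame_mean          # how far from well-exposed
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        step = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp to an assumed sensor exposure range (0.01-33 ms at 30 fps).
        return min(max(exposure_ms + step, 0.01), 33.0)

# Usage per frame: exposure = ctl.update(frame.mean(), exposure)
ctl = PIDExposure()
print(ctl.update(frame_mean=90.0, exposure_ms=10.0))
```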

  18. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    Directory of Open Access Journals (Sweden)

    Semi Jeon

    2017-02-01

Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistance systems, and visual surveillance systems.
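A minimal sketch of the l1-optimized path idea, assuming the per-frame camera motion has been reduced to a one-dimensional parameter sequence and that the cvxpy library is available; the paper's method additionally handles full homographies and higher-order differences.

```python
import numpy as np
import cvxpy as cp

def smooth_path(raw_path, lam=10.0):
    """Fit a smooth camera path p to the estimated path c by trading data
    fidelity against an l1 penalty on first differences (total variation)."""
    c = np.asarray(raw_path, dtype=float)
    p = cp.Variable(c.size)
    objective = cp.Minimize(cp.sum_squares(p - c) + lam * cp.norm1(cp.diff(p)))
    cp.Problem(objective).solve()
    return p.value

# Usage: a jittery horizontal-translation track, smoothed for rendering.
raw = np.cumsum(np.random.normal(0, 2.0, 120))   # placeholder shaky path
print(smooth_path(raw)[:5])
```

The l1 penalty favours piecewise-constant motion, so the stabilized path follows intentional panning while flattening high-frequency shake.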

  19. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    Science.gov (United States)

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistance systems, and visual surveillance systems. PMID:28208622

  20. Feasibility of an endotracheal tube-mounted camera for percutaneous dilatational tracheostomy.

    Science.gov (United States)

    Grensemann, J; Eichler, L; Hopf, S; Jarczak, D; Simon, M; Kluge, S

    2017-07-01

    Percutaneous dilatational tracheostomy (PDT) in critically ill patients is often led by optical guidance with a bronchoscope. This is not without its disadvantages. Therefore, we aimed to study the feasibility of a recently introduced endotracheal tube-mounted camera (VivaSight™-SL, ET View, Misgav, Israel) in the guidance of PDT. We studied 10 critically ill patients who received PDT with a VivaSight-SL tube that was inserted prior to tracheostomy for optical guidance. Visualization of the tracheal structures (i.e., identification and monitoring of the thyroid, cricoid, and tracheal cartilage and the posterior wall) and the quality of ventilation (before puncture and during the tracheostomy) were rated on four-point Likert scales. Respiratory variables were recorded, and blood gases were sampled before the interventions, before the puncture and before the insertion of the tracheal cannula. Visualization of the tracheal landmarks was rated as 'very good' or 'good' in all but one case. Monitoring during the puncture and dilatation was also rated as 'very good' or 'good' in all but one. In the cases that were rated 'difficult', the visualization and monitoring of the posterior wall of the trachea were the main concerns. No changes in the respiratory variables or blood gases occurred between the puncture and the insertion of the tracheal cannula. Percutaneous dilatational tracheostomy with optical guidance from a tube-mounted camera is feasible. Further studies comparing the camera tube with bronchoscopy as the standard approach should be performed. © 2017 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  1. Development of a 3D Flash LADAR Video Camera for Entry, Descent and Landing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced Scientific Concepts, Inc. (ASC) has developed a 128 x 128 frame, 3D Flash LADAR video camera capable of a 30 Hz frame rate. Because Flash LADAR captures an...

  2. Development of a 3D Flash LADAR Video Camera for Entry, Descent, and Landing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced Scientific Concepts, Inc. (ASC) has developed a 128 x 128 frame, 3D Flash LADAR video camera which produces 3-D point clouds at 30 Hz. Flash LADAR captures...

  3. Feasibility of Using Video Camera for Automated Enforcement on Red-Light Running and Managed Lanes.

    Science.gov (United States)

    2009-12-25

    The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and HOV occupancy requirement using video cameras in Nevada. This objective was a...

  4. AR Supporting System for Pool Games Using a Camera-Mounted Handheld Display

    Directory of Open Access Journals (Sweden)

    Hideaki Uchiyama

    2008-01-01

This paper presents a pool supporting system with a camera-mounted handheld display based on augmented reality technology. Using our system, users can get supporting information once they capture the pool table. They can also watch visual aids through the display while they are capturing the table. First, our system estimates ball positions on the table from one image taken from an arbitrary viewpoint. Next, our system suggests several possible shots, taking the subsequent shot into account. Finally, our system presents visual aids such as the shooting direction and ball behavior. The main purpose of our system is to estimate and analyze the distribution of balls and to present visual aids. Our system is implemented without special equipment such as a magnetic sensor or artificial markers. For evaluating our system, the accuracy of ball positions and the effectiveness of our supporting information are presented.
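Recovering table coordinates from a single arbitrary-viewpoint image is naturally expressed as a plane homography; a sketch assuming OpenCV, with hypothetical corner pixels and table dimensions (the paper's marker-free table detection is not shown).

```python
import cv2
import numpy as np

# Image corners of the table rim (hypothetical pixel coordinates from one photo)
img_corners = np.array([[102, 388], [538, 374], [598, 120], [60, 132]], dtype=np.float32)
# Corresponding table coordinates in metres (assumed 9-ft playing surface)
table_corners = np.array([[0, 0], [2.54, 0], [2.54, 1.27], [0, 1.27]], dtype=np.float32)

H, _ = cv2.findHomography(img_corners, table_corners)

def ball_to_table(ball_px):
    """Map a detected ball centre from image pixels to table coordinates."""
    pt = np.array([[ball_px]], dtype=np.float32)       # shape (1, 1, 2) for OpenCV
    return cv2.perspectiveTransform(pt, H)[0, 0]

print(ball_to_table((320, 250)))   # metres from the assumed table origin
```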

  5. Using a Video Camera to Measure the Radius of the Earth

    Science.gov (United States)

    Carroll, Joshua; Hughes, Stephen

    2013-01-01

    A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…
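A hedged worked example of the underlying geometry: a shadow that has risen a height h corresponds to an Earth rotation angle θ with h ≈ Rθ²/2, so R ≈ 2h/θ²; the numbers are illustrative, and corrections for latitude and solar declination are ignored.

```python
import math

h = 50.0                      # shadow rise up the building, metres (illustrative)
t = 54.0                      # time taken for that rise, seconds (illustrative)
omega = 2 * math.pi / 86164   # Earth's sidereal rotation rate, rad/s

theta = omega * t             # rotation angle during the timed interval
R = 2 * h / theta ** 2        # invert the small-angle relation h ~ R * theta^2 / 2
print(f"Estimated Earth radius: {R / 1000:.0f} km")   # ~6,450 km vs. true 6,371 km
```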

  6. Performance evaluation of a two detector camera for real-time video

    NARCIS (Netherlands)

    Lochocki, Benjamin; Gambín-regadera, Adrián; Artal, Pablo

    2016-01-01

    Single pixel imaging can be the preferred method over traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single pixel imaging cannot compete with the framerates of conventional cameras, especially when

  7. Still-Video Photography: Tomorrow's Electronic Cameras in the Hands of Today's Photojournalists.

    Science.gov (United States)

    Foss, Kurt; Kahan, Robert S.

    This paper examines the still-video camera and its potential impact by looking at recent experiments and by gathering information from some of the few people knowledgeable about the new technology. The paper briefly traces the evolution of the tools and processes of still-video photography, examining how photographers and their work have been…

  8. Digital video technology and production 101: lights, camera, action.

    Science.gov (United States)

    Elliot, Diane L; Goldberg, Linn; Goldberg, Michael J

    2014-01-01

    Videos are powerful tools for enhancing the reach and effectiveness of health promotion programs. They can be used for program promotion and recruitment, for training program implementation staff/volunteers, and as elements of an intervention. Although certain brief videos may be produced without technical assistance, others often require collaboration and contracting with professional videographers. To get practitioners started and to facilitate interactions with professional videographers, this Tool includes a guide to the jargon of video production and suggestions for how to integrate videos into health education and promotion work. For each type of video, production principles and issues to consider when working with a professional videographer are provided. The Tool also includes links to examples in each category of video applications to health promotion.

  9. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    Science.gov (United States)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  10. Use of body-mounted cameras to enhance data collection: an evaluation of two arthropod sampling techniques

    Science.gov (United States)

    A study was conducted that compared the effectiveness of a sweepnet versus a vacuum suction device for collecting arthropods in cotton. The study differs from previous research in that body-mounted action cameras (B-MACs) were used to record the activity of the person conducting the collections. The...

  11. A unified framework for capturing facial images in video surveillance systems using cooperative camera system

    Science.gov (United States)

    Chan, Fai; Moon, Yiu-Sang; Chen, Jiansheng; Ma, Yiu-Kwan; Tsang, Wai-Hung; Fu, Kah-Kuen

    2008-04-01

Low-resolution and unsharp facial images are always captured from surveillance videos because of long human-camera distances and human movements. Previous works addressed this problem by using an active camera to capture close-up facial images, but without considering human movements and the mechanical delays of the active camera. In this paper, we propose a unified framework to capture facial images in video surveillance systems by using one static and one active camera in a cooperative manner. Human faces are first located by a skin-color-based real-time face detection algorithm. A stereo camera model is also employed to approximate the human face location and velocity with respect to the active camera. Given the mechanical delays of the active camera, the position of a target face after a given delay can be estimated using a Human-Camera Synchronization Model. By controlling the active camera with the corresponding amount of pan, tilt, and zoom, a clear close-up facial image of a moving human can then be captured. We built the proposed system in an 8.4-meter indoor corridor. Results show that the proposed stereo camera configuration can locate faces with an average error of 3%. In addition, it is capable of capturing clear facial images of a walking human on the first attempt in 90% of the test cases.
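A hedged sketch of the delay-compensation idea: predict the face position over the known mechanical delay, then convert it to pan/tilt angles. The constant-velocity assumption and camera-centred coordinate convention are illustrative; the paper's Human-Camera Synchronization Model is not reproduced.

```python
import numpy as np

def predict_target(position, velocity, mech_delay):
    """Predict where the face will be once the active camera finishes moving,
    assuming constant velocity over the mechanical delay (seconds)."""
    return np.asarray(position, float) + np.asarray(velocity, float) * mech_delay

def pan_tilt_for(point, camera_pos):
    """Convert a 3D target point (metres, camera-centred axes assumed: x right,
    y up, z forward) into pan/tilt angles for the active camera."""
    x, y, z = np.asarray(point, float) - np.asarray(camera_pos, float)
    pan = np.degrees(np.arctan2(x, z))
    tilt = np.degrees(np.arctan2(y, np.hypot(x, z)))
    return pan, tilt

# Usage: face 4 m ahead, walking 1 m/s to the right, 0.3 s mechanical delay.
target = predict_target([0.5, 1.6, 4.0], [1.0, 0.0, 0.0], 0.3)
print(pan_tilt_for(target, camera_pos=[0.0, 2.5, 0.0]))
```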

  12. A Video Camera Road Sign System of the Early Warning from Collision with the Wild Animals

    Directory of Open Access Journals (Sweden)

    Matuska Slavomir

    2016-05-01

This paper proposes a camera-based early-warning road sign system that can help avoid vehicle collisions with wild animals. The system consists of camera modules placed along a chosen route and intelligent road signs. Each camera module consists of a camera device and a computing unit. The computing unit captures the video stream from the camera and runs object detection algorithms. Machine learning algorithms are then used to classify the moving objects. If a moving object is classified as an animal that may endanger vehicles, a warning is displayed on the intelligent road signs.
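A minimal sketch of the detection stage, using OpenCV's MOG2 background subtractor; the stream name and area threshold are placeholders, and the machine-learning classifier from the abstract is deliberately left out since the record does not specify it.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("roadside.mp4")   # hypothetical camera-module stream
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = np.ones((3, 3), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                          # moving pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 2000:    # large moving object: candidate animal
            x, y, w, h = cv2.boundingRect(c)
            crop = frame[y:y + h, x:x + w]
            # The classification stage (e.g., a trained CNN or SVM) would run
            # on `crop` here before any warning is sent to the road signs.
            print("candidate animal at", (x, y, w, h))
cap.release()
```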

  13. video115_0403 -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  14. video114_0402c -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  15. video114_0402b -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  16. Modeling camera orientation and 3D structure from a sequence of images taken by a perambulating commercial video camera

    Science.gov (United States)

    M-Rouhani, Behrouz; Anderson, James A. D. W.

    1997-04-01

In this paper we report the degree of reliability of image sequences taken by off-the-shelf TV cameras for modeling camera rotation and reconstructing 3D structure using computer vision techniques. This is done in spite of the fact that computer vision systems usually use imaging devices that are specifically designed for human vision. Our scenario consists of a static scene and a mobile camera moving through the scene. The scene is any long axial building dominated by features along the three principal orientations and with at least one wall containing prominent repetitive planar features such as doors, windows, bricks, etc. The camera is an ordinary commercial camcorder moving along the axial axis of the scene and is allowed to rotate freely within the range +/- 10 degrees in all directions. This makes it possible for the camera to be held by a walking non-professional camera operator with a normal gait, or to be mounted on a mobile robot. The system has been tested successfully on sequences of images of a variety of structured, but fairly cluttered, scenes taken by different walking camera operators. The potential application areas of the system include medicine, robotics, and photogrammetry.

  17. Quality Analysis of Massive High-Definition Video Streaming in Two-Tiered Embedded Camera-Sensing Systems

    OpenAIRE

    Joongheon Kim; Eun-Seok Ryu

    2014-01-01

    This paper presents the quality analysis results of high-definition video streaming in two-tiered camera sensor network applications. In the camera-sensing system, multiple cameras sense visual scenes in their target fields and transmit the video streams via IEEE 802.15.3c multigigabit wireless links. However, the wireless transmission introduces interferences to the other links. This paper analyzes the capacity degradation due to the interference impacts from the camera-sensing nodes to the ...

  18. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    Science.gov (United States)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  19. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  20. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    Science.gov (United States)

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
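One common way to realize the foot-to-head calibration cue is to intersect the foot-to-head lines of detected pedestrians in homogeneous coordinates; a minimal sketch under that assumption (the paper's full 3D human model pipeline is not reproduced).

```python
import numpy as np

def vertical_vanishing_point(foot_pts, head_pts):
    """Estimate the vertical vanishing point as the least-squares intersection
    of foot-to-head lines, in homogeneous image coordinates."""
    lines = []
    for f, h in zip(foot_pts, head_pts):
        l = np.cross([f[0], f[1], 1.0], [h[0], h[1], 1.0])  # line through the pair
        lines.append(l / np.linalg.norm(l[:2]))             # normalize line scale
    L = np.vstack(lines)
    # The point v minimizing ||L v|| is the right singular vector with the
    # smallest singular value (degenerates if all lines are parallel).
    _, _, vt = np.linalg.svd(L)
    v = vt[-1]
    return v[:2] / v[2]

# Usage with hypothetical foot/head detections from several frames:
feet = [(100, 400), (300, 420), (500, 380)]
heads = [(102, 250), (297, 260), (505, 235)]
print(vertical_vanishing_point(feet, heads))
```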

  1. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structures play a substantial role in present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and the cameras themselves. This algorithm will subsequently be implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that takes account of their mutual positioning and the compatibility of tasks. The project objective is to develop the principal elements of an algorithm for the recognition of a moving object detected by several cameras. The images obtained by different cameras will be processed, and parameters of motion identified, to develop a table of possible route options. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists in assessing the complexity of the camera placement algorithm, identifying cases of inaccurate algorithm implementation, and formulating supplementary requirements and input data by means of intersecting the sectors covered by neighbouring cameras. The project also contemplates the identification of potential problems in the course of development of a physical security and monitoring system at the design and testing stages. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and premises with irregular dimensions. The
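Camera placement problems of this kind are commonly cast as set cover; the greedy sketch below assumes coverage sets per candidate position have been precomputed from the building geometry, and is a generic formulation rather than the authors' exact algorithm.

```python
def greedy_placement(candidates, required, max_cameras):
    """Greedy set cover: repeatedly add the candidate camera position whose
    field of view covers the most still-uncovered cells of the floor plan.
    `candidates` maps a position id to the set of cells it sees."""
    uncovered = set(required)
    chosen = []
    while uncovered and len(chosen) < max_cameras:
        best = max(candidates, key=lambda p: len(candidates[p] & uncovered))
        if not candidates[best] & uncovered:
            break  # nothing left that any remaining camera can see
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen, uncovered

# Usage with a toy floor plan of cells 0-5 and three candidate positions:
cams = {"door": {0, 1, 2}, "corner": {2, 3, 4}, "hall": {4, 5}}
print(greedy_placement(cams, required=range(6), max_cameras=3))
```

Greedy set cover is not optimal in general, but it carries a well-known logarithmic approximation guarantee, which is why it is a common baseline for coverage planning.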

  2. Camera Networks The Acquisition and Analysis of Videos over Wide Areas

    CERN Document Server

    Roy-Chowdhury, Amit K

    2012-01-01

    As networks of video cameras are installed in many applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process. Some of them are listed below: - Traditional computer vision challenges in tracking and recognition, robustness to pose, illumination, occlusion, clutter, recognition of objects, and activities; - Aggregating local information for wide

  3. GoPro Hero Cameras for Creation of a Three-Dimensional, Educational, Neurointerventional Video.

    Science.gov (United States)

    Park, Min S; Brock, Andrea; Mortimer, Vance; Taussky, Philipp; Couldwell, William T; Quigley, Edward

    2017-10-01

    Neurointerventional education relies on an apprenticeship model, with the trainee observing and participating in procedures with the guidance of a mentor. While educational videos are becoming prevalent in surgical cases, there is a dearth of comparable educational material for trainees in neurointerventional programs. We sought to create a high-quality, three-dimensional video of a routine diagnostic cerebral angiogram for use as an educational tool. A diagnostic cerebral angiogram was recorded using two GoPro HERO 3+ cameras with the Dual HERO System to capture the proceduralist's hands during the case. This video was edited with recordings from the video monitors to create a real-time three-dimensional video of both the actions of the neurointerventionalist and the resulting wire/catheter movements. The final edited video, in either two or three dimensions, can serve as another instructional tool for the training of residents and/or fellows. Additional videos can be created in a similar fashion of more complicated neurointerventional cases. The GoPro HERO 3+ camera and Dual HERO System can be used to create educational videos of neurointerventional procedures.

  4. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    Science.gov (United States)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  5. Online coupled camera pose estimation and dense reconstruction from video

    Science.gov (United States)

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
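The search for a consistent projection of the 3D model onto the image is closely related to RANSAC-based PnP; a sketch assuming OpenCV, with placeholder correspondences and intrinsics rather than the patent's exact procedure.

```python
import cv2
import numpy as np

# Placeholder 3D model feature points and matched 2D image feature points.
model_pts = np.random.rand(50, 3).astype(np.float32)
img_pts = (np.random.rand(50, 2) * (640, 480)).astype(np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

# RANSAC keeps the subset of correspondences consistent with one projection,
# mirroring the abstract's consistent-projection search.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(model_pts, img_pts, K, None)
if ok and inliers is not None:
    print("pose (rvec, tvec):", rvec.ravel(), tvec.ravel())
    print("consistent matches:", len(inliers))
```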

  6. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    Science.gov (United States)

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
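The folding described by the Nyquist theorem can be made concrete with one line of arithmetic; a minimal sketch, assuming a single marked spoke on the wheel (real multi-spoke wheels alias at multiples of the spoke symmetry).

```python
def apparent_rotation_hz(true_hz: float, fps: float) -> float:
    """Fold the true rotation rate into the observable band (-fps/2, fps/2];
    negative values mean the wheel appears to spin backwards."""
    return (true_hz + fps / 2) % fps - fps / 2

# At 25 fps: 24 Hz appears as -1 Hz (slow reverse spin), 25 Hz appears frozen.
for true_hz in (5, 24, 25, 30):
    print(true_hz, "Hz ->", apparent_rotation_hz(true_hz, fps=25.0), "Hz")
```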

  7. Optimization of radiation sensors for a passive terahertz video camera for security applications

    NARCIS (Netherlands)

    Zieger, G.J.M.

    2014-01-01

    A passive terahertz video camera allows for fast security screenings from distances of several meters. It avoids irradiation or the impressions of nakedness, which oftentimes cause embarrassment and trepidation of the concerned persons. This work describes the optimization of highly sensitive

  8. Video content analysis on body-worn cameras for retrospective investigation

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; ter Haar, Frank B.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Burghouts, Gertjan J.; Wijn, Remco; van den Broek, Sebastiaan P.; van Rest, Jeroen H. C.

    2015-10-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals' memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.

  9. Localization of cask and plug remote handling system in ITER using multiple video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, João, E-mail: jftferreira@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2013-10-15

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building.

  10. The effects of camera jitter for background subtraction algorithms on fused infrared-visible video streams

    Science.gov (United States)

    Becker, Stefan; Scherer-Negenborn, Norbert; Thakkar, Pooja; Hübner, Wolfgang; Arens, Michael

    2016-10-01

This paper is a continuation of the work of Becker et al. [1]. In their work, they analyzed the robustness of various background subtraction algorithms on fused video streams originating from visible and infrared cameras. In order to cover a broader range of background subtraction applications, we show the effects of fusing infrared-visible video streams from vibrating cameras on a large set of background subtraction algorithms. The effectiveness is quantitatively analyzed on recorded data of a typical outdoor sequence with a fine-grained and accurate annotation of the images. Thereby, we identify approaches which can benefit from fused sensor signals with camera jitter. Finally, conclusions are given on which fusion strategies should be preferred under such conditions.

  11. Hardware-based smart camera for recovering high dynamic range video from multiple exposures

    Science.gov (United States)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2014-10-01

    In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed exposure ones. A real-time hardware implementation of the HDR technique that shows more details both in dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capturing and HDR video processing from three exposures. What is new in our work is shown through the following points: HDR video capture through multiple exposure control, HDR memory management, HDR frame generation, and representation under a hardware context. Our camera achieves a real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through an experimental result. Applications of this HDR smart camera include the movie industry, the mass-consumer market, military, automotive industry, and surveillance.
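The merge step of such a multiple-exposure pipeline can be sketched in a few lines, assuming a linear sensor response and three 8-bit frames with known exposure times; the hat weighting is a standard Debevec-style choice, not necessarily the camera's hardware implementation.

```python
import numpy as np

def fuse_hdr(frames, exposure_times):
    """Merge differently exposed LDR frames into a radiance estimate using a
    triangle weighting that favours mid-range pixels (linear response assumed)."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        z = img.astype(np.float64)
        w = np.minimum(z, 255.0 - z) + 1e-3   # hat weighting over [0, 255]
        acc += w * (z / t)                    # radiance ~ pixel value / exposure
        wsum += w
    return acc / wsum

# Usage with three placeholder exposures of the same scene (times in seconds):
frames = [np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in range(3)]
radiance = fuse_hdr(frames, exposure_times=[1 / 480, 1 / 120, 1 / 30])
print(radiance.min(), radiance.max())
```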

  12. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to perform motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
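A minimal sketch of exhaustive block matching of the kind used for the motion estimation step, assuming two grayscale frames as NumPy arrays; the block size, search range, and SAD criterion are standard choices, and the surrounding CS recovery loop is not shown.

```python
import numpy as np

def block_match(prev, curr, block=16, search=8):
    """For each block of the current frame, find the displacement within
    +/-search pixels that minimises the sum of absolute differences (SAD)
    against the previous frame; returns {(block_y, block_x): (dy, dx)}."""
    h, w = curr.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            tgt = curr[by:by + block, bx:bx + block].astype(np.int32)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(prev[y:y + block, x:x + block]
                                     .astype(np.int32) - tgt).sum()
                        if best is None or sad < best:
                            best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors

# Usage with two placeholder frames:
prev = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
curr = np.roll(prev, (2, 3), axis=(0, 1))    # shifted copy: expect (2, 3) vectors
print(block_match(prev, curr)[(16, 16)])
```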

  13. A novel method to reduce time investment when processing videos from camera trap studies.

    Directory of Open Access Journals (Sweden)

    Kristijn R R Swinnen

Camera traps have proven very useful in ecological, conservation, and behavioral research. Camera traps non-invasively record the presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols did not evolve as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species, but instead were empty recordings or recordings of other species (together, non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch them, in order to reduce workload. Discrimination between recordings of the target species and non-target recordings was based on detecting variation (changes in pixel values from frame to frame) in the recordings. Because of the size of the target species, we expected that recordings with the target species would contain on average much more movement than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values, and that environmental conditions and filter methods influence the amount of non-target recordings that can be identified and discarded. By allowing a loss of 5% to 20% of recordings containing the target species, in ideal circumstances, 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step to the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, using it in different contexts across the globe, on both videos and
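The frame-to-frame pixel-variation filter can be approximated with plain frame differencing; a sketch assuming OpenCV, with the sampling step and thresholds as placeholders to be tuned per site, in line with the paper's finding that environmental conditions matter.

```python
import cv2
import numpy as np

def motion_score(path, step=5, thresh=25):
    """Fraction of sampled frame pairs showing substantial pixel change; clips
    of a large animal such as a beaver should score far higher than empty
    triggers or small passing animals (threshold values are placeholders)."""
    cap = cv2.VideoCapture(path)
    prev, scores, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None:
                changed = (cv2.absdiff(gray, prev) > thresh).mean()
                scores.append(changed)
            prev = gray
        idx += 1
    cap.release()
    return float(np.mean(scores)) if scores else 0.0

# Keep a clip for manual review only if it shows enough movement:
# if motion_score("clip.mp4") > 0.01: ...
```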

  14. A digital underwater video camera system for aquatic research in regulated rivers

    Science.gov (United States)

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.

  15. Video camera system for locating bullet holes in targets at a ballistics tunnel

    Science.gov (United States)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-01-01

A system consisting of a single charge-coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support the development of wind-resistant, ultra-match ammunition. The system was designed to take data rapidly (10 s between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target, and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.

  16. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    Energy Technology Data Exchange (ETDEWEB)

    Strehlow, J.P.

    1994-08-24

A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of tank 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  17. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  18. DESIGN OF CAMERA MOUNT AND ITS APPLICATION FOR MONITORING MACHINING PROCESS

    Directory of Open Access Journals (Sweden)

    Nadežda Čuboňová

    2015-05-01

The article deals with the problem of holding a scanning device (a GoPro camera) in the vicinity of the EMCO Concept MILL 105 milling machine; the practical part covers the design and production of the fixture. The proposal of the fixture includes the best placement of the fixture within the milling area. On this basis, individual variants of the solution are elaborated. The best variant for holding the camera was selected, and the fixture was produced experimentally on an Easy 3D Maker 3D printer. Fixture functionality was verified on the milling machine.

  19. Design, analysis, and testing of kinematic mount for astronomical observation instrument used in space camera

    Science.gov (United States)

    An, Mingxin; Zhang, Lihao; Xu, Shuyan; Dong, Jihong

    2016-11-01

    A statically determinate kinematic mount structure is designed for an astronomical observation instrument. The basic principle of the proposed kinematic mount is introduced in detail, including the design principle, its structure, and its degrees of freedom. The compliance equations for the single-axis right circle flexure hinge are deduced, and mathematical models of the compliances of the bipod in the X-axis and Z-axis directions are established. Based on the index requirements, the range of one design parameter (the hinge groove depth, R) for the kinematic mount is determined. Parametric design is performed, with the entire structure being the design object and the first three eigenfrequencies as the design objective; the final design parameter for the kinematic mount is 1.9 mm. The first three eigenfrequencies of the final structure are 36.49 Hz, 38.65 Hz, and 72.41 Hz, which meet the frequency requirements. The Z-direction deformation and the bipod compliances in the X-axis and Z-axis directions are analyzed through simulations and experiments. The results show that (1) the Z-direction deformation of the bipod meets the displacement requirement; (2) the deviations between the finite element results and the compliance equation Cx results, and between the finite element results and the compliance equation Cz results are 8.8% and 3.92%, respectively; (3) the deviation between the experimental results and the compliance equation Cz results is 10.3%. It is concluded that the bipod compliance equations in the X-axis and Z-axis directions are valid, and that the kinematic mount thus meets the design requirements.

  1. Object Tracking in Frame-Skipping Video Acquired Using Wireless Consumer Cameras

    Directory of Open Access Journals (Sweden)

    Anlong Ming

    2012-10-01

    Full Text Available Object tracking is an important and fundamental task in computer vision and its high-level applications, e.g., intelligent surveillance, motion-based recognition, video indexing, traffic monitoring and vehicle navigation. However, the recent widespread use of wireless consumer cameras often produces low-quality videos with frame-skipping, which makes object tracking difficult. Previous tracking methods generally depend heavily on object appearance or motion continuity and cannot be directly applied to frame-skipping videos. In this paper, we propose an improved particle filter for object tracking that overcomes the frame-skipping difficulties. The novelty of our particle filter lies in using the detection result of erratic motion to ameliorate the transition model and obtain a better trial distribution. Experimental results show that the proposed approach improves the tracking accuracy in comparison with state-of-the-art methods, even when both the object and the camera are in motion.
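
    The following is a hedged sketch of a bootstrap particle filter whose transition model is widened in proportion to the number of skipped frames, which is the general idea the abstract points at; the motion and likelihood models below are invented placeholders, not the authors' trial distribution:

```python
# Minimal bootstrap particle filter for 2D position tracking; process noise
# is inflated when frames are skipped so the cloud still covers the motion.
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, velocity, skipped_frames=0):
    # Widen the transition model with the number of skipped frames.
    noise_scale = 2.0 * (1 + skipped_frames)
    return particles + velocity + rng.normal(0, noise_scale, particles.shape)

def update(particles, measurement, sigma=5.0):
    # Weight particles by a Gaussian likelihood of the observed position.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / sigma ** 2)
    return w / w.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = rng.normal([50, 50], 10, size=(500, 2))
for t, (meas, skipped) in enumerate([([55, 52], 0), ([80, 70], 3)]):
    particles = predict(particles, velocity=np.array([5, 4]),
                        skipped_frames=skipped)
    weights = update(particles, np.array(meas))
    particles = resample(particles, weights)
    print(f"t={t}: estimate = {particles.mean(axis=0).round(1)}")
```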

  2. Monitoring the body temperature of cows and calves using video recordings from an infrared thermography camera.

    Science.gov (United States)

    Hoffmann, Gundula; Schmidt, Mariana; Ammon, Christian; Rose-Meierhöfer, Sandra; Burfeind, Onno; Heuwieser, Wolfgang; Berg, Werner

    2013-06-01

    The aim of this study was to assess the variability of temperatures measured by a video-based infrared camera (IRC) in comparison to rectal and vaginal temperatures. The body surface temperatures of cows and calves were measured contactlessly at different body regions using videos from the IRC. Altogether, 22 cows and 9 calves were examined. The differences among the IRC temperatures measured at the body regions, i.e. eye (mean: 37.0 °C), back of the ear (35.6 °C), shoulder (34.9 °C) and vulva (37.2 °C), were statistically significant. Analyzing infrared thermography videos has the advantage that more than one picture per animal can be evaluated in a short period of time, and the method shows potential as a monitoring system for body temperatures in cattle.

  3. Bird-borne video-cameras show that seabird movement patterns relate to previously unrevealed proximate environment, not prey

    National Research Council Canada - National Science Library

    Tremblay, Yann; Thiebault, Andréa; Mullers, Ralf; Pistorius, Pierre

    2014-01-01

    ... environmental contexts. Here, using miniaturized video cameras and GPS tracking recorders simultaneously, we show for the first time that information on the immediate visual surroundings of a foraging seabird, the Cape...

  4. Ultrahigh-definition color video camera system with 4K-scanning lines

    Science.gov (United States)

    Mitani, Kohji; Sugawara, Masayuki; Shimamoto, Hiroshi; Yamashita, Takayuki; Okano, Fumio

    2003-05-01

    An experimental ultrahigh-definition color video camera system with 7680(H) × 4320(V) pixels has been developed using four 8-million-pixel CCDs. The 8-million-pixel CCD, with a progressive scanning rate of 60 frames per second, has 4046(H) × 2048(V) effective imaging pixels, each of which is 8.4 μm². We applied the four-imager pickup method to increase the camera's resolution. This involves attaching four CCDs to a special color-separation prism: two CCDs are used for the green image, and the other two are used for red and blue. The spatial sampling pattern of these CCDs relative to the optical image is equivalent to that of 32 million pixels in a Bayer-pattern color filter. The prototype camera attains a limiting resolution of more than 2700 TV lines both horizontally and vertically, which is higher than that of a single 8-million-pixel CCD. The sensitivity of the camera is 2000 lux at F2.8, with a dark-noise level of approximately 50 dB in the HDTV format. Its other specifications are a dynamic range of 200%, a power consumption of about 600 W and a weight, with lens, of 76 kg.

  5. Utilization of a video camera in the study of the goshawk (Accipiter gentilis) diet

    Directory of Open Access Journals (Sweden)

    Martin Tomešek

    2011-01-01

    Full Text Available In 2009, research was carried out into the food spectrum of the goshawk (Accipiter gentilis) by means of automatic digital video cameras with a recording device in the Chřiby Upland. The monitoring took place at two localities in the vicinity of the village of Buchlovice, at the southeastern edge of the Chřiby Upland, in the period from the hatching of the chicks to their fledging from the nest. The unambiguous advantage of using camera systems in the study of the food spectrum is the possibility of exact determination of the delivered prey in the majority of cases. The technology used was prepared to be as economical and effective as possible under the given conditions. The use of automatic digital video cameras with a recording device yielded a number of valuable data that clarify the food spectrum of this species. The main output of the whole project is the determination of the food spectrum of the goshawk (Accipiter gentilis) at the two localities, which showed the following composition: 89% birds, 9.5% mammals and 1.5% other animals or unidentifiable components of food. Birds of the genus Turdus were the most frequent prey in both monitored nests. Among mammals, Sciurus vulgaris was the most frequent.

  6. High-sensitive thermal video camera with self-scanned 128 InSb linear array

    Science.gov (United States)

    Fujisada, Hiroyuki

    1991-12-01

    A compact thermal video camera with very high sensitivity has been developed using a self-scanned 128-element InSb linear photodiode array. Two-dimensional images are formed by the self-scanning function of the linear focal plane assembly in the horizontal direction and by a vibrating mirror in the vertical direction. Images of 128 × 128 pixels are obtained every 1/30 second. A small InSb detector array with a total length of 7.68 mm is utilized in order to keep the system compact. In addition, special consideration is given to the configuration of the optics, vibrating mirror, and focal plane assembly. Real-time signal processing by a microprocessor compensates for the inhomogeneous sensitivities and irradiances of the individual detectors. The standard NTSC TV format is employed for the output video signal. The thermal video camera has very high radiometric sensitivity: the minimum resolvable temperature difference (MRTD) is estimated at about 0.02 K for a 300 K target. Stable operation is possible without a blackbody reference because stray radiation is very small.

  7. Research on the use and problems of digital video camera from the perspective of schools primary teacher of Granada province

    Directory of Open Access Journals (Sweden)

    Pablo José García Sempere

    2012-12-01

    Full Text Available The adoption of ICT in society, and specifically in schools, is changing relationships and traditional means of teaching. These new situations require teachers to assume new roles and responsibilities, thereby creating new demands for training. The teaching body concurs that teachers require both initial and ongoing training in the use of digital video cameras and video editing. This article presents the main results of research that focused on the applications of the digital video camera by teachers of primary education schools in the province of Granada, Spain.

  8. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    Science.gov (United States)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to that position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind the flying shuttlecock; they are a kind of background noise that makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
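
    A minimal sketch of the stereo triangulation step such a two-camera system relies on: recover a 3D shuttlecock position from its pixel coordinates in two calibrated views by linear (DLT-style) least squares. The projection matrices and image points below are synthetic placeholders, not the robot's calibration:

```python
# Linear triangulation of one 3D point from two calibrated camera views.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from its pixel positions in two calibrated views."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy projection matrices: two cameras 1 m apart looking down the z-axis.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

X_true = np.array([0.3, 0.2, 5.0])
uv1 = P1 @ np.append(X_true, 1); uv1 = uv1[:2] / uv1[2]
uv2 = P2 @ np.append(X_true, 1); uv2 = uv2[:2] / uv2[2]
print(triangulate(P1, P2, uv1, uv2))  # ~ [0.3, 0.2, 5.0]
```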

  9. VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication

    Science.gov (United States)

    Denina, Giovanni; Bhanu, Bir; Nguyen, Hoang Thanh; Ding, Chong; Kamal, Ahmed; Ravishankar, Chinya; Roy-Chowdhury, Amit; Ivers, Allen; Varda, Brenda

    Human-activity recognition is one of the most challenging problems in computer vision. Researchers from around the world have tried to solve this problem and have come a long way in recognizing simple motions and atomic activities. As the computer vision community heads toward fully recognizing human activities, a challenging and labeled dataset is needed. To respond to that need, we collected a dataset of realistic scenarios in a multi-camera network environment (VideoWeb) involving multiple persons performing dozens of different repetitive and non-repetitive activities. This chapter describes the details of the dataset. We believe that this VideoWeb Activities dataset is unique and it is one of the most challenging datasets available today. The dataset is publicly available online at http://vwdata.ee.ucr.edu/ along with the data annotation.

  10. Video-based eyetracking methods and algorithms in head-mounted displays

    Science.gov (United States)

    Hua, Hong; Krishnaswamy, Prasanna; Rolland, Jannick P.

    2006-05-01

    Head pose is utilized to approximate a user’s line-of-sight for real-time image rendering and interaction in most of the 3D visualization applications using head-mounted displays (HMD). The eye often reaches an object of interest before the completion of most head movements. It is highly desirable to integrate eye-tracking capability into HMDs in various applications. While the added complexity of an eyetracked-HMD (ET-HMD) imposes challenges on designing a compact, portable, and robust system, the integration offers opportunities to improve eye tracking accuracy and robustness. In this paper, based on the modeling of an eye imaging and tracking system, we examine the challenges and identify parametric requirements for video-based pupil-glint tracking methods in an ET-HMD design, and predict how these parameters may affect the tracking accuracy, resolution, and robustness. We further present novel methods and associated algorithms that effectively improve eye-tracking accuracy and extend the tracking range.

  11. BOREAS RSS-03 Imagery and Snapshots from a Helicopter-Mounted Video Camera

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set provides images of boreal forests in central Canada collected over numerous tower and auxiliary sites during the BOREAS Intensive Field Campaigns...

  12. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    Energy Technology Data Exchange (ETDEWEB)

    Pardini, A.F.

    1998-01-27

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities; these may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This sensor will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  13. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort in stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influencing factors including spatial structure, motion scale and comfort zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on the specific objects when cameras and background are static; relative motion should be considered for other camera conditions, which determines different factor coefficients and weights. Compared with traditional visual fatigue prediction models, a novel visual fatigue prediction model is presented: the visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor reflects the characteristics of the scene, and a total visual fatigue score can be computed with the proposed algorithm. Compared with conventional algorithms, which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
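
    A minimal sketch of the regression step named above: fit subjective fatigue scores against per-shot factors by least squares. The factor columns and score values are invented placeholders; the paper's actual factors and weights are not reproduced here:

```python
# Least-squares fit of fatigue scores from per-shot factors (with intercept).
import numpy as np

# rows: shots; columns: [spatial structure, motion scale, comfort-zone score]
factors = np.array([
    [0.6, 0.2, 0.90],
    [0.8, 0.7, 0.40],
    [0.3, 0.1, 0.95],
    [0.9, 0.8, 0.30],
])
subjective = np.array([2.1, 3.8, 1.5, 4.2])  # mean-opinion fatigue scores

A = np.hstack([factors, np.ones((len(factors), 1))])
coef, *_ = np.linalg.lstsq(A, subjective, rcond=None)
print("weights:", coef[:-1], "intercept:", coef[-1])
print("predicted fatigue:", A @ coef)
```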

  14. vid116_0501n -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  15. vid116_0501c -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  16. vid116_0501s -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  17. vid116_0501d -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  18. Using drone-mounted cameras for on-site body documentation: 3D mapping and active survey.

    Science.gov (United States)

    Urbanová, Petra; Jurda, Mikoláš; Vojtíšek, Tomáš; Krajsa, Jan

    2017-12-01

    Recent advances in unmanned aerial technology have substantially lowered the cost associated with aerial imagery. As a result, forensic practitioners today have easy, low-cost access to aerial photographs of remote locations. The present paper aims to explore the boundaries within which low-end drone technology can operate as professional crime scene equipment, and to test the prospects of aerial 3D modeling in the forensic context. The study was based on recent forensic cases of falls from height admitted for postmortem examination. Three mock outdoor forensic scenes featuring a dummy, skeletal remains and artificial blood were constructed at an abandoned quarry and subsequently documented using a commercial DJI Phantom 2 drone equipped with a GoPro HERO 4 digital camera. In two of the experiments, the purpose was to conduct aerial and ground-view photography and to process the acquired images with a photogrammetry protocol (using Agisoft PhotoScan® 1.2.6) in order to generate 3D textured models. The third experiment tested the use of drone-based video recordings in mapping scattered body parts. The results show that drone-based aerial photography is capable of producing high-quality images, which are appropriate for building accurate large-scale 3D models of a forensic scene. If, however, high-resolution top-down three-dimensional scene documentation featuring details on a corpse or other physical evidence is required, we recommend building a multi-resolution model by processing aerial and ground-view imagery separately. The video survey showed that using an overview recording to seek out scattered body parts was efficient. In contrast, less easy-to-spot evidence, such as bloodstains, was detected only after having been marked properly with crime scene equipment. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Quantitative underwater 3D motion analysis using submerged video cameras: accuracy analysis and trajectory reconstruction.

    Science.gov (United States)

    Silvatti, Amanda P; Cerveri, Pietro; Telles, Thiago; Dias, Fábio A S; Baroni, Guido; Barros, Ricardo M L

    2013-01-01

    In this study we investigate the applicability of underwater 3D motion capture based on submerged video cameras in terms of 3D accuracy analysis and trajectory reconstruction. Static points with the classical direct linear transform (DLT) solution, a moving wand with bundle adjustment and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of the final application, we reconstructed the hand motion trajectories in different swimming styles and qualitatively compared them with Maglischo's model. Four highly trained male swimmers performed butterfly, breaststroke and freestyle tasks. The middle fingertip trajectories of both hands in the underwater phase were considered. The accuracy (mean absolute error) of the two calibration approaches (wand: 0.96 mm; 2D plate: 0.73 mm) was comparable to out-of-water results and highly superior to the classical DLT results (9.74 mm). Among all the swimmers, the hand trajectories of the expert swimmer were almost symmetric and in good agreement with Maglischo's model. The kinematic results highlight symmetry or asymmetry between the two hand sides, intra- and inter-subject variability in terms of the motion patterns, and agreement or disagreement with the model. The two outcomes, calibration results and trajectory reconstruction, both move towards quantitative 3D underwater motion analysis.
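
    For reference, the classical DLT calibration that the study uses as a baseline can be sketched in a few lines: estimate the 3 × 4 projection matrix from known 3D control points and their image coordinates, here verified on synthetic data rather than the study's underwater calibration frame:

```python
# Classic 11-parameter DLT calibration from >= 6 known control points.
import numpy as np

def dlt_calibrate(X, uv):
    """Estimate the 3x4 projection matrix from 3D-2D correspondences."""
    rows = []
    for (x, y, z), (u, v) in zip(X, uv):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u*x, -u*y, -u*z, -u])
        rows.append([0, 0, 0, 0, x, y, z, 1, -v*x, -v*y, -v*z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 4)

# Synthetic check: project known points with a ground-truth P, recover it.
P_true = np.array([[700, 0, 320, 10], [0, 700, 240, -5], [0, 0, 1, 2]], float)
X = np.random.default_rng(1).uniform(-1, 1, (8, 3)) + [0, 0, 4]
uvh = np.hstack([X, np.ones((8, 1))]) @ P_true.T
uv = uvh[:, :2] / uvh[:, 2:]
P = dlt_calibrate(X, uv)
# P is recovered up to scale; normalize both before comparing.
print(np.allclose(P / P[2, 3], P_true / P_true[2, 3], atol=1e-6))
```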

  20. Design and Optimization of the VideoWeb Wireless Camera Network

    Directory of Open Access Journals (Sweden)

    Nguyen HoangThanh

    2010-01-01

    Full Text Available Sensor networks have been a very active area of research in recent years. However, most of the sensors used in the development of these networks have been local, non-imaging sensors such as acoustic, seismic, vibration, temperature and humidity sensors. The emerging development of video sensor networks poses its own set of unique challenges, including high-bandwidth and low-latency requirements for real-time processing and control. This paper presents a systematic approach, detailing the design, implementation, and evaluation of a large-scale wireless camera network suitable for a variety of practical real-time applications. We take into consideration issues related to hardware, software, control, architecture, network connectivity, performance evaluation, and data-processing strategies for the network. We also perform multiobjective optimization on settings such as video resolution and compression quality to provide insight into the performance trade-offs when configuring such a network, and present lessons learned in the building and daily usage of the network.

  1. Design of video surveillance and tracking system based on attitude and heading reference system and PTZ camera

    Science.gov (United States)

    Yang, Jian; Xie, Xiaofang; Wang, Yan

    2017-04-01

    Based on an AHRS (Attitude and Heading Reference System) and a PTZ (Pan/Tilt/Zoom) camera, we designed a video monitoring and tracking system. The overall structure of the system and the software design are given. Key technologies such as serial-port communication and head-attitude tracking are introduced, and the code for the key parts is given.
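
    The record does not specify the serial protocol, but a typical realization of the serial-port link can be sketched as below: an AHRS-derived pan/tilt error is converted into a Pelco-D command frame (a widely documented PTZ convention; verify the byte layout against the actual hardware). The port name, baud rate and helper names are assumptions:

```python
# Hedged sketch: drive a PTZ camera toward an AHRS-reported target bearing.
import serial  # pyserial, assumed available

def pelco_d_move(addr: int, pan_err_deg: float, tilt_err_deg: float) -> bytes:
    """Build a Pelco-D frame: 0xFF, addr, cmd1, cmd2, pan speed, tilt speed,
    checksum (sum of bytes 2-6 mod 256). Verify against your PTZ unit."""
    cmd2 = 0
    if pan_err_deg > 1:
        cmd2 |= 0x02    # pan right
    elif pan_err_deg < -1:
        cmd2 |= 0x04    # pan left
    if tilt_err_deg > 1:
        cmd2 |= 0x08    # tilt up
    elif tilt_err_deg < -1:
        cmd2 |= 0x10    # tilt down
    # Proportional speed, clamped to the protocol's 0x00..0x3F range.
    pan_speed = min(int(abs(pan_err_deg)), 0x3F)
    tilt_speed = min(int(abs(tilt_err_deg)), 0x3F)
    body = [addr, 0x00, cmd2, pan_speed, tilt_speed]
    return bytes([0xFF] + body + [sum(body) % 256])

port = serial.Serial("/dev/ttyUSB0", 9600)  # hypothetical port settings
port.write(pelco_d_move(addr=1, pan_err_deg=12.5, tilt_err_deg=-3.0))
```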

  2. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.

    2017-10-01

    Video analytics is essential for managing the large quantities of raw data produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene, such as lighting conditions or measures of scene complexity (e.g. number of people). The second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. The third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. To support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.

  3. Representing videos in tangible products

    Science.gov (United States)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones and, increasingly, so-called action cameras mounted on sports devices. The implementation of videos by generating QR codes and relevant pictures out of the video stream via a software implementation was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from each video in order to represent it, the positions in the book, and different design strategies compared to regular books.

  4. Estimation of Protein Content in Rice Crop and Nitrogen Content in Rice Leaves Through Regression Analysis with NDVI Derived from Camera Mounted Radio-Control Helicopter

    OpenAIRE

    Kohei Arai; Masanori Sakashita; Osamu Shigetomi; Yuko Miura

    2014-01-01

    Estimation of the protein content in rice crops and the nitrogen content in rice leaves through regression analysis with the Normalized Difference Vegetation Index (NDVI), derived from a camera mounted on a radio-controlled helicopter, is proposed. Through experiments at rice paddy fields situated at the Saga Prefectural Research Institute of Agriculture (SPRIA) in Saga city, Japan, it is found that the protein content in rice crops is highly correlated with NDVI, which is acquired with visible and Near Infrared: NIR c...
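
    A minimal sketch of the NDVI computation and regression described here, assuming co-registered red and NIR bands from the helicopter camera; the per-plot NDVI and protein values below are invented example numbers, not the study's data:

```python
# NDVI from red/NIR bands, then a linear fit against measured protein content.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids divide-by-zero

# Mean NDVI per paddy plot and lab-measured protein content [%] -- placeholders.
plot_ndvi = np.array([0.62, 0.71, 0.55, 0.68, 0.74])
protein = np.array([6.1, 7.0, 5.4, 6.7, 7.3])

slope, intercept = np.polyfit(plot_ndvi, protein, 1)
r = np.corrcoef(plot_ndvi, protein)[0, 1]
print(f"protein ~ {slope:.2f} * NDVI + {intercept:.2f}, r = {r:.3f}")
```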

  5. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    Science.gov (United States)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone of different user-friendly applications such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type in order to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between object-level and scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes that are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players), which can track up to four players simultaneously. The complete system contributes to summarization through various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it on a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and event detection about 90%.

  6. The Use of Surveillance Cameras for the Rapid Mapping of Lava Flows: An Application to Mount Etna Volcano

    Directory of Open Access Journals (Sweden)

    Mauro Coltelli

    2017-02-01

    Full Text Available In order to improve the observation capability in one of the most active volcanic areas in the world, Mt. Etna, we developed a processing method to use the surveillance cameras for quasi-real-time mapping of syn-eruptive processes. Following an evaluation of the current performance of the Etna permanent ground NEtwork of Thermal and Visible Sensors (Etna_NETVIS), its possible implementation and optimization was investigated to determine the locations of additional observation sites to be rapidly set up during emergencies. A tool was then devised to process time series of ground-acquired images and extract coherent multi-temporal datasets of georeferenced maps. The processed datasets can be used to extract 2D features such as evolution maps of active lava flows. The tool was validated on ad-hoc test fields and then adopted to map the evolution of two recent lava flows. The achievable accuracy (about three times the original pixel size) and the short processing time make the tool suitable for rapidly assessing lava flow evolution, especially in the case of recurrent eruptions such as those of the 2011–2015 Etna activity. The tool can be used both in standard monitoring activities and during emergency phases (possibly augmenting the present network with additional mobile stations) when it is mandatory to carry out quasi-real-time mapping to support civil protection actions. The developed tool could be integrated in the control room of the Osservatorio Etneo, thus enabling Etna_NETVIS for mapping purposes and not only for video surveillance.

  7. USING A DIGITAL VIDEO CAMERA AS THE SMART SENSOR OF THE SYSTEM FOR AUTOMATIC PROCESS CONTROL OF GRANULAR FODDER MOLDING

    Directory of Open Access Journals (Sweden)

    M. M. Blagoveshchenskaya

    2014-01-01

    Full Text Available Summary. The most important operation in granular mixed fodder production is the molding process: the properties of the granular fodder are defined during this process, and they determine both the course of production and the final product quality. The possibility of using a digital video camera as an intelligent sensor for the production control system is analyzed in this article. A parametric model of the process of molding bundles from the granular fodder mass is presented, and the dynamic characteristics of the molding process were determined. A mathematical model of the motion of a bundle of granular fodder mass after the matrix holes was developed based on the equations of fluid dynamics; in addition to viscosity, the creep behavior characteristic of the feed mass was considered. A mathematical model of the automatic control system (ACS) that uses a reference video frame as the set point was built in the MATLAB environment. As the controlled parameter of the bundle molding process, it is proposed to use the value of the specific area determined by mathematical processing of the video frame. Algorithms were developed to determine changes in the structural and mechanical properties of the feed mass from video frame images. Digital video of various operating modes of the molding machine was recorded, and after mathematical processing of the video the transfer functions were determined, using the change of the specific area as the adjustable parameter. Structural and functional diagrams of the system regulating the fodder bundle molding process with digital video cameras were built and analyzed. The resulting mathematical model of the ACS, which allows the investigation of transient processes in a control system that uses a digital video camera as the smart sensor, was developed in Simulink.
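
    As an illustration of the "specific area" parameter, a frame could be thresholded and the fraction of bundle pixels measured, for example with OpenCV as sketched below; the Otsu thresholding and the file name are assumptions, not the authors' processing chain:

```python
# Sketch: fraction of frame pixels belonging to fodder bundles ("specific area").
import cv2
import numpy as np

def specific_area(frame: np.ndarray) -> float:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Otsu's threshold separates bundles from background without manual tuning.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return float(np.count_nonzero(mask)) / mask.size

cap = cv2.VideoCapture("molding_machine.avi")  # hypothetical recording
ok, frame = cap.read()
if ok:
    print(f"specific area of this frame: {specific_area(frame):.3f}")
cap.release()
```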

  8. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available An automatic recognition framework for human facial expressions from monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a deformable template similar to a facial muscle distribution. After regularization, the time sequences of the trait changes in space-time over a complete expression are arranged row by row in a matrix. Next, the matrix dimensionality is reduced by neighborhood-preserving embedding, a manifold learning method. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates a hidden conditional random field (HCRF) and a support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional facial traits. Moreover, the proposed method is more robust than the typical Kotsia method because it retains more of the structural characteristics of the data to be classified in space-time.

  9. Video content analysis on body-worn cameras for retrospective investigation

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Haar, F.B. ter; Eendebak, P.T.; Hollander, R.J.M. den; Burghouts, G.J.; Wijn, R.; Broek, S.P. van den; Rest, J.H.C. van

    2015-01-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras, we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications.

  10. HDR {sup 192}Ir source speed measurements using a high speed video camera

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca, Gabriel P. [Instituto de Pesquisas Energéticas e Nucleares—IPEN-CNEN/SP, São Paulo 05508-000, Brazil and Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Viana, Rodrigo S. S.; Yoriyaz, Hélio [Instituto de Pesquisas Energéticas e Nucleares—IPEN-CNEN/SP, São Paulo 05508-000 (Brazil); Podesta, Mark [Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Rubo, Rodrigo A.; Sales, Camila P. de [Hospital das Clínicas da Universidade de São Paulo—HC/FMUSP, São Paulo 05508-000 (Brazil); Reniers, Brigitte [Department of Radiation Oncology - MAASTRO, GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Research Group NuTeC, CMK, Hasselt University, Agoralaan Gebouw H, Diepenbeek B-3590 (Belgium); Verhaegen, Frank, E-mail: frank.verhaegen@maastro.nl [Department of Radiation Oncology - MAASTRO, GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Medical Physics Unit, Department of Oncology, McGill University, Montréal, Québec H3G 1A4 (Canada)

    2015-01-15

    Purpose: The dose delivered with a HDR {sup 192}Ir afterloader can be separated into a dwell component and a transit component resulting from the source movement. The transit component depends directly on the source speed profile, and it is the goal of this study to measure accurate source speed profiles. Methods: A high-speed video camera was used to record the movement of a {sup 192}Ir source (Nucletron, an Elekta company, Stockholm, Sweden) for interdwell distances of 0.25–5 cm with dwell times of 0.1, 1, and 2 s. Transit dose distributions were calculated using a Monte Carlo code simulating the source movement. Results: The source stops at each dwell position, oscillating around the desired position for up to (0.026 ± 0.005) s. The source speed profile shows variations between 0 and 81 cm/s, with an average speed of ∼33 cm/s for most of the interdwell distances. The source stops for up to (0.005 ± 0.001) s at nonprogrammed positions in between two programmed dwell positions. The dwell time correction applied by the manufacturer compensates for the transit dose between the dwell positions, leading to a maximum overdose of 41 mGy for the considered cases, assuming an air-kerma strength of 48 000 U. The transit dose component is not uniformly distributed, leading to over- and underdoses, which are within 1.4% for commonly prescribed doses (3–10 Gy). Conclusions: The source maintains its speed even for short interdwell distances. Dose variations due to the transit dose component are much lower than prescribed treatment doses for brachytherapy, although the transit dose component should be evaluated individually for clinical cases.
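
    The speed-profile recovery reduces to differentiating per-frame source positions with respect to frame timestamps; a sketch with synthetic positions (chosen to peak near the reported 81 cm/s) follows, where the frame rate and position values are assumptions, not the study's measurements:

```python
# Speed profile from per-frame source positions tracked in high-speed video.
import numpy as np

fps = 1000.0                       # assumed camera frame rate
t = np.arange(10) / fps            # frame timestamps [s]
pos_mm = np.array([0, 0.1, 0.4, 1.0, 1.8, 2.6, 3.2, 3.5, 3.6, 3.6])

speed_cm_s = np.gradient(pos_mm, t) / 10.0  # mm/s -> cm/s
print("peak speed: %.1f cm/s" % speed_cm_s.max())
print("mean moving speed: %.1f cm/s" % speed_cm_s[speed_cm_s > 1].mean())
```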

  12. Point Counts Underestimate the Importance of Arctic Foxes as Avian Nest Predators: Evidence from Remote Video Cameras in Arctic Alaskan Oil Fields

    National Research Council Canada - National Science Library

    Joseph R. Liebezeit; Steve Zack

    2008-01-01

    We used video cameras to identify nest predators at active shorebird and passerine nests and conducted point count surveys separately to determine species richness and detection frequency of potential...

  13. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    real world); proprioceptive and exteroceptive sensors allowing the recreating of the 3D geometric database of an environment (virtual world). The virtual world is projected onto a video display terminal (VDT). Computer-generated and video ...

  14. Immersive Eating: Evaluating the Use of Head-Mounted Displays for Mixed Reality Meal sessions

    DEFF Research Database (Denmark)

    Korsgaard, Dannie Michael; Nilsson, Niels Chr.; Bjørner, Thomas

    2017-01-01

    This paper documents a pilot study evaluating a simple approach allowing users to eat real food while exploring a virtual environment (VE) through a head-mounted display (HMD). Two cameras mounted on the HMD allowed for video-based stereoscopic see-through when the user’s head orientation pointed...

  15. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    DEFF Research Database (Denmark)

    Allin, Thomas Højgaard; Neubert, Torsten; Laursen, Steen

    2003-01-01

    at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over...

  16. Fast auto-acquisition tomography tilt series by using HD video camera in ultra-high voltage electron microscope.

    Science.gov (United States)

    Nishi, Ryuji; Cao, Meng; Kanaji, Atsuko; Nishida, Tomoki; Yoshida, Kiyokazu; Isakozawa, Shigeto

    2014-11-01

    The ultra-high voltage electron microscope (UHVEM) H-3000, with the world's highest acceleration voltage of 3 MV, can observe remarkable three-dimensional microstructures of micron-thick samples [1]. Acquiring a tilt series for electron tomography is laborious work, and thus an automatic technique is highly desired. We proposed the Auto-Focus system using image Sharpness (AFS) [2,3] for UHVEM tomography tilt-series acquisition. In this method, five images with different defocus values are first acquired and their image sharpness is calculated. The sharpness values are then fitted to a quasi-Gaussian function to determine the best focus value [3]. Defocused images acquired by the slow-scan CCD (SS-CCD) camera (Hitachi F486BK) are of high quality, but one minute is needed to acquire five defocused images. In this study, we introduce a high-definition video camera (HD video camera; Hamamatsu Photonics K.K. C9721S) for fast image acquisition [4]. It is an analog camera, but the camera image is captured by a PC with an effective resolution of 1280 × 1023 pixels. This resolution is lower than the 4096 × 4096 pixels of the SS-CCD camera, but the HD video camera captures one image in only 1/30 second. In exchange for the faster acquisition, the S/N of the images is low; to improve it, 22 captured frames are integrated so that each image's sharpness can be fitted with sufficiently low error. As a countermeasure against the low resolution, we selected a large defocus step, typically five times the manual defocus step, to discriminate between the different defocused images. By using the HD video camera for the autofocus process, the time consumed by each autofocus procedure was reduced to about six seconds. Correcting an image position took one second, so the total correction time was seven seconds, an order of magnitude shorter than with the SS-CCD camera. When we used the SS-CCD camera for final image capture, it took 30 seconds to record one tilt image. We can obtain a tilt
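
    A hedged sketch of the sharpness-based focus search described above: score each defocused image (variance of the Laplacian is one common sharpness metric), fit a Gaussian to the five (defocus, sharpness) pairs, and take the peak as the best focus. The OpenCV/SciPy usage and the synthetic images are assumptions, not the authors' implementation:

```python
# Autofocus sketch: Gaussian fit of sharpness over five defocus values.
import numpy as np
import cv2
from scipy.optimize import curve_fit

def sharpness(img: np.ndarray) -> float:
    return cv2.Laplacian(img, cv2.CV_64F).var()

def gaussian(x, a, mu, sigma, c):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + c

def best_focus(defocus_steps, images):
    s = np.array([sharpness(im) for im in images])
    p0 = [s.max() - s.min(), defocus_steps[np.argmax(s)],
          np.ptp(defocus_steps) / 2, s.min()]
    params, _ = curve_fit(gaussian, defocus_steps, s, p0=p0)
    return params[1]  # mu: defocus value with the highest sharpness

# Five images at defocus values -2..2 (arbitrary units) would come from the
# camera; here they are faked by blurring a noise image more off-focus.
rng = np.random.default_rng(0)
base = rng.integers(0, 255, (256, 256)).astype(np.uint8)
steps = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
imgs = [cv2.GaussianBlur(base, (0, 0), sigmaX=0.5 + abs(d)) for d in steps]
print("estimated best focus at defocus =", round(best_focus(steps, imgs), 2))
```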

  17. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    Energy Technology Data Exchange (ETDEWEB)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L [UT MD Anderson Cancer Center, Houston, TX (United States); UT Graduate School of Biomedical Sciences, Houston, TX (United States); Yang, J; Beadle, B [UT MD Anderson Cancer Center, Houston, TX (United States)

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
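
    A minimal sketch of image-based frame-to-frame camera tracking with standard OpenCV building blocks: track feature points between consecutive frames, estimate the essential matrix, and recover the relative rotation and translation up to scale. The intrinsics and video file are placeholders, and this is not the authors' algorithm verbatim:

```python
# Frame-to-frame camera motion from tracked points via the essential matrix.
import cv2
import numpy as np

K = np.array([[600., 0, 320], [0, 600., 240], [0, 0, 1]])  # assumed intrinsics

cap = cv2.VideoCapture("endoscopy.mp4")  # hypothetical recording
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01,
                             minDistance=7)

ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
good0, good1 = p0[status.ravel() == 1], p1[status.ravel() == 1]

E, mask = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=mask)
print("rotation:\n", R.round(3))
print("translation direction (up to scale):", t.ravel().round(3))
cap.release()
```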

  18. A comparison of head-mounted and hand-held displays for 360° videos with focus on attitude and behavior change

    DEFF Research Database (Denmark)

    Fonseca, Diana; Kraus, Martin

    2016-01-01

    The present study is designed to test how immersion, presence, and narrative content (with a focus on emotional immersion) can affect one's pro-environmental attitude and behavior, with specific interest in 360° videos and meat consumption as a non-pro-environmental behavior. This research describes a between-group design experiment that compares two systems with different levels of immersion and two types of narratives, one with and one without emotional content. In the immersive video (IV) condition (high immersion), 21 participants used a Head-Mounted Display (HMD) to watch an emotional 360° video about meat consumption and its effects on the environment; another 21 participants experienced the tablet condition (low immersion), where they viewed the same video on a 10.1-inch tablet; 22 participants in the control condition viewed a non-emotional video about submarines with an HMD...

  19. CARVE: In-flight Videos from the CARVE Aircraft, Alaska, 2012-2015

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset contains videos captured by a camera mounted on the CARVE aircraft during airborne campaigns over the Alaskan and Canadian Arctic for the Carbon in...

  20. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    Computer-generated and video images are superimposed. The man-machine interface functions deal mainly with on-line building of graphic aids to improve perception, updating the geometric database of the robotic site, and video control of the robot. The superimposition of the real and virtual worlds is carried out through ...

  1. “First-person view” of pathogen transmission and hand hygiene – use of a new head-mounted video capture and coding tool

    Directory of Open Access Journals (Sweden)

    Lauren Clack

    2017-10-01

    Full Text Available Abstract Background Healthcare workers' hands are the foremost means of pathogen transmission in healthcare, but detailed hand trajectories have been insufficiently researched so far. We developed and applied a new method to systematically document hand-to-surface exposures (HSE) to delineate true hand transmission pathways in real-life healthcare settings. Methods A head-mounted camera and commercial coding software were used to capture ten active care episodes by eight nurses and two physicians and code HSE type and duration using a hierarchical coding scheme. We identified HSE sequences of particular relevance to infectious risks for patients based on the WHO 'Five Moments for Hand Hygiene'. The study took place in a trauma intensive care unit in a 900-bed university hospital in Switzerland. Results Overall, the ten videos totaled 296.5 min and featured eight nurses and two physicians. A total of 4222 HSE were identified (1 HSE every 4.2 s), which concerned bare (79%) and gloved (21%) hands. The HSE inside the patient zone (n = 1775; 42%) included mobile objects (33%), immobile surfaces (5%), and patient intact skin (4%), while HSE outside the patient zone (n = 1953; 46%) included HCWs' own bodies (10%), mobile objects (28%), and immobile surfaces (8%). A further 494 (12%) events involved patient critical sites. Sequential analysis revealed 291 HSE transitions from outside to inside the patient zone, i.e. "colonization events", and 217 from any surface to critical sites, i.e. "infection events". Hand hygiene occurred 97 times, 14 times (5% adherence) at colonization events and three times (1% adherence) at infection events. On average, hand rubbing lasted 13 ± 9 s. Conclusions The abundance of HSE underscores the central role of hands in the spread of potential pathogens, while hand hygiene occurred rarely at potential colonization and infection events. Our approach produced a valid video and coding instrument for in-depth analysis of hand trajectories during active patient care.

  2. "First-person view" of pathogen transmission and hand hygiene - use of a new head-mounted video capture and coding tool.

    Science.gov (United States)

    Clack, Lauren; Scotoni, Manuela; Wolfensberger, Aline; Sax, Hugo

    2017-01-01

    Healthcare workers' hands are the foremost means of pathogen transmission in healthcare, but detailed hand trajectories have been insufficiently researched so far. We developed and applied a new method to systematically document hand-to-surface exposures (HSE) to delineate true hand transmission pathways in real-life healthcare settings. A head-mounted camera and commercial coding software were used to capture ten active care episodes by eight nurses and two physicians and code HSE type and duration using a hierarchical coding scheme. We identified HSE sequences of particular relevance to infectious risks for patients based on the WHO 'Five Moments for Hand Hygiene'. The study took place in a trauma intensive care unit in a 900-bed university hospital in Switzerland. Overall, the ten videos totaled 296.5 min and featured eight nurses and two physicians. A total of 4222 HSE were identified (1 HSE every 4.2 s), which concerned bare (79%) and gloved (21%) hands. The HSE inside the patient zone (n = 1775; 42%) included mobile objects (33%), immobile surfaces (5%), and patient intact skin (4%), while HSE outside the patient zone (n = 1953; 46%) included HCW's own body (10%), mobile objects (28%), and immobile surfaces (8%). A further 494 (12%) events involved patient critical sites. Sequential analysis revealed 291 HSE transitions from outside to inside patient zone, i.e. "colonization events", and 217 from any surface to critical sites, i.e. "infection events". Hand hygiene occurred 97 times, 14 (5% adherence) times at colonization events and three (1% adherence) times at infection events. On average, hand rubbing lasted 13 ± 9 s. The abundance of HSE underscores the central role of hands in the spread of potential pathogens while hand hygiene occurred rarely at potential colonization and infection events. Our approach produced a valid video and coding instrument for in-depth analysis of hand trajectories during active patient care that may help to design

  3. Surgical video recording with a modified GoPro Hero 4 camera

    National Research Council Canada - National Science Library

    Lin, Lily Koo

    2016-01-01

    ... This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery...

  4. Real-Time Range Sensing Video Camera for Human/Robot Interfacing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In comparison to stereovision, it is well known that structured-light illumination has distinct advantages including the use of only one camera, being significantly...

  5. Small high-definition video cameras as a tool to resight uniquely marked Interior Least Terns (Sternula antillarum athalassos)

    Science.gov (United States)

    Toy, Dustin L.; Roche, Erin; Dovichin, Colin M.

    2017-01-01

    Many bird species of conservation concern have behavioral or morphological traits that make it difficult for researchers to determine if the birds have been uniquely marked. Those traits can also increase the difficulty for researchers to decipher those markers. As a result, it is a priority for field biologists to develop time- and cost-efficient methods to resight uniquely marked individuals, especially when efforts are spread across multiple States and study areas. The Interior Least Tern (Sternula antillarum athalassos) is one such difficult-to-resight species; its tendency to mob perceived threats, such as observing researchers, makes resighting marked individuals difficult without physical recapture. During 2015, uniquely marked adult Interior Least Terns were resighted and identified by small, inexpensive, high-definition portable video cameras deployed for 29-min periods adjacent to nests. Interior Least Tern individuals were uniquely identified 84% (n = 277) of the time. This method also provided the ability to link individually marked adults to a specific nest, which can aid in generational studies and understanding heritability for difficult-to-resight species. Mark-recapture studies on such species may be prone to sparse encounter data that can result in imprecise or biased demographic estimates and ultimately flawed inferences. High-definition video cameras may prove to be a robust method for generating reliable demographic estimates.

  6. Spatial and temporal scales of shoreline morphodynamics derived from video camera observations for the island of Sylt, German Wadden Sea

    Science.gov (United States)

    Blossier, Brice; Bryan, Karin R.; Daly, Christopher J.; Winter, Christian

    2017-04-01

    Spatial and temporal scales of beach morphodynamics were assessed for the island of Sylt, German Wadden Sea, based on continuous video camera monitoring data from 2011 to 2014 along a 1.3 km stretch of sandy beach. The data served to quantify, at this location, the amount of shoreline variability captured by beach monitoring schemes as a function of the time interval and alongshore resolution of the surveys. Correlation methods, used to quantify the alongshore spatial scales of shoreline undulations, were combined with semi-empirical modelling and spectral analyses of shoreline temporal fluctuations. The data demonstrate that an alongshore resolution of 150 m and a monthly survey interval capture 70% of the kilometre-scale shoreline variability over the 2011-2014 study period. An alongshore spacing of 10 m and a survey interval of 5 days would be required to capture 95% of the variance of the shoreline temporal fluctuations, in steps of 5% changes in variance over space. Although monitoring strategies such as land or airborne surveying are reliable methods of data collection, video camera deployment remains the cheapest technique providing the high spatiotemporal resolution required to monitor subkilometre-scale morphodynamic processes involving, for example, small- to middle-sized beach nourishments.

  7. Installing Snowplow Cameras and Integrating Images into MnDOT's Traveler Information System

    Science.gov (United States)

    2017-10-01

    In 2015 and 2016, the Minnesota Department of Transportation (MnDOT) installed network video dash- and ceiling-mounted cameras on 226 snowplows, approximately one-quarter of MnDOT's total snowplow fleet. The cameras were integrated with the onboard m...

  8. Video laryngoscopy in paediatric anaesthesia in South Africa

    African Journals Online (AJOL)

    2011-01-18

    ... the CMOS active pixel sensor (CMOS APS) video camera, which is mounted on a laryngoscope blade to generate a view of the anatomical structures. Although video laryngoscopes are based on the same technique as direct laryngoscopy, their use requires a different skill set. The VL blade is inserted in ...

  9. A new method to calculate the camera focusing area and player position on playfield in soccer video

    Science.gov (United States)

    Liu, Yang; Huang, Qingming; Ye, Qixiang; Gao, Wen

    2005-07-01

    Sports video enrichment is attracting many researchers. People want to appreciate highlight segments rendered as cartoons. In order to generate such cartoon video automatically, we have to estimate the players' and ball's 3D positions. In this paper, we propose an algorithm to cope with the former problem, i.e. to compute the players' positions on the court. For images with sufficient corresponding points, the algorithm uses these points to calibrate the mapping between the image and the playfield plane (the homography). For images without enough corresponding points, we use global motion estimation (GME) and an already calibrated image to compute the images' homographies. Thus, the problem boils down to estimating global motion. To enhance the performance of global motion estimation, two strategies are exploited. The first is removing moving objects based on adaptive GMM playfield detection, which eliminates the influence of non-static objects; the second is using LKT feature-point tracking to determine horizontal and vertical translation, which keeps the optimization process for GME from being trapped in a local minimum. Thus, if some images of a sequence can be calibrated directly from the intersection points of court lines, all images of the sequence can be calibrated through GME. Once we know the homographies between image and playfield, we can compute the camera focusing area and the players' positions in the real world. We have tested our algorithm on real video and the results are encouraging.
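    The calibration step described above can be illustrated with a short OpenCV sketch: given a handful of image-to-field correspondences (the point values and pitch dimensions below are made up), a homography maps any image point, such as a player's foot, onto the playfield plane.

```python
# Sketch: mapping image points to playfield coordinates via a homography,
# in the spirit of the calibration step described above (hypothetical points).
import cv2
import numpy as np

# Four (or more) correspondences between image pixels and court-plane metres,
# e.g. taken from court-line intersections (values here are made up).
img_pts = np.array([[102, 480], [1180, 466], [905, 120], [300, 128]], dtype=np.float32)
field_pts = np.array([[0, 0], [68, 0], [68, 105], [0, 105]], dtype=np.float32)

H, _ = cv2.findHomography(img_pts, field_pts)

# A player's foot position in the image, mapped onto the field plane.
foot = np.array([[[640.0, 400.0]]], dtype=np.float32)   # shape (1, 1, 2)
field_xy = cv2.perspectiveTransform(foot, H)
print(field_xy)  # approximate (x, y) position on the pitch in metres
```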

  10. Aircraft engine-mounted camera system for long wavelength infrared imaging of in-service thermal barrier coated turbine blades.

    Science.gov (United States)

    Markham, James; Cosgrove, Joseph; Scire, James; Haldeman, Charles; Agoos, Ian

    2014-12-01

    This paper announces the implementation of a long wavelength infrared camera to obtain high-speed thermal images of an aircraft engine's in-service thermal barrier coated turbine blades. Long wavelength thermal images were captured of first-stage blades. The achieved temporal and spatial resolutions allowed for the identification of cooling-hole locations. The software and synchronization components of the system allowed for the selection of any blade on the turbine wheel, with tuning capability to image from leading edge to trailing edge. Its first application delivered calibrated thermal images as a function of turbine rotational speed at both steady state conditions and during engine transients. In advance of presenting these data for the purpose of understanding engine operation, this paper focuses on the components of the system, verification of high-speed synchronized operation, and the integration of the system with the commercial jet engine test bed.

  11. AUTOMATIC TEXTURE MAPPING WITH AN OMNIDIRECTIONAL CAMERA MOUNTED ON A VEHICLE TOWARDS LARGE SCALE 3D CITY MODELS

    Directory of Open Access Journals (Sweden)

    F. Deng

    2012-07-01

    Full Text Available Today, high resolution panoramic images of competitive quality are widely used for rendering in some commercial systems. However, potential applications such as mapping, augmented reality and modelling, which need accurate orientation information, are still poorly studied. Urban models can be quickly obtained from aerial images or LIDAR, though with limited quality or efficiency due to low resolution textures and a manual texture mapping workflow. We combine an Extended Kalman Filter (EKF) with the traditional Structure from Motion (SFM) method, without any prior information, based on a general camera model that can handle various kinds of omnidirectional and other single-perspective image sequences, even with unconnected or weakly connected frames. The orientation results are then applied to map textures from the panoramas onto existing building models obtained from aerial photogrammetry. This turns out to largely improve the quality of the models and the efficiency of the modelling procedure.

  12. Performance of compact ICU (intensified camera unit) with autogating based on video signal

    Science.gov (United States)

    de Groot, Arjan; Linotte, Peter; van Veen, Django; de Witte, Martijn; Laurent, Nicolas; Hiddema, Arend; Lalkens, Fred; van Spijker, Jan

    2007-10-01

    High quality night vision digital video is nowadays required for many observation, surveillance and targeting applications, including several of the current soldier modernization programs. We present the performance increase obtained when combining a state-of-the-art image intensifier with a low power consumption CMOS image sensor. Based on the content of the video signal, the gating and gain of the image intensifier are optimized for the best SNR. Options for interfacing with a separate laser for range-gated imaging applications are also discussed.

  13. Lights, Camera, Action: Facilitating the Design and Production of Effective Instructional Videos

    Science.gov (United States)

    Di Paolo, Terry; Wakefield, Jenny S.; Mills, Leila A.; Baker, Laura

    2017-01-01

    This paper outlines a rudimentary process intended to guide faculty in K-12 and higher education through the steps involved to produce video for their classes. The process comprises four steps: planning, development, delivery and reflection. Each step is infused with instructional design information intended to support the collaboration between…

  14. Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics

    Science.gov (United States)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-01-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allows one to measure, in a simple yet rigorous way, the speed of pulses…

  15. Real-time multi-camera video acquisition and processing platform for ADAS

    Science.gov (United States)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
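    The distortion-correction pre-processing mentioned above corresponds, in software form, to something like OpenCV's fisheye remapping; the sketch below uses invented calibration parameters (an FPGA implementation would realize the same remap in hardware).

```python
# Sketch: correcting fish-eye radial distortion with OpenCV's fisheye model.
# The intrinsics K and distortion coefficients D below are invented; a real
# system would obtain them from camera calibration.
import cv2
import numpy as np

K = np.array([[320.0, 0, 320], [0, 320.0, 240], [0, 0, 1]])   # camera matrix
D = np.array([0.1, -0.05, 0.01, 0.0])                         # k1..k4
size = (640, 480)                                             # VGA sensor

# Precompute the remap tables once, then apply them to every frame.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, size, cv2.CV_16SC2)

frame = np.zeros((480, 640, 3), np.uint8)          # stand-in for a camera frame
undistorted = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
print(undistorted.shape)
```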

  16. Surgical video recording with a modified GoPro Hero 4 camera

    OpenAIRE

    Lin LK

    2016-01-01

    Lily Koo Lin Department of Ophthalmology and Vision Science, University of California, Davis Eye Center, Sacramento, CA, USA Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined if a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Me...

  17. Measurement and processing of signatures in the visible range using a calibrated video camera and the CAMDET software package

    Science.gov (United States)

    Sheffer, Dan

    1997-06-01

    A procedure for calibration of a color video camera has been developed at EORD. The RGB values of standard samples, together with the spectral radiance values of the samples, are used to calculate a transformation matrix between the RGB and CIEXYZ color spaces. The transformation matrix is then used to calculate the XYZ color coordinates of distant objects imaged in the field. These, in turn, are used to calculate the CIELAB color coordinates of the objects. Good agreement between the calculated coordinates and those obtained from spectroradiometric data is achieved. Processing the RGB values of pixels in the digital image of a scene using the CAMDET software package, which was developed at EORD, results in 'Painting Maps' in which the true apparent CIELAB color coordinates are used. The paper discusses the calibration procedure, its advantages and shortcomings, and suggests a definition for the visible signature of objects. The CAMDET software package is described and some examples are given.
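    A minimal numerical sketch of this calibration idea, with synthetic sample data in place of measured chart patches: fit the 3x3 RGB-to-XYZ matrix by least squares, then convert the transformed values to CIELAB. The matrix and white point below are illustrative assumptions, not EORD's values.

```python
# Sketch: fitting an RGB->XYZ transform from standard samples, then CIELAB.
import numpy as np

# Hypothetical training data: camera RGB of standard samples and their
# spectroradiometrically measured CIEXYZ values (synthetic here).
rng = np.random.default_rng(1)
rgb = rng.uniform(0.05, 0.95, size=(24, 3))          # 24 colour-chart patches
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
xyz = rgb @ M_true.T

# Least-squares fit of the 3x3 transformation matrix (the calibration step).
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M = M.T

def xyz_to_lab(xyz, white=np.array([0.9505, 1.0, 1.089])):
    """CIEXYZ -> CIELAB for a D65 white point."""
    t = xyz / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

lab = xyz_to_lab(rgb @ M.T)   # apparent CIELAB coordinates of imaged samples
print(M.round(3), lab[0].round(2))
```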

  18. Using a slit lamp-mounted digital high-speed camera for dynamic observation of phakic lenses during eye movements: a pilot study

    Directory of Open Access Journals (Sweden)

    Leitritz MA

    2014-07-01

    Full Text Available Martin Alexander Leitritz, Focke Ziemssen, Karl Ulrich Bartz-Schmidt, Bogomil Voykov Centre for Ophthalmology, University Eye Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany Purpose: To evaluate a digital high-speed camera combined with digital morphometry software for dynamic measurements of phakic intraocular lens movements to observe kinetic influences, particularly in fast direction changes and at lateral end points. Materials and methods: A high-speed camera taking 300 frames per second observed movements of eight iris-claw intraocular lenses and two angle-supported intraocular lenses. Standardized saccades were performed by the patients to trigger mass inertia with lens position changes. Freeze images with maximum deviation were used for digital software-based morphometry analysis with ImageJ.Results: Two eyes from each of five patients (median age 32 years, range 28–45 years without findings other than refractive errors were included. The high-speed images showed sufficient usability for further morphometric processing. In the primary eye position, the median decentrations downward and in a lateral direction were -0.32 mm (range -0.69 to 0.024 and 0.175 mm (range -0.37 to 0.45, respectively. Despite the small sample size of asymptomatic patients, we found a considerable amount of lens dislocation. The median distance amplitude during eye movements was 0.158 mm (range 0.02–0.84. There was a slight positive corrlation (r=0.39, P<0.001 between the grade of deviation in the primary position and the distance increase triggered by movements.Conclusion: With the use of a slit lamp-mounted high-speed camera system and morphometry software, observation and objective measurements of iris-claw intraocular lenses and angle-supported intraocular lenses movements seem to be possible. Slight decentration in the primary position might be an indicator of increased lens mobility during kinetic stress during eye movements

  19. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    Science.gov (United States)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator, considering multiple application needs and the limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability, which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data, which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and their capabilities were explored in the real environment. The data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring, and even multi-camera correlated edge plasma turbulence measurements of smaller areas, can be done in parallel. The latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and for similar setups on future long-pulse fusion experiments such as ITER are discussed.

  20. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    Science.gov (United States)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) system, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body part, and hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were applied to recognize the extracted hand trajectories. In our previous research, the Condensation-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area are utilized as new image features, analyzed by PCA and stored in the database. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in hand blob area. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. Our experimental results show that better performance is obtained by the PCA-based approach than by the Condensation-based method.
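    To make the PCA-based matching step concrete, here is a toy sketch (synthetic trajectories, not the authors' data or exact pipeline): gesture trajectories are flattened into fixed-length vectors, projected into a low-dimensional PCA space, and recognized by nearest-neighbour lookup.

```python
# Sketch of PCA-based trajectory recognition with synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# 45 gesture classes x 5 people, each trajectory resampled to 50 (x, y) points.
X = rng.normal(size=(225, 100))          # flattened trajectories (stand-ins)
y = np.repeat(np.arange(45), 5)          # gesture-class labels

pca = PCA(n_components=20).fit(X)        # low-dimensional gesture space
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), y)
print(clf.predict(pca.transform(X[:3]))) # recognise new trajectories the same way
```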

  1. A Refrigerated Web Camera for Photogrammetric Video Measurement inside Biomass Boilers and Combustion Analysis

    Directory of Open Access Journals (Sweden)

    Enrique Granada

    2011-01-01

    Full Text Available This paper describes a prototype instrumentation system for photogrammetric measuring of bed and ash layers, as well as for flying particle detection and pursuit, using a single device (a CCD web camera). The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete, real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes.

  2. Using high speed smartphone cameras and video analysis techniques to teach mechanical wave physics

    Science.gov (United States)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-07-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allows one to measure, in a simple yet rigorous way, the speed of pulses along a spring and the period of transverse standing waves generated in the same spring. These experiments can be helpful in addressing several relevant concepts about the physics of mechanical waves and in overcoming some of the typical student misconceptions in this same field.

  3. Social interactions of juvenile brown boobies at sea as observed with animal-borne video cameras.

    Directory of Open Access Journals (Sweden)

    Ken Yoda

    Full Text Available While social interactions play a crucial role in the development of young individuals, those of highly mobile juvenile birds in inaccessible environments are difficult to observe. In this study, we deployed miniaturised video recorders on juvenile brown boobies Sula leucogaster, which had been hand-fed beginning a few days after hatching, to examine how social interactions between tagged juveniles and other birds affected their flight and foraging behaviour. Juveniles flew longer with congeners, especially with adult birds, than solitarily. In addition, approximately 40% of foraging occurred close to aggregations of congeners and other species. Young seabirds voluntarily followed other birds, which may directly enhance their foraging success, improve their foraging and flying skills during their developmental stage, or both.

  4. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  5. Color Helmet Mounted Display System with Real Time Computer Generated and Video Imagery for In-Flight Simulation

    Science.gov (United States)

    Sawyer, Kevin; Jacobsen, Robert; Aiken, Edwin W. (Technical Monitor)

    1995-01-01

    NASA Ames Research Center and the US Army are developing the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) using a Sikorsky UH-60 helicopter for the purpose of flight systems research. A primary use of the RASCAL is in-flight simulation, for which the visual scene will use computer generated imagery and synthetic vision. This research is made possible in part by a full color, wide field-of-view Helmet Mounted Display (HMD) system that provides high performance color imagery suitable for daytime operations in a flight-rated package. This paper describes the design and performance characteristics of the HMD system. Emphasis is placed on the design specifications, testing, and integration into the aircraft of Kaiser Electronics' RASCAL HMD system, which was designed and built under contract for NASA. The optical performance and design of the helmet-mounted display unit will be discussed, as well as the unique capabilities provided by the system's Programmable Display Generator (PDG).

  6. Real-time Video-Streaming to Surgical Loupe Mounted Head-Up Display for Navigated Meningioma Resection.

    Science.gov (United States)

    Diaz, Roberto; Yoon, Jang; Chen, Robert; Quinones-Hinojosa, Alfredo; Wharen, Robert; Komotar, Ricardo

    2017-04-30

    Wearable technology interfaces with normal human movement and function, thereby enabling more efficient and adaptable use. We developed a wearable display system for use with intra-operative neuronavigation for brain tumor surgery. The Google Glass head-up display system was adapted to surgical loupes with a video-streaming integrated hardware and software device for display of the Stealth S7 navigation screen. Phantom trials of surface ventriculostomy were performed. The device was utilized as an alternative display screen during cranial surgery. Image-guided brain tumor resection was accomplished using Google Glass head-up display of Stealth S7 navigation images. The visual display consists of navigation video-streaming over a wireless network. The integrated system developed for video-streaming permits video data display to the operating surgeon without requiring movement of the head away from the operative field. Google Glass head-up display can be used for intra-operative neuronavigation in the setting of intracranial tumor resection.

  7. Comparison of handheld video camera and GAITRite® measurement of gait impairment in people with early stage Parkinson's disease: a pilot study.

    Science.gov (United States)

    Beijer, Tim R; Lord, Stephen R; Brodie, Matthew A D

    2013-01-01

    In this pilot study, we investigated the validity and reliability of low-cost handheld video camera recordings for measuring gait in people with early stage Parkinson's disease (PD). Five participants with PD, Hoehn & Yahr stage I-II, mean age 66.2 years, and five healthy age-matched controls were recruited. Participants walked across a GAITRite® electronic walkway at self-selected pace while video was simultaneously recorded. Data from both systems were analyzed and compared. Step time variability, measured from the handheld video recordings, revealed significant (p ≤ 0.05) differences between the gait of early stage PD participants and controls. Concurrent validity between the video analyses and GAITRite was good (ICC(2,1) ≥ 0.86) for mean step time and mean dual support duration. However, the inter-assessor reliability of the video analysis was poor for step time variability (ICC(2,1) = 0.18). More reliable measurement of step time variability may require a system that measures extended periods of walking. Further research involving longer walks and more participants with higher stages of PD is required to investigate whether step time variability can be measured with acceptable reliability using video recordings. If this could be demonstrated, this simple technology could be adapted to run on a tablet or smart phone, providing low cost gait assessments without the need for specialized equipment and expensive infrastructure.

  8. Development of a Wireless Video Transfer System for Remote Control of a Lightweight UAV

    OpenAIRE

    Tosteberg, Joakim; Axelsson, Thomas

    2012-01-01

    A team of developers from Epsilon AB has developed a lightweight remote controlled quadcopter named Crazyflie. The team wants to allow a pilot to navigate the quadcopter using video from an on-board camera as the only guidance. The master thesis evaluates the feasibility of mounting a camera module on the quadcopter and streaming images from the camera to a computer, using the existing quadcopter radio link. Using theoretical calculations and measurements, a set of requirements that must be fulfill...

  9. Improved Tracking of Targets by Cameras on a Mars Rover

    Science.gov (United States)

    Kim, Won; Ansar, Adnan; Steele, Robert

    2007-01-01

    A paper describes a method devised to increase the robustness and accuracy of tracking of targets by means of three stereoscopic pairs of video cameras on a Mars-rover-type exploratory robotic vehicle. Two of the camera pairs are mounted on a mast that can be adjusted in pan and tilt; the third camera pair is mounted on the main vehicle body. Elements of the method include a mast calibration, a camera-pointing algorithm, and a purely geometric technique for handing off tracking between different camera pairs at critical distances as the rover approaches a target of interest. The mast calibration is an extension of camera calibration in which camera images of calibration targets at known positions are collected at various pan and tilt angles. In the camera-pointing algorithm, pan and tilt angles are computed by a closed-form, non-iterative solution of the inverse kinematics of the mast combined with mathematical models of the cameras. The purely geometric camera-handoff technique involves the use of stereoscopic views of a target of interest in conjunction with the mast calibration.
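    Stripped of the mast kinematics and camera models, the closed-form pointing idea reduces to two arctangents; a simplified sketch, assuming the target position is already expressed in the camera-mast frame (coordinates below are invented):

```python
# Sketch: closed-form pan/tilt pointing for a mast-mounted camera,
# a simplification of the calibrated inverse kinematics described above.
import numpy as np

def point_camera(target_xyz):
    """Return (pan, tilt) in radians that aim the boresight at target_xyz."""
    x, y, z = target_xyz
    pan = np.arctan2(y, x)                    # rotation about the vertical axis
    tilt = np.arctan2(z, np.hypot(x, y))      # elevation above the horizontal
    return pan, tilt

pan, tilt = point_camera((4.0, 1.5, -0.3))    # a target 4 m ahead, slightly low
print(np.degrees([pan, tilt]).round(1))
```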

  10. Visual Acuity and Contrast Sensitivity with compressed motion video

    NARCIS (Netherlands)

    Bijl, P.; Vries, S.C. de

    2009-01-01

    Video of Visual Acuity (VA) and Contrast Sensitivity (CS) test charts in a complex background was recorded using a CCD camera mounted on a computer-controlled tripod and fed into real-time MPEG2 compression/decompression equipment. The test charts were based on the Triangle Orientation

  11. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon, and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  12. Initial evaluation of prospective cardiac triggering using photoplethysmography signals recorded with a video camera compared to pulse oximetry and electrocardiography at 7T MRI.

    Science.gov (United States)

    Spicher, Nicolai; Kukuk, Markus; Maderwald, Stefan; Ladd, Mark E

    2016-11-24

    Accurate synchronization between magnetic resonance imaging data acquisition and a subject's cardiac activity ("triggering") is essential for reducing image artifacts, but conventional, contact-based methods for this task are limited by several factors, including preparation time, patient inconvenience, and susceptibility to signal degradation. The purpose of this work is to evaluate the performance of a new contact-free triggering method developed with the aim of eventually replacing conventional methods in non-cardiac imaging applications. In this study, the method's performance is evaluated in the context of 7 Tesla non-enhanced angiography of the lower extremities. Our main contribution is a basic algorithm capable of estimating, in real time, the phase of the cardiac cycle from reflection photoplethysmography signals obtained from skin color variations of the forehead recorded with a video camera. Instead of finding the algorithm's parameters heuristically, they were optimized using videos of the forehead as well as electrocardiography and pulse oximetry signals that were recorded from eight healthy volunteers in and outside the scanner, with and without active radio frequency and gradient coils. Based on the video characteristics, synthetic signals were generated and the "best available" values of an objective function were determined using mathematical optimization. The performance of the proposed method with optimized algorithm parameters was evaluated by applying it to the recorded videos and comparing the computed triggers to those of contact-based methods. Additionally, the method was evaluated by using its triggers for acquiring images from a healthy volunteer and comparing the result to images obtained using pulse oximetry triggering. During evaluation of the videos recorded inside the bore with active radio frequency and gradient coils, the pulse oximeter triggers were labeled in 62.5% of cases as "potentially usable" for cardiac triggering, the electrocardiography …

  13. Dual camera system for acquisition of high resolution images

    Science.gov (United States)

    Papon, Jeremie A.; Broussard, Randy P.; Ives, Robert W.

    2007-02-01

    Video surveillance is ubiquitous in modern society, but surveillance cameras are severely limited in utility by their low resolution. With this in mind, we have developed a system that can autonomously take high resolution still frame images of moving objects. In order to do this, we combine a low resolution video camera and a high resolution still frame camera mounted on a pan/tilt mount. In order to determine what should be photographed (objects of interest), we employ a hierarchical method which first separates foreground from background using a temporal-based median filtering technique. We then use a feed-forward neural network classifier on the foreground regions to determine whether the regions contain the objects of interest. This is done over several frames, and a motion vector is deduced for the object. The pan/tilt mount then focuses the high resolution camera on the next predicted location of the object, and an image is acquired. All components are controlled through a single MATLAB graphical user interface (GUI). The final system we present will be able to detect multiple moving objects simultaneously, track them, and acquire high resolution images of them. Results will demonstrate performance tracking and imaging varying numbers of objects moving at different speeds.
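    The foreground-separation stage described above is straightforward to sketch with OpenCV; the frames below are synthetic grayscale images and the threshold value is an assumption, not the paper's setting:

```python
# Sketch of temporal-median foreground separation on a short frame buffer.
import cv2
import numpy as np

def foreground_mask(frames, threshold=25):
    """Median of recent frames as background; threshold the difference."""
    stack = np.stack(frames).astype(np.uint8)
    background = np.median(stack, axis=0).astype(np.uint8)
    diff = cv2.absdiff(stack[-1], background)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return mask  # candidate moving-object regions for the classifier stage

# Example with synthetic frames: a bright square moving across a dark scene.
frames = []
for i in range(9):
    f = np.zeros((120, 160), np.uint8)
    f[40:60, 10 * i:10 * i + 20] = 200
    frames.append(f)
print(foreground_mask(frames).sum() // 255, "foreground pixels")
```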

  14. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Science.gov (United States)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

    Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video data of exposed human skin to extract the heart-rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart-rate, respiratory rate, and even heart-rate variability, with potential applications ranging from infant monitors and remote healthcare to psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart-rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size, and averaging techniques applied to regions-of-interest, as well as the number of video frames used for data processing.
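    A bare-bones sketch of the VPPG principle, assuming `roi_frames` holds RGB frames of a stable skin region at a known frame rate (no face tracking, no motion compensation; not either of the algorithms evaluated in the paper):

```python
# Sketch: heart rate from the dominant frequency of the mean green channel.
import numpy as np

def estimate_hr(roi_frames, fps):
    """Return HR in bpm from an RGB region-of-interest frame sequence."""
    g = np.array([f[..., 1].mean() for f in roi_frames])  # green carries most PPG signal
    g = g - g.mean()
    freqs = np.fft.rfftfreq(len(g), d=1.0 / fps)
    power = np.abs(np.fft.rfft(g)) ** 2
    band = (freqs > 0.7) & (freqs < 4.0)                  # ~42-240 bpm physiological band
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic check: a 72 bpm (1.2 Hz) oscillation sampled at 30 fps for 20 s.
t = np.arange(0, 20, 1 / 30)
frames = [np.full((8, 8, 3), 128.0) + np.sin(2 * np.pi * 1.2 * ti) for ti in t]
print(round(estimate_hr(frames, fps=30)))  # -> 72
```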

  15. What does video-camera framing say during the news? A look at contemporary forms of visual journalism

    Directory of Open Access Journals (Sweden)

    Juliana Freire Gutmann

    2012-12-01

    Full Text Available In order to contribute to the discussion about audiovisual processing of journalistic information, this article examines connections between the uses of video framing on the television news stage, contemporary senses, public interest and the distinction values of journalism, addressed here through the perspective of the concepts of conversation and participation. The article identifies recurring video framing techniques used by 15 Brazilian television newscasts, accounting for contemporary forms of audiovisual telejournalism, responsible for new types of spatial-temporal configurations. From a methodological perspective, this article seeks to contribute to the study of the television genre by understanding the uses of these audiovisual techniques as a strategy for newscast communicability.

  16. WHAT DOES VIDEO-CAMERA FRAMING SAY DURING THE NEWS? A LOOK AT CONTEMPORARY FORMS OF VISUAL JOURNALISM

    Directory of Open Access Journals (Sweden)

    Juliana Freire Gutmann

    2013-06-01

    Full Text Available In order to contribute to the discussion about audiovisual processing of journalistic information, this article examines connections between the uses of video framing on the television news stage, contemporary senses, public interest and the distinction values of journalism, addressed here through the perspective of the concepts of conversation and participation. The article identifies recurring video framing techniques used by 15 Brazilian television newscasts, accounting for contemporary forms of audiovisual telejournalism, responsible for new types of spatial-temporal configurations. From a methodological perspective, this article seeks to contribute to the study of the television genre by understanding the uses of these audiovisual techniques as a strategy for newscast communicability.

  17. Networked telepresence system using web browsers and omni-directional video streams

    Science.gov (United States)

    Ishikawa, Tomoya; Yamazawa, Kazumasa; Sato, Tomokazu; Ikeda, Sei; Nakamura, Yutaka; Fujikawa, Kazutoshi; Sunahara, Hideki; Yokoya, Naokazu

    2005-03-01

    In this paper, we describe a new telepresence system which enables a user to look around a virtualized real world easily in network environments. The proposed system includes omni-directional video viewers running in web browsers and allows the user to look around omni-directional video content in the browser. The omni-directional video viewer is implemented as an ActiveX program, so the viewer is installed automatically when the user opens a web site containing the omni-directional video content. The system allows many users at different sites to look around the scene, just like an interactive TV, using a multicast protocol without increasing the network traffic. This paper describes the implemented system and experiments using live and stored video streams. In the experiment with stored video streams, the system uses an omni-directional multi-camera system for video capture, and we can look around high-resolution, high-quality video content. In the experiment with live video streams, a car-mounted omni-directional camera acquires omni-directional video of the car's surroundings while running in an outdoor environment. The acquired video streams are transferred to the remote site through wireless and wired networks using the multicast protocol, and the live video can be viewed freely in an arbitrary direction. In both experiments, we implemented a view-dependent presentation with a head-mounted display (HMD) and a gyro sensor to achieve a richer sense of presence.
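    For readers unfamiliar with the multicast distribution mentioned above, here is a minimal receiver sketch: every viewer that joins the group receives the same datagrams, so adding viewers does not add sender traffic. The group address and port are invented, and the payload handling is left abstract.

```python
# Sketch: joining an IPv4 multicast group to receive a shared video stream.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5004   # hypothetical multicast address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
# Tell the kernel to join the group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:                               # runs until interrupted
    packet, addr = sock.recvfrom(65536)   # one video packet per datagram
    # ...hand the payload to the omni-directional video viewer...
```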

  18. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    Science.gov (United States)

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish the intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles, with and without reflective markers, using the GAITRite walkway and a single video camera, between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). The reliability of a single camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.
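    The digitization step reduces to measuring an included angle between limb segments in a single frame; a minimal sketch (the landmark pixel coordinates below are invented):

```python
# Sketch: a sagittal knee angle from three digitised landmarks in one frame.
import numpy as np

def joint_angle(proximal, joint, distal):
    """Included angle at `joint`, in degrees, from 2D pixel coordinates."""
    u = np.asarray(proximal, float) - np.asarray(joint, float)
    v = np.asarray(distal, float) - np.asarray(joint, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hip, knee, and ankle landmarks digitised from a mid-stance video frame.
print(round(joint_angle((310, 220), (330, 400), (320, 560)), 1))
```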

  19. Potential of video cameras in assessing event and seasonal coastline behaviour: Grand Popo, Benin (Gulf of Guinea)

    NARCIS (Netherlands)

    Abessolo Ondoa, G.; Almar, R.; Kestenare, E.; Bahini, A.; Houngue, G-H.; Jouanno, J; Du Penhoat, Y.; Castelle, B.; Melet, A.; Meyssignac, B.; Anthony, E.J.; Laibi, R.; Alory, G.; Ranasinghe, Ranasinghe W M R J B

    2016-01-01

    In this study, we explore the potential of a nearshore video system to obtain a long-term estimation of coastal variables (shoreline, beach slope, sea level elevation and wave forcing) at Grand Popo beach, Benin, West Africa, from March 2013 to February 2015. We first present a validation of the

  20. The Effect of Smartphone Video Camera as a Tool to Create Digital Stories for English Learning Purposes

    Science.gov (United States)

    Gromik, Nicolas A.

    2015-01-01

    The integration of smartphones in the language learning environment is gaining research interest. However, using a smartphone to learn to speak spontaneously has received little attention. The emergence of smartphone technology and its video recording feature is recognised as providing suitable learning tools. This paper reports on a case study conducted…

  1. What Does the Camera Communicate? An Inquiry into the Politics and Possibilities of Video Research on Learning

    Science.gov (United States)

    Vossoughi, Shirin; Escudé, Meg

    2016-01-01

    This piece explores the politics and possibilities of video research on learning in educational settings. The authors (a research-practice team) argue that changing the stance of inquiry from "surveillance" to "relationship" is an ongoing and contingent practice that involves pedagogical, political, and ethical choices on the…

  2. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  3. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  4. Estimation of skeletal movement of human locomotion from body surface shapes using dynamic spatial video camera (DSVC) and 4D human model.

    Science.gov (United States)

    Saito, Toshikuni; Suzuki, Naoki; Hattori, Asaki; Suzuki, Shigeyuki; Hayashibe, Mitsuhiro; Otake, Yoshito

    2006-01-01

    We have been developing a DSVC (Dynamic Spatial Video Camera) system to measure and observe human locomotion quantitatively and freely. A 4D (four-dimensional) human model with detailed skeletal structure, joints, muscles, and motor functionality has been built. The purpose of our research was to estimate skeletal movements from body surface shapes using the DSVC and the 4D human model. For this purpose, we constructed a body surface model of a subject and resized the standard 4D human model to match the geometrical features of the subject's body surface model. Software that integrates the DSVC system and the 4D human model and allows dynamic skeletal state analysis from body surface movement data was also developed. We applied the developed system to dynamic skeletal state analysis of a lower limb in motion and were able to visualize the motion using the geometrically resized standard 4D human model.

  5. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.
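    As a rough, hedged sketch of the general approach above (a CNN as a feature extractor for a two-class classifier), not the authors' specific network or training setup: an off-the-shelf ResNet backbone produces features for each modality, and a small fused classifier outputs male/female scores. The tensors below are random stand-ins for real visible-light and thermal body images.

```python
# Sketch: CNN feature extraction for two-modality gender classification.
import torch
import torchvision.models as models

backbone = models.resnet18(weights=None)  # in practice, pretrained weights
backbone.fc = torch.nn.Identity()         # keep the 512-d penultimate features
backbone.eval()

with torch.no_grad():
    visible = torch.rand(4, 3, 224, 224)  # visible-light body images (fake)
    thermal = torch.rand(4, 3, 224, 224)  # thermal images, 3-channel (fake)
    feats = torch.cat([backbone(visible), backbone(thermal)], dim=1)

fusion = torch.nn.Linear(1024, 2)         # fuse both modalities -> 2 classes
print(fusion(feats).softmax(dim=1))       # per-image male/female probabilities
```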

  6. A video rate laser scanning confocal microscope

    Science.gov (United States)

    Ma, Hongzhou; Jiang, James; Ren, Hongwu; Cable, Alex E.

    2008-02-01

    A video-rate laser scanning microscope was developed as an imaging engine to integrate with other photonic building blocks to fulfill various microscopic imaging applications. The system is equipped with a diode laser source, a resonant scanner, a galvo scanner, control electronics, and a computer loaded with data acquisition boards and imaging software. Based on an open frame design, the system can be combined with various optics to perform fluorescence confocal microscopy, multi-photon microscopy, and backscattering confocal microscopy. Mounted to the camera port, it allows a traditional microscope to obtain confocal images at video rate. In this paper, we describe the design principles and demonstrate examples of applications.

  7. Visual acuity, contrast sensitivity, and range performance with compressed motion video

    NARCIS (Netherlands)

    Bijl, P.; Vries, S.C. de

    2010-01-01

    Video of visual acuity (VA) and contrast sensitivity (CS) test charts in a complex background was recorded using a CCD color camera mounted on a computer-controlled tripod and was fed into real-time MPEG-2 compression/decompression equipment. The test charts were based on the triangle orientation

  8. Video Head Impulse Tests with a Remote Camera System: Normative Values of Semicircular Canal Vestibulo-Ocular Reflex Gain in Infants and Children

    Directory of Open Access Journals (Sweden)

    Sylvette R. Wiener-Vacher

    2017-09-01

    Full Text Available The video head impulse test (VHIT) is widely used to identify semicircular canal function impairments in adults. But classical VHIT testing systems attach goggles tightly to the head, which is not tolerated by infants. Remote video detection of head and eye movements resolves this issue and, here, we report VHIT protocols and normative values for children. Vestibulo-ocular reflex (VOR) gain was measured for all canals of 303 healthy subjects, including 274 children (aged 2.6 months–15 years) and 26 adults (aged 16–67). We used the Synapsys® (Marseilles, France) VHIT Ulmer system, whose remote camera measures head and eye movements. HITs were performed at high velocities. Testing typically lasts 5–10 min. In infants as young as 3 months old, VHIT yielded good inter-measure replicability. VOR gain increases rapidly until about the age of 6 years (with variation among canals), then progresses more slowly to reach adult values by the age of 16. Values are more variable among very young children and for the vertical canals, but showed no difference for right versus left head rotations. Normative values of VOR gain are presented to help detect vestibular impairment in patients. VHIT testing prior to cochlear implants could help prevent total vestibular loss and the resulting grave impairments of motor and cognitive development in patients with residual unilateral vestibular function.

  9. Using a slit lamp-mounted digital high-speed camera for dynamic observation of phakic lenses during eye movements: a pilot study.

    Science.gov (United States)

    Leitritz, Martin Alexander; Ziemssen, Focke; Bartz-Schmidt, Karl Ulrich; Voykov, Bogomil

    2014-01-01

    To evaluate a digital high-speed camera combined with digital morphometry software for dynamic measurements of phakic intraocular lens movements, to observe kinetic influences, particularly in fast direction changes and at lateral end points. A high-speed camera taking 300 frames per second observed the movements of eight iris-claw intraocular lenses and two angle-supported intraocular lenses. Standardized saccades were performed by the patients to trigger mass inertia with lens position changes. Freeze images with maximum deviation were used for digital software-based morphometry analysis with ImageJ. Two eyes from each of five patients (median age 32 years, range 28-45 years) without findings other than refractive errors were included. The high-speed images showed sufficient usability for further morphometric processing. In the primary eye position, the median decentrations downward and in a lateral direction were -0.32 mm (range -0.69 to 0.024) and 0.175 mm (range -0.37 to 0.45), respectively. Despite the small sample size of asymptomatic patients, we found a considerable amount of lens dislocation. The median distance amplitude during eye movements was 0.158 mm (range 0.02-0.84). There was a slight positive correlation (r=0.39, P<0.001) between the grade of deviation in the primary position and the distance increase triggered by movements. With the use of a slit lamp-mounted high-speed camera system and morphometry software, observation and objective measurement of iris-claw intraocular lens and angle-supported intraocular lens movements seem possible. Slight decentration in the primary position might be an indicator of increased lens mobility under kinetic stress during eye movements. Long-term assessment by high-speed analysis with higher case numbers has to clarify the relationship between progressing motility and endothelial cell damage.

  10. Parallax error in the monocular head-mounted eye trackers

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Witzner Hansen, Dan

    2012-01-01

    This paper investigates the parallax error, which is a common problem of many video-based monocular mobile gaze trackers. The parallax error is defined and described using the epipolar geometry of a stereo camera setup. The main parameters that change the error are introduced, and it is shown how each parameter affects the error. The optimum distribution of the error (magnitude and direction) in the field of view varies for different applications. However, the results can be used for finding the optimum parameters needed for designing a head-mounted gaze tracker. It has been shown…

  11. Body worn camera

    Science.gov (United States)

    Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.

    2017-11-01

    A body worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence from crime scenes. It helps prevent and resolve complaints brought by members of the public, and strengthens police transparency, performance, and accountability. The main characteristics of this type of system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format at 1080p resolution and 30 frames per second. Another important aspect in designing this system is the amount of power it requires, as battery management becomes very critical. The main design challenges are the size of the video; audio for the video; combining audio and video and saving them in .mp4 format; a battery size sufficient for 8 hours of continuous recording; and security. For prototyping, this system is implemented using a Raspberry Pi Model B.
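    A minimal sketch of the video-capture core on such a prototype, assuming the `picamera` library on the Raspberry Pi (the filename and duration are invented; `picamera` records raw H.264, so packaging into .mp4 and audio capture would be handled by separate tools, e.g. a USB microphone and a muxer such as MP4Box):

```python
# Sketch: 1080p/30 fps H.264 recording on a Raspberry Pi camera module.
import picamera

with picamera.PiCamera(resolution=(1920, 1080), framerate=30) as camera:
    camera.start_recording("evidence_0001.h264")   # raw H.264 stream
    camera.wait_recording(60 * 60)   # one hour; loop/rotate files for a shift
    camera.stop_recording()
```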

  12. The Use of Smart Glasses for Surgical Video Streaming.

    Science.gov (United States)

    Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu

    2017-04-01

    Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.

  13. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

    The process of digitally capturing, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more reasonable to purchase, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex: outside video of the procedure is recorded on digital tape with a digital video camera, and the camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive, and this finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques for backup and archiving of the completed projects and files are also outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  14. Resolution of slit-lamp microscopy photography using various cameras.

    Science.gov (United States)

    Ye, Yufeng; Jiang, Hong; Zhang, Huicheng; Karp, Carol L; Zhong, Jianguang; Tao, Aizhu; Shao, Yilei; Wang, Jianhua

    2013-05-01

    To evaluate the resolutions of slit-lamp microscopy photography using various cameras. Evaluation of diagnostic test or technology. Healthy subjects were imaged with the adapted cameras through slit-lamp microscopy. A total of 8 cameras, including 6 custom-mounted slit-lamp cameras and 2 commercial slit-lamp cameras, were tested with standard slit-lamp microscopy devices for imaging of the eye. Various magnifications were used during imaging. A standard resolution test plate was used to test the resolutions at different magnifications, and these outcomes were compared with the commercial slit-lamp cameras. The main measurements included the display spatial resolutions, image spatial resolutions, and ocular resolutions. The outcomes also included the relationships between resolution and the pixel density of the displays and images. All cameras were successfully adapted to the slit-lamp microscopy, and high-quality ocular images were obtained. Differences in the display spatial resolutions were found among cameras [analysis of variance (ANOVA), P …] … cameras using the high-definition multimedia interface (HDMI) compared with others, including cameras in smart phones. The display resolutions of smart phone displays were greater than those of cameras with video graphics array displays. The display spatial resolutions were found to be a function of display pixel density (r > 0.95, P …) … (r > 0.85, P …) … cameras (ANOVA, P …) … (r > 0.98, P …) … (r > 0.85, P …) … camera yielded the highest image spatial resolution. However, the ocular resolution through binocular viewing of the slit-lamp microscopy was found to be the highest compared with the display and image spatial resolutions of all of the cameras. Several cameras can be adapted to slit-lamp microscopy for ophthalmic imaging, yielding various display and image spatial resolutions. However, the resolution appeared not to be as good as ocular viewing through the slit-lamp biomicroscope.

  15. Visual odometry from omnidirectional camera

    OpenAIRE

    Jiří DIVIŠ

    2012-01-01

    We present a system that estimates the motion of a robot relying solely on images from an onboard omnidirectional camera (visual odometry). Compared to other visual odometry hardware, ours is unusual in utilizing a high resolution, low frame-rate (1 to 3 Hz) omnidirectional camera mounted on a robot that is propelled using continuous tracks. We focus on high precision estimates in scenes where objects are far away from the camera. This is achieved by utilizing an omnidirectional camera that is able ...

  16. Student-Built Underwater Video and Data Capturing Device

    Science.gov (United States)

    Whitt, F.

    2016-12-01

    The Stockbridge High School Robotics Team's invention is a low-cost underwater video and data capturing device. The system is capable of shooting time-lapse photography and/or video continuously for up to 3 days at a time. It can be used in remote locations without having to change batteries or add external hard drives for data storage. The video capturing device has a unique base and mounting system which houses a Pi Drive and a programmable Raspberry Pi with a camera module. The system is powered by two 12-volt batteries, which makes it easier for users to recharge after use. Our data capturing device has the same unique base and mounting system as the underwater camera. The data capturing device consists of an Arduino with an SD-card shield that is capable of collecting continuous temperature and pH readings underwater. This data is then logged onto the SD card for easy access and recording. The low-cost underwater video and data capturing device can reach depths of up to 100 meters while recording 36 hours of video on 1 terabyte of storage. It also features night-vision infrared light capabilities. The cost to build our invention is $500. The goal was to provide a device that can easily be accessed by marine biologists, teachers, researchers and citizen scientists to capture photographic and water-quality data in marine environments over extended periods of time.
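    For a sense of scale, the capture loop that such a Raspberry Pi rig might run can be sketched in a few lines. This is a minimal sketch assuming the legacy picamera library; the output path, resolution, and one-minute interval are illustrative, not taken from the record.

    ```python
    # Minimal time-lapse loop for a Raspberry Pi camera module.
    # Assumes the legacy "picamera" library; paths and interval are illustrative.
    from time import sleep
    from picamera import PiCamera

    camera = PiCamera(resolution=(1920, 1080))
    sleep(2)  # let exposure and white balance settle

    # capture_continuous yields one numbered frame per iteration
    for filename in camera.capture_continuous('/mnt/pidrive/img{counter:05d}.jpg'):
        sleep(60)  # one frame per minute; tune against the 3-day power budget
    ```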

  17. Traffic camera system development

    Science.gov (United States)

    Hori, Toshi

    1997-04-01

    The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high-speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive-scan interline-transfer CCD camera, with its high-speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of light. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization is implemented to control cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec to capture highway traffic both day and night. Consequently, camera gain, pedestal level, shutter speed and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully, to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, storms, etc. These camera systems are being deployed successfully in major ETC projects throughout the world.
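    The look-up-table control described in this record is straightforward to picture in code. Below is a minimal sketch assuming a photometric light-sensor reading in lux; every threshold and parameter value is an illustrative assumption, not from the source.

    ```python
    # Hypothetical look-up table mapping ambient light to camera parameters
    # (gain, pedestal level, shutter speed, gamma). Values are illustrative.
    LIGHT_LUT = [
        # (max_lux, gain_dB, pedestal, shutter_s, gamma)
        (50,           18, 40, 1 / 2000,  0.45),  # night
        (1_000,        12, 32, 1 / 4000,  0.45),  # twilight, storms
        (20_000,        6, 24, 1 / 8000,  0.60),  # overcast day
        (float("inf"),  0, 16, 1 / 16000, 0.60),  # full sun
    ]

    def camera_settings(lux: float) -> dict:
        """Return the parameter set of the first LUT row covering `lux`."""
        for max_lux, gain, pedestal, shutter, gamma in LIGHT_LUT:
            if lux <= max_lux:
                return {"gain": gain, "pedestal": pedestal,
                        "shutter": shutter, "gamma": gamma}
    ```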

  18. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    OpenAIRE

    Steven Nicholas Graves, MA; Deana Saleh Shenaq, MD; Alexander J. Langerman, MD; David H. Song, MD, MBA, FACS

    2015-01-01

    Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons’ point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon’s perspective using the GoPro App. The camera was used ...

  19. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    The "atmosphere-space interactions monitor" (ASIM) is a payload to be mounted on one of the external platforms of the Columbus module of the International Space Station (ISS). The instruments include six video cameras, six photometers and one X-ray detector. The main scientific objective ... of the mission is to study transient luminous events (TLE) above severe thunderstorms: the sprites, jets and elves. Other atmospheric phenomena are also studied, including aurora, gravity waves and meteors. As part of the ASIM Phase B study, on-board processing of data from the cameras is being developed ...

  20. Developing participatory research in radiology: the use of a graffiti wall, cameras and a video box in a Scottish radiology department.

    Science.gov (United States)

    Mathers, Sandra A; Anderson, Helen; McDonald, Sheila; Chesson, Rosemary A

    2010-03-01

    Participatory research is increasingly advocated for use in health and health services research and has been defined as a 'process of producing new knowledge by systematic enquiry, with the collaboration of those being studied'. The underlying philosophy of participatory research is that those recruited to studies are acknowledged as experts who are 'empowered to truly participate and have their voices heard'. Research methods should enable children to express themselves. This has led to the development of creative approaches of working with children that offer alternatives to, for instance, the structured questioning of children by researchers either through questionnaires or interviews. To examine the feasibility and potential of developing participatory methods in imaging research. We employed three innovative methods of data collection sequentially, namely the provision of: 1) a graffiti wall; 2) cameras, and 3) a video box for children's use. While the graffiti wall was open to all who attended the department, for the other two methods children were allocated to each 'arm' consecutively until our target of 20 children for each was met. The study demonstrated that it was feasible to use all three methods of data collection within the context of a busy radiology department. We encountered no complaints from staff, patients or parents. Children were willing to participate but we did not collect data to establish if they enjoyed the activities, were pleased to have the opportunity to make comments or whether anxieties about their treatment inhibited their participation. The data yield was disappointing. In particular, children's contributions to the graffiti wall were limited, but did reflect the nature of graffiti, and there may have been some 'copycat' comments. Although data analysis was relatively straightforward, given the nature of the data (short comments and simple drawings), the process proved to be extremely time-consuming. This was despite the modest

  1. Multi-spectral camera development

    CSIR Research Space (South Africa)

    Holloway, M

    2012-10-01

    - 6 spectral bands plus laser range finder
    - High Definition (HD) video format
    - Synchronised image capture
    - Configurable mounts (positioner and laboratory)
    - Radiometric and geometric calibration
    - Fiber optic data transmission
    Proposed system...

  2. Hemispherical Laue camera

    Science.gov (United States)

    Li, James C. M.; Chu, Sungnee G.

    1980-01-01

    A hemispherical Laue camera comprises a crystal sample mount for positioning a sample to be analyzed at the center of sphere of a hemispherical, X-radiation sensitive film cassette, a collimator, a stationary or rotating sample mount and a set of standard spherical projection spheres. X-radiation generated from an external source is directed through the collimator to impinge onto the single crystal sample on the stationary mount. The diffracted beam is recorded on the hemispherical X-radiation sensitive film mounted inside the hemispherical film cassette in either transmission or back-reflection geometry. The distances travelled by X-radiation diffracted from the crystal to the hemispherical film are the same for all crystal planes which satisfy Bragg's Law. The recorded diffraction spots or Laue spots on the film thereby preserve both the symmetry information of the crystal structure and the relative intensities which are directly related to the relative structure factors of the crystal orientations. The diffraction pattern on the exposed film is compared with the known diffraction pattern on one of the standard spherical projection spheres for a specific crystal structure to determine the orientation of the crystal sample. By replacing the stationary sample support with a rotating sample mount, the hemispherical Laue camera can be used for crystal structure determination in a manner previously provided in conventional Debye-Scherrer cameras.

  3. Upgrades to NDSF Vehicle Camera Systems and Development of a Prototype System for Migrating and Archiving Video Data in the National Deep Submergence Facility Archives at WHOI

    Science.gov (United States)

    Fornari, D.; Howland, J.; Lerner, S.; Gegg, S.; Walden, B.; Bowen, A.; Lamont, M.; Kelley, D.

    2003-12-01

    In recent years, considerable effort has been made to improve the visual recording capabilities of Alvin and ROV Jason. This has culminated in the routine use of digital cameras, both internal and external, on these vehicles, which has greatly expanded the scientific recording capabilities of the NDSF. The UNOLS National Deep Submergence Facility (NDSF) archives maintained at Woods Hole Oceanographic Institution (WHOI) are the repository for the diverse suite of photographic still images (both 35mm and recently digital), video imagery, vehicle data and navigation, and near-bottom side-looking sonar data obtained by the facility vehicles. These data comprise a unique set of information from a wide range of seafloor environments over the more than 25 years of NDSF operations in support of science. Included in the holdings are Alvin data plus data from the tethered vehicles- ROV Jason, Argo II, and the DSL-120 side scan sonar. This information conservatively represents an outlay in facilities and science costs well in excess of $100 million. Several archive related improvement issues have become evident over the past few years. The most critical are: 1. migration and better access to the 35mm Alvin and Jason still images through digitization and proper cataloging with relevant meta-data, 2. assessing Alvin data logger data, migrating data on older media no longer in common use, and properly labeling and evaluating vehicle attitude and navigation data, 3. migrating older Alvin and Jason video data, especially data recorded on Hi-8 tape that is very susceptible to degradation on each replay, to newer digital format media such as DVD, 4. improving the capabilities of the NDSF archives to better serve the increasingly complex needs of the oceanographic community, including researchers involved in focused programs like Ridge2000 and MARGINS, where viable distributed databases in various disciplinary topics will form an important component of the data management structure

  4. INCREMENTAL REAL-TIME BUNDLE ADJUSTMENT FOR MULTI-CAMERA SYSTEMS WITH POINTS AT INFINITY

    Directory of Open Access Journals (Sweden)

    J. Schneider

    2013-08-01

    This paper presents a concept and first experiments on a keyframe-based incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. In order to avoid periodic batch steps, we use the software iSAM2 for sparse nonlinear incremental optimization, which is highly efficient through incremental variable reordering and fluid relinearization. We adapted the software to allow for (1) multi-view cameras, by taking the rigid transformation between the cameras into account, (2) omnidirectional cameras, as it can handle arbitrary bundles of rays, and (3) scene points at infinity, which improve the estimation of the camera orientation as points at the horizon can be observed over long periods of time. The real-time bundle adjustment refers to sets of keyframes, consisting of frames, one per camera, taken in a synchronized way, that are initiated if a minimal geometric distance to the last keyframe set is exceeded. It uses interest points in the keyframes as observations, which are tracked in the synchronized video streams of the individual cameras and matched across the cameras, if possible. First experiments show the potential of the incremental bundle adjustment w.r.t. time requirements. Our experiments are based on a multi-camera system with four fisheye cameras, which are mounted on a UAV as two stereo pairs, one looking ahead and one looking backwards, providing a large field of view.
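    The keyframe gating rule in this record — initiate a new synchronized keyframe set only once a minimal geometric distance to the last set is exceeded — can be sketched in a few lines; the 0.5 m threshold below is an illustrative assumption.

    ```python
    import numpy as np

    KEYFRAME_DIST = 0.5  # metres; illustrative threshold

    class KeyframeGate:
        """Trigger a new keyframe set (one synchronized frame per camera)
        once the rig has moved KEYFRAME_DIST from the last keyframe."""

        def __init__(self):
            self.last_position = None

        def should_add(self, position: np.ndarray) -> bool:
            if (self.last_position is None
                    or np.linalg.norm(position - self.last_position) >= KEYFRAME_DIST):
                self.last_position = position.copy()
                return True
            return False
    ```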

  5. Developing participatory research in radiology: the use of a graffiti wall, cameras and a video box in a Scottish radiology department

    Energy Technology Data Exchange (ETDEWEB)

    Mathers, Sandra A. [Aberdeen Royal Infirmary, Department of Radiology, Aberdeen (United Kingdom); The Robert Gordon University, Faculty of Health and Social Care, Aberdeen (United Kingdom); Anderson, Helen [Royal Aberdeen Children's Hospital, Department of Radiology, Aberdeen (United Kingdom); McDonald, Sheila [Royal Aberdeen Children's Hospital, Aberdeen (United Kingdom); Chesson, Rosemary A. [University of Aberdeen, School of Medicine and Dentistry, Aberdeen (United Kingdom)

    2010-03-15

    Participatory research is increasingly advocated for use in health and health services research and has been defined as a 'process of producing new knowledge by systematic enquiry, with the collaboration of those being studied'. The underlying philosophy of participatory research is that those recruited to studies are acknowledged as experts who are 'empowered to truly participate and have their voices heard'. Research methods should enable children to express themselves. This has led to the development of creative approaches of working with children that offer alternatives to, for instance, the structured questioning of children by researchers either through questionnaires or interviews. To examine the feasibility and potential of developing participatory methods in imaging research. We employed three innovative methods of data collection sequentially, namely the provision of: 1) a graffiti wall; 2) cameras, and 3) a video box for children's use. While the graffiti wall was open to all who attended the department, for the other two methods children were allocated to each 'arm' consecutively until our target of 20 children for each was met. The study demonstrated that it was feasible to use all three methods of data collection within the context of a busy radiology department. We encountered no complaints from staff, patients or parents. Children were willing to participate but we did not collect data to establish if they enjoyed the activities, were pleased to have the opportunity to make comments or whether anxieties about their treatment inhibited their participation. The data yield was disappointing. In particular, children's contributions to the graffiti wall were limited, but did reflect the nature of graffiti, and there may have been some 'copycat' comments. Although data analysis was relatively straightforward, given the nature of the data (short comments and simple drawings), the process proved to be

  6. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    Science.gov (United States)

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote, and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  7. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edi

  8. Tracing Sequential Video Production

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Khalid, Md. Saifuddin

    2015-01-01

    With an interest in learning set in collaborative situations, this data session presents excerpts from video data produced by two of fifteen students in a 5th-semester techno-anthropology course. Students used video cameras to capture the time they spent working with a scientist ... video, the nature of the interactional space, and material and spatial semiotics.

  9. Make a Pinhole Camera

    Science.gov (United States)

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  10. Mars Science Laboratory Engineering Cameras

    Science.gov (United States)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.
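    As a quick plausibility check on the quoted pixel scales, dividing each field of view by the 1024-pixel detector width nearly reproduces the reported figures; the residual gap for the Navcams is consistent with pixel scale varying across a wide-angle lens, though that explanation is our assumption.

    ```python
    import math

    # FOV (degrees) and detector width (pixels) taken from the record above.
    for name, fov_deg, pixels in [("Navcam", 45, 1024), ("Hazcam", 124, 1024)]:
        scale_mrad = math.radians(fov_deg) / pixels * 1000
        print(f"{name}: {scale_mrad:.2f} mrad/pixel")
    # Navcam: 0.77 mrad/pixel (record: 0.82)
    # Hazcam: 2.11 mrad/pixel (record: 2.1)
    ```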

  11. Fazendo 3d com uma camera so

    CERN Document Server

    Lunazzi, J J

    2010-01-01

    A simple system to make stereo photographs or videos, based on just two mirrors that split the image field, was created in 1989 and recently adapted to a digital camera setup.

  12. Person re-identification across aerial and ground-based cameras by deep feature fusion

    Science.gov (United States)

    Schumann, Arne; Metzler, Jürgen

    2017-05-01

    Person re-identification is the task of correctly matching visual appearances of the same person in image or video data while distinguishing appearances of different persons. The traditional setup for re-identification is a network of fixed cameras. However, in recent years mobile aerial cameras mounted on unmanned aerial vehicles (UAV) have become increasingly useful for security and surveillance tasks. Aerial data has many characteristics different from typical camera network data. Thus, re-identification approaches designed for a camera network scenario can be expected to suffer a drop in accuracy when applied to aerial data. In this work, we investigate the suitability of features, which were shown to give robust results for re-identification in camera networks, for the task of re-identifying persons between a camera network and a mobile aerial camera. Specifically, we apply hand-crafted region covariance features and features extracted by convolutional neural networks which were learned on separate data. We evaluate their suitability for this new and as yet unexplored scenario. We investigate common fusion methods to combine the hand-crafted and learned features and propose our own deep fusion approach which is already applied during training of the deep network. We evaluate features and fusion methods on our own dataset. The dataset consists of fourteen people moving through a scene recorded by four fixed ground-based cameras and one mobile camera mounted on a small UAV. We discuss strengths and weaknesses of the features in the new scenario and show that our fusion approach successfully leverages the strengths of each feature and outperforms all single features significantly.
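    To make the fusion idea concrete, here is a minimal late-fusion sketch over pre-computed descriptors. The L2 normalization and weighted concatenation are our illustrative assumptions; the record's own deep-fusion approach is instead applied during network training.

    ```python
    import numpy as np

    def fuse(cov_feat: np.ndarray, cnn_feat: np.ndarray, w: float = 0.5) -> np.ndarray:
        """L2-normalize each descriptor, then concatenate with weight w."""
        cov = cov_feat / (np.linalg.norm(cov_feat) + 1e-12)
        cnn = cnn_feat / (np.linalg.norm(cnn_feat) + 1e-12)
        return np.concatenate([w * cov, (1 - w) * cnn])

    def rank_gallery(query: np.ndarray, gallery: list) -> np.ndarray:
        """Rank gallery descriptors by Euclidean distance to the query."""
        dists = [np.linalg.norm(query - g) for g in gallery]
        return np.argsort(dists)
    ```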

  13. Measurement and protocol for evaluating video and still stabilization systems

    Science.gov (United States)

    Cormier, Etienne; Cao, Frédéric; Guichard, Frédéric; Viard, Clément

    2013-01-01

    This article presents a system and a protocol to characterize image stabilization systems for both still images and videos. It uses a six-axis platform, three axes being used for camera rotation and three for camera positioning. The platform is programmable and can reproduce complex motions that have typically been recorded by a gyroscope mounted on different types of cameras in different use cases. The measurement uses a single chart for still images and videos, the dead leaves texture chart. Although the proposed implementation of the protocol uses a motion platform, the measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured in different directions and is weighted by a contrast sensitivity function (simulating the accuracy of the human visual system) to obtain an acutance. The sharpness improvement due to the image stabilization system is a good measurement of performance, as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel accuracy to determine a homographic deformation between the current frame and a reference position. This model describes the apparent global motion well, as translations but also rotations along the optical axis and distortion due to the electronic rolling shutter equipping most CMOS sensors. The protocol is applied to all types of cameras such as DSCs, DSLRs and smartphones.
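    The video measurement above reduces, per frame, to estimating a homography from four tracked markers; with exactly four correspondences OpenCV solves it directly. The marker coordinates below are placeholders.

    ```python
    import cv2
    import numpy as np

    # Reference marker positions on the chart and their detected positions
    # in the current frame (pixels); values are placeholders.
    ref = np.array([[100, 100], [900, 100], [900, 700], [100, 700]], np.float32)
    cur = np.array([[103,  98], [905, 101], [902, 704], [ 99, 706]], np.float32)

    # H encodes the frame's apparent global motion: translation, rotation
    # about the optical axis, and rolling-shutter-induced distortion terms.
    H = cv2.getPerspectiveTransform(ref, cur)
    ```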

  14. Teaching residents pediatric fiberoptic intubation of the trachea: traditional fiberscope with an eyepiece versus a video-assisted technique using a fiberscope with an integrated camera.

    Science.gov (United States)

    Wheeler, Melissa; Roth, Andrew G; Dsida, Richard M; Rae, Bronwyn; Seshadri, Roopa; Sullivan, Christine L; Heffner, Corri L; Coté, Charles J

    2004-10-01

    The authors' hypothesis was that a video-assisted technique should speed resident skill acquisition for flexible fiberoptic oral tracheal intubation (FI) of pediatric patients because the attending anesthesiologist can provide targeted instruction when sharing the view of the airway as the resident attempts intubation. Twenty Clinical Anesthesia year 2 residents, novices in pediatric FI, were randomly assigned to either the traditional group (traditional eyepiece FI) or the video group (video-assisted FI). One of two attending anesthesiologists supervised each resident during FI of 15 healthy children, aged 1-6 yr. The time from mask removal to confirmation of endotracheal tube placement by end-tidal carbon dioxide detection was recorded. Intubation attempts were limited to 3 min; up to three attempts were allowed. The primary outcome measure, time to success or failure, was compared between groups. Failure rate and number of attempts were also compared between groups. Three hundred patient intubations were attempted; eight failed. On average, the residents in the video group were faster, were three times more likely to successfully intubate at any given time during an attempt, and required fewer attempts per patient compared to those in the traditional group. The video system seems to be superior for teaching residents fiberoptic intubation in children.

  15. Stereoscopic video and the quest for virtual reality: an annotated bibliography of selected topics

    Science.gov (United States)

    Starks, Michael R.

    1991-08-01

    It is the aim of most work with graphics, photography and video to create vivid and compelling images. Though telepresence and virtual reality are new terms, there exists a vast body of research in stereoscopic video and other fields with similar objectives. Most of this work has appeared in patent documents or obscure publications and is rarely cited. In order to stimulate research, a number of areas of interest are briefly reviewed and accompanied by extensive bibliographies. These include single-camera and dual-camera stereoscopy, compatible 3D recording and transmission, helmet-mounted displays, field-sequential stereo, and head- and eye-tracking devices.

  16. SEFIS Video Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is a fishery-independent survey that collects data on reef fish in southeast US waters using multiple gears, including chevron traps, video cameras, ROVs,...

  17. A new video studio for CERN

    CERN Multimedia

    Anaïs Vernede

    2011-01-01

    On Monday, 14 February 2011 CERN's new video studio was inaugurated with a recording of "Spotlight on CERN", featuring an interview with the DG, Rolf Heuer.   CERN's new video studio. Almost all international organisations have a studio for their audiovisual communications, and now it's CERN’s turn to acquire such a facility. “In the past, we've made videos using the Globe audiovisual facilities and sometimes using the small photographic studio, which is equipped with simple temporary sets that aren’t really suitable for video,” explains Jacques Fichet, head of CERN‘s audiovisual service. Once the decision had been taken to create the new 100 square-metre video studio, the work took only five months to complete. The studio, located in Building 510, is equipped with a cyclorama (a continuous smooth white wall used as a background) measuring 3 m in height and 16 m in length, as well as a teleprompter, a rail-mounted camera dolly fo...

  18. Coordinated Sensing in Intelligent Camera Networks

    OpenAIRE

    Ding, Chong

    2013-01-01

    The cost and size of video sensors have led to camera networks becoming pervasive in our lives. However, the ability to analyze these images efficiently is very much a function of the quality of the acquired images. Human control of pan-tilt-zoom (PTZ) cameras is impractical and unreliable when high-quality images are needed of multiple events distributed over a large area. This dissertation considers the problem of automatically controlling the fields of view of individual cameras in a camera...

  19. Multitask Imaging Monitor for Surgical Navigation: Combination of Touchless Interface and Head-Mounted Display.

    Science.gov (United States)

    Yoshida, Soichiro; Ito, Masaya; Tatokoro, Manabu; Yokoyama, Minato; Ishioka, Junichiro; Matsuoka, Yoh; Numao, Noboru; Saito, Kazutaka; Fujii, Yasuhisa; Kihara, Kazunori

    2017-01-01

    As a result of the dramatic improvements in the resolution, wearability, and weight of head-mounted displays (HMDs), they have become increasingly applied in the medical field as personal imaging monitors. The combined use of a multiplexer with an HMD allows the wearer to simultaneously and seamlessly monitor multiple streams of imaging information through the HMD. We developed a multitask imaging monitor for surgical navigation by combining a touchless surgical imaging control system with an HMD. This system is composed of a standard color digital video camera mounted on the HMD and computer software that identifies the number of pictured fingertips in the video camera image. The HMD wearer uses this information as a touchless interface for operating the multiplexer, which can control the arrays and types of imaging information displayed on the HMD. We used this system in an experimental demonstration during a single-port gasless partial nephrectomy. The use of this multitask imaging monitor with a touchless interface would refine the surgical workflow, especially during surgical navigation. © 2015 S. Karger AG, Basel.
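    A fingertip-count interface of this kind can be prototyped with classical contour analysis. The rough OpenCV sketch below is our illustration, not the authors' software; the skin-color range and defect-depth threshold are loud assumptions that would need tuning (gloved hands in an operating room would need a different segmentation).

    ```python
    import cv2

    def count_fingertips(frame) -> int:
        """Rough fingertip count via convex-hull defects on the largest
        skin-colored contour; all thresholds are illustrative."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 48, 80), (20, 255, 255))  # crude skin range
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return 0
        hand = max(contours, key=cv2.contourArea)
        hull = cv2.convexHull(hand, returnPoints=False)
        defects = cv2.convexityDefects(hand, hull)
        if defects is None:
            return 0
        # Deep defects correspond to valleys between extended fingers.
        deep = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] > 10000)
        return min(deep + 1, 5)
    ```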

  20. Dynamic Camera Positioning and Reconfiguration for Multi-Camera Networks

    OpenAIRE

    Konda, Krishna Reddy

    2015-01-01

    The large availability of different types of cameras and lenses, together with the reduction in price of video sensors, has contributed to a widespread use of video surveillance systems, which have become a widely adopted tool to enforce security and safety, in detecting and preventing crimes and dangerous events. The possibility for personalization of such systems is generally very high, letting the user customize the sensing infrastructure, and deploying ad-hoc solutions based on the curren...

  1. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  2. Vaginitis test - wet mount

    Science.gov (United States)

    Wet prep - vaginitis; Vaginosis - wet mount; Trichomoniasis - wet mount; Vaginal candida - wet mount ... a rash, painful intercourse, or odor after intercourse. Trichomoniasis, a sexually transmitted disease. Vaginal yeast infection.

  3. An Innovative Streaming Video System With a Point-of-View Head Camera Transmission of Surgeries to Smartphones and Tablets: An Educational Utility.

    Science.gov (United States)

    Chaves, Rafael Oliveira; de Oliveira, Pedro Armando Valente; Rocha, Luciano Chaves; David, Joacy Pedro Franco; Ferreira, Sanmari Costa; Santos, Alex de Assis Santos Dos; Melo, Rômulo Müller Dos Santos; Yasojima, Edson Yuzur; Brito, Marcus Vinicius Henriques

    2017-10-01

    In order to engage medical students and residents from public health centers to use the telemedicine features of surgery on their own smartphones and tablets as an educational tool, an innovative streaming system was developed to stream live footage from open surgeries to smartphones and tablets, allowing visualization of the surgical field from the surgeon's perspective. The current study aims to describe the results of an evaluation at level 1 of Kirkpatrick's Model for Evaluation of the streaming system's use during gynecological surgeries, based on the perceptions of medical students and gynecology residents. The system consisted of live video streaming (from the surgeon's point of view) of gynecological surgeries to smartphones and tablets, one for each volunteer. The volunteers were able to connect to the local wireless network, created by the streaming system, through an access password and watch the video transmission in a web browser on their smartphones. They then answered a Likert-type questionnaire containing 14 items about the educational applicability of the streaming system, including a comparison with watching a procedure in loco. This study was formally approved by the local ethics commission (Certificate No. 53175915.7.0000.5171/2016). Twenty-one volunteers participated, yielding 294 answered items, of which 94.2% agreed with the items' affirmatives, 4.1% were neutral, and only 1.7% of answers corresponded to negative impressions. Cronbach's α was .82, which represents a good level of reliability. Spearman's coefficients were highly significant in 4 comparisons and moderately significant in the other 20 comparisons. This study presents a system for streaming live surgeries to smartphones and tablets and shows its educational utility, low cost, and simple usage; it offers convenience and satisfactory image resolution, and is thus potentially applicable in surgical teaching.
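    The reliability figure quoted above comes from the standard Cronbach's alpha formula, α = k/(k − 1) · (1 − Σσ²ᵢ/σ²ₜ). A minimal computation over the study's 21 × 14 response matrix might look like this; the matrix shape comes from the record, but the data values themselves are not available, so random placeholders are used.

    ```python
    import numpy as np

    def cronbach_alpha(responses: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, k_items) Likert matrix."""
        k = responses.shape[1]
        item_var_sum = responses.var(axis=0, ddof=1).sum()
        total_var = responses.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var_sum / total_var)

    # Example shaped like the study (21 respondents x 14 items):
    rng = np.random.default_rng(0)
    print(cronbach_alpha(rng.integers(1, 6, size=(21, 14)).astype(float)))
    ```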

  4. POLICE BODY CAMERAS: SEEING MAY BE BELIEVING

    Directory of Open Access Journals (Sweden)

    Noel Otu

    2016-11-01

    While the concept of body-mounted cameras (BMC) worn by police officers is a controversial issue, it is not new. Since the early 2000s, police departments across the United States, England, Brazil, and Australia have been implementing wearable cameras. Like all devices used in policing, body-mounted cameras can create a sense of increased power, but also additional responsibilities for both the agencies and individual officers. This paper examines the public debate regarding body-mounted cameras. The conclusions drawn show that while these devices can provide information about incidents relating to police–citizen encounters and can deter citizen and police misbehavior, they can also violate a citizen’s privacy rights. This paper outlines several ramifications for practice as well as implications for policy.

  5. System of video observation for electron beam welding process

    Science.gov (United States)

    Laptenok, V. D.; Seregin, Y. N.; Bocharov, A. N.; Murygin, A. V.; Tynchenko, V. S.

    2016-04-01

    A video observation system for the electron beam welding process was developed. The construction of the system reduces negative effects on the video camera during electron beam welding and yields high-quality images of the process.

  6. SXI prototype mirror mount

    Science.gov (United States)

    1995-04-01

    The purpose of this contract was to provide optomechanical engineering and fabrication support to the Solar X-ray Imager (SXI) program in the areas of the mirror, optical bench and camera assemblies of the telescope. The Center for Applied Optics (CAO) worked closely with the Optics and S&E technical staff of MSFC to develop and investigate the most viable and economical options for the design and fabrication of a number of parts for the various telescope assemblies. All the tasks under this delivery order have been successfully completed within budget and schedule. A number of development hardware parts have been designed and fabricated jointly by MSFC and UAH for the engineering model of SXI. The major parts include a nickel electroformed mirror and a mirror mount, plating and coating of the ceramic spacers, and gold plating of the contact rings and fingers for the camera assembly. An aluminum model of the high accuracy sun sensor (HASS) was also designed and fabricated. A number of fiber optic tapers for the camera assembly were also coated with indium tin oxide and phosphor for testing and evaluation by MSFC. A large number of the SXI optical bench parts were also redesigned and simplified for a prototype telescope. These parts include the forward and rear support flanges, front aperture plate, the graphite epoxy optical bench and a test fixture for the prototype telescope. More than fifty (50) drawings were generated for various components of the prototype telescope. Some of these parts were subsequently fabricated at the UAH machine shop, at MSFC, or by outside contractors. UAH also provided technical support to MSFC staff for a number of preliminary and critical design reviews. These design reviews included the PDR and CDR for the mirror assembly by United Technologies Optical Systems (UTOS), the program quarterly reviews, and the SXI PDR and CDR. UAH staff also regularly attended the monthly status reviews, and made a significant number of suggestions to improve

  8. High Speed Digital Camera Technology Review

    Science.gov (United States)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  9. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  10. Cameras Monitor Spacecraft Integrity to Prevent Failures

    Science.gov (United States)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained from working with NASA into an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  11. Design, development, and performance of the STEREO SECCHI CCD cameras

    Science.gov (United States)

    Waltham, Nick; Eyles, Chris

    2007-09-01

    We report the design, development and performance of the SECCHI (Sun Earth Connection Coronal and Heliospheric Investigation) CCD camera electronics on NASA's Solar Terrestrial Relations Observatory (STEREO). STEREO consists of two nearly identical space-based observatories; one ahead of Earth in its orbit, the other trailing behind to provide the first-ever stereoscopic (3D) measurements to study the Sun and the nature of its coronal mass ejections. The SECCHI instrument suite consists of five telescopes that will observe the solar corona, and inner heliosphere all the way from the surface of the Sun to the orbit of the Earth, and beyond. Each telescope contains a large-format science-grade CCD; two within the Heliospheric Imager (HI) instrument, and three in a separate instrument package (SCIP) consisting of two coronagraphs and an EUV imager. The CCDs are operated from two Camera Electronics Boxes. Constraints on the size, mass, and power available for the camera electronics required the development of a miniaturised solution employing digital and mixed-signal ASICs, FPGAs, and compact surface-mount construction. Operating more than one CCD from a single box also provides economy on the number of DC-DC converters and interface electronics required. We describe the requirements for the overall design and implementation, and in particular the design and performance of the camera's space-saving mixed-signal CCD video processing ASIC. The performance of the camera is reviewed together with sample images obtained since the STEREO mission was successfully launched on October 25, 2006 from Cape Canaveral.

  12. The Mars Science Laboratory Engineering Cameras

    Science.gov (United States)

    Maki, J.; Thiessen, D.; Pourangi, A.; Kobzeff, P.; Litwin, T.; Scherr, L.; Elliott, S.; Dingizian, A.; Maimone, M.

    2012-09-01

    NASA's Mars Science Laboratory (MSL) Rover is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover cameras described in Maki et al. (J. Geophys. Res. 108(E12): 8071, 2003). Images returned from the engineering cameras will be used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The Navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The Hazard Avoidance Cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a 1024×1024 pixel detector and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer "A" and the other set is connected to rover computer "B". The Navcams and Front Hazcams each provide similar views from either computer. The Rear Hazcams provide different views from the two computers due to the different mounting locations of the "A" and "B" Rear Hazcams. This paper provides a brief description of the engineering camera properties, the locations of the cameras on the vehicle, and camera usage for surface operations.

  13. 2011 Tohoku tsunami video and TLS based measurements: hydrographs, currents, inundation flow velocities, and ship tracks

    Science.gov (United States)

    Fritz, H. M.; Phillips, D. A.; Okayasu, A.; Shimozono, T.; Liu, H.; Takeda, S.; Mohammed, F.; Skanavis, V.; Synolakis, C. E.; Takahashi, T.

    2012-12-01

    The March 11, 2011, magnitude Mw 9.0 earthquake off the coast of the Tohoku region caused catastrophic damage and loss of life in Japan. The mid-afternoon tsunami arrival, combined with survivors equipped with cameras on top of vertical evacuation buildings, provided spontaneous, spatially and temporally resolved inundation recordings. This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Miyako, Kamaishi, Kesennuma and Yoriisohama along Japan's Sanriku coast and the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were explored, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance in April, 2011. A follow-up survey in June, 2011 focused on terrestrial laser scanning (TLS) at locations with high quality eyewitness videos. We acquired precise topographic data using TLS at the video sites, producing a 3-dimensional "point cloud" dataset. A camera mounted on the Riegl VZ-400 scanner yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing. The original video recordings were recovered from eyewitnesses and the Japanese Coast Guard (JCG). The analysis of the tsunami videos follows an adapted four-step procedure originally developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh, Indonesia (Fritz et al., 2006). The first step requires the calibration of the sector of view present in the eyewitness video recording based on ground control points measured in the LiDAR data. In a second step the video image motion induced by the panning of the video camera was determined from subsequent images by particle image velocimetry (PIV) applied to fixed objects. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates with a direct linear transformation (DLT) procedure. Finally, the instantaneous tsunami
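    Step three of the video analysis — the direct linear transformation from image to world coordinates — reduces to a planar homography when the mapped surface is treated as locally flat. The sketch below makes that simplifying assumption, and the control-point values are placeholders, not survey data.

    ```python
    import cv2
    import numpy as np

    # Ground control points: pixel positions and surveyed world positions (m).
    img_pts   = np.array([[412, 655], [1260, 610], [1500, 880], [300, 905]], np.float32)
    world_pts = np.array([[  0,   0], [  85,   4], [  95,  42], [ -8,  38]], np.float32)

    H = cv2.getPerspectiveTransform(img_pts, world_pts)

    def to_world(u: float, v: float) -> np.ndarray:
        """Map an image point to planar world coordinates."""
        p = H @ np.array([u, v, 1.0])
        return p[:2] / p[2]
    ```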

  14. Feasibility of Radon projection acquisition for compressive imaging in MMW region based new video rate 16×16 GDD FPA camera

    Science.gov (United States)

    Levanon, Assaf; Konstantinovsky, Michael; Kopeika, Natan S.; Yitzhaky, Yitzhak; Stern, A.; Turak, Svetlana; Abramovich, Amir

    2015-05-01

    In this article we present preliminary results for the combination of two fields that have attracted much interest in recent years: 1) compressed imaging (CI), a joint sensing and compression process that attempts to exploit the large redundancy in typical images in order to capture fewer samples than usual; and 2) millimeter wave (MMW) imaging. MMW-based imaging systems are required for a large variety of applications in many growing fields such as medical treatments, homeland security, concealed weapon detection, and space technology. Moreover, the possibility of reliable imaging in low-visibility conditions such as heavy cloud, smoke, fog and sandstorms in the MMW region generates high interest from military groups preparing for future combat. The lack of inexpensive room-temperature imaging sensors makes it difficult to provide a suitable MMW system for many of the above applications. A system based on Glow Discharge Detector (GDD) Focal Plane Arrays (FPA) can be very efficient for real-time imaging with significant results. The GDD is located in free space and can detect MMW radiation almost isotropically. In this article, we present a new approach to reconstructing MMW images by rotational scanning of the target. The collection process, based on Radon projections, allows implementation of compressive sensing principles in the MMW region. Proof of concept was obtained in the form of Radon line-imaging results. MMW imaging results with our recent sensor are also presented for the first time. The multiplexing frame rate of the 16×16 GDD FPA permits real-time video-rate imaging at 30 frames per second and comprehensive 3D MMW imaging. It uses commercial GDD lamps with 3 mm diameter (Ne indicator lamps) as pixel detectors. The combination of these two fields should bring significant improvements to MMW imaging research and open new possibilities for compressive sensing techniques.

  15. INT prime focus mosaic camera

    Science.gov (United States)

    Ives, Derek J.; Tulloch, Simon; Churchill, John

    1996-03-01

    The INT Prime Focus Mosaic Camera (INT PFC) is designed to provide a large-field survey and supernova-search capability for the prime focus of the 2.5 m Isaac Newton Telescope (INT). It is a joint collaboration between the Royal Greenwich Observatory (UK), Kapteyn Sterrenwacht Werkgroep (Netherlands), and the Lawrence Berkeley Laboratories (USA). The INT PFC consists of a 4-chip mosaic utilizing thinned and anti-reflection-coated CCDs. These are LORAL devices of the LICK3 design. They will be operated cryogenically in a purpose-built camera assembly. A fifth CCD, of the same type, is co-mounted with the science array in the cryostat to provide autoguider functions. This cryostat then mounts to the main camera assembly at the prime focus. This assembly will include standard filters and a novel shutter wheel which has been specifically designed for this application. The camera will have an unvignetted field of 40 arcminutes and a focal ratio of f/3.3. This results in a very tight mechanical specification for the co-planarity and flatness of the array of CCDs and also quite stringent flexure tolerances for the camera assembly. A method of characterizing the co-planarity and flatness of the array will be described. The overall system architecture will also be described. One of the main requirements is to read out the whole array within 100 s, with less than 10 e- rms noise and very low CCD crosstalk.

  16. Infrared Camera

    Science.gov (United States)

    1997-01-01

    A sensitive infrared camera that observes the blazing plumes from Space Shuttle or expendable rocket lift-offs is capable of scanning for fires, monitoring the environment and providing medical imaging. The hand-held camera uses highly sensitive arrays of infrared photodetectors known as quantum-well infrared photodetectors (QWIPs). QWIPs were developed by the Jet Propulsion Laboratory's Center for Space Microelectronics Technology in partnership with Amber, a Raytheon company. In October 1996, QWIP detectors pointed out hot spots of the destructive fires speeding through Malibu, California. Night vision, early warning systems, navigation, flight control systems, weather monitoring, security and surveillance are among the duties for which the camera is suited. Medical applications are also expected.

  17. Reduction in Fall Rate in Dementia Managed Care Through Video Incident Review: Pilot Study.

    Science.gov (United States)

    Bayen, Eleonore; Jacquemot, Julien; Netscher, George; Agrawal, Pulkit; Tabb Noyce, Lynn; Bayen, Alexandre

    2017-10-17

    Falls of individuals with dementia are frequent, dangerous, and costly. Early detection and access to the history of a fall are crucial for efficient care and secondary prevention in cognitively impaired individuals. However, most falls remain unwitnessed events. Furthermore, understanding why and how a fall occurred is a challenge. Video capture and secure transmission of real-world falls thus stands as a promising assistive tool. The objective of this study was to analyze how continuous video monitoring and review of falls of individuals with dementia can support better quality of care. A pilot observational study (July-September 2016) was carried out in a Californian memory care facility. Falls were video-captured around the clock (24×7) by 43 wall-mounted cameras deployed in all common areas and in 10 of the 40 private bedrooms of consenting residents and families. Video review was provided to facility staff through a customized mobile device app. The outcome measures were the count of residents' falls happening in the video-covered areas, the acceptability of video recording, the analysis of video review, and video replay possibilities for care practice. Over 3 months, 16 falls were video-captured. A drop in fall rate was observed in the last month of the study. Acceptability was good. Video review enabled screening for the severity of falls and fall-related injuries. Video replay enabled identifying cognitive-behavioral deficiencies and environmental circumstances contributing to the fall. This allowed for secondary prevention in high-risk multi-faller individuals and for updated facility care policies regarding a safer living environment for all residents. Video monitoring offers high potential to support conventional care in memory care facilities.

  18. CCD Camera

    Science.gov (United States)

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown, wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  19. Visual analysis of trash bin processing on garbage trucks in low resolution video

    Science.gov (United States)

    Sidla, Oliver; Loibner, Gernot

    2015-03-01

    We present a system for trash can detection and counting from a camera mounted on a garbage collection truck. A working prototype has been successfully implemented and tested with several hours of real-world video. The detection pipeline consists of HOG detectors for two trash can sizes, plus mean-shift tracking and low-level image processing for the analysis of the garbage disposal process. Considering the harsh environment and unfavorable imaging conditions, the process already works well enough that very useful measurements can be extracted from the video data. The false-positive/false-negative rate of the full processing pipeline is about 5-6% in fully automatic operation. Video data of a full day (about 8 hrs) can be processed in about 30 minutes on a standard PC.
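    A detection stage of the kind described — one HOG/linear-SVM detector per trash-can size — might be sketched with OpenCV as follows. The window geometry, model file, and frame source are all illustrative assumptions; the authors' trained detectors are not available.

    ```python
    import cv2
    import numpy as np

    # One HOG descriptor per size class; the 64x128 window is illustrative.
    hog = cv2.HOGDescriptor(_winSize=(64, 128), _blockSize=(16, 16),
                            _blockStride=(8, 8), _cellSize=(8, 8), _nbins=9)

    # A linear-SVM weight vector trained offline on can/background windows;
    # setSVMDetector expects the primal weights plus bias (hypothetical file).
    hog.setSVMDetector(np.load("trashcan_svm_64x128.npy"))

    frame = cv2.imread("truck_cam_frame.png")  # hypothetical frame grab
    boxes, scores = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    ```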

  20. A study on the sensitivity of photogrammetric camera calibration and stitching

    CSIR Research Space (South Africa)

    De

    2014-11-01

    This paper presents a detailed simulation study of an automated robotic photogrammetric camera calibration system. The system performance was tested for sensitivity with regard to noise in the robot movement, camera mounting and image processing...

  1. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.
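
    The integrity, authenticity and timestamping guarantees described above are anchored in a hardware TPM in the paper. Purely as a software illustration of the idea of per-frame tagging, a minimal sketch follows; the key, field names and format are assumptions.

      import hashlib
      import hmac
      import json
      import time

      # Software illustration only: the paper roots these guarantees in a
      # hardware TPM, while this sketch merely shows per-frame tagging.
      SECRET_KEY = b"device-provisioned-key"  # assumed; a TPM would hold this

      def sign_frame(frame_bytes: bytes, frame_index: int) -> str:
          """Return a JSON tag binding frame content, index and wall time."""
          ts = time.time()
          mac = hmac.new(SECRET_KEY,
                         frame_bytes + f"|{frame_index}|{ts}".encode(),
                         hashlib.sha256).hexdigest()
          return json.dumps({"index": frame_index, "timestamp": ts,
                             "mac": mac})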

  2. Deep-Sky Video Astronomy

    CERN Document Server

    Massey, Steve

    2009-01-01

    A guide to using modern integrating video cameras for deep-sky viewing and imaging with the kinds of modest telescopes available commercially to amateur astronomers. It includes an introduction and a brief history of the technology and camera types. It examines the pros and cons of this unrefrigerated yet highly efficient technology

  3. Online camera-gyroscope autocalibration for cell phones.

    Science.gov (United States)

    Jia, Chao; Evans, Brian L

    2014-12-01

    The gyroscope is playing a key role in helping estimate 3D camera rotation for various vision applications on cell phones, including video stabilization and feature tracking. Successful fusion of gyroscope and camera data requires that the camera, the gyroscope, and their relative pose be calibrated. In addition, the timestamps of gyroscope readings and video frames are usually not well synchronized. Previous work performed camera-gyroscope calibration and synchronization offline, after the entire video sequence had been captured and with restrictions on the camera motion, which is unnecessarily restrictive for everyday users running apps that directly use the gyroscope. In this paper, we propose an online method that estimates all the necessary parameters while a user is capturing video. Our contributions are: 1) simultaneous online camera self-calibration and camera-gyroscope calibration based on an implicit extended Kalman filter and 2) generalization of the multiple-view coplanarity constraint on camera rotation in a rolling shutter camera model for cell phones. The proposed method is able to estimate the needed calibration and synchronization parameters online with all kinds of camera motion and can be embedded in gyro-aided applications, such as video stabilization and feature tracking. Both Monte Carlo simulation and cell phone experiments show that the proposed online calibration and synchronization method converges quickly to the ground-truth values.
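
    The paper estimates synchronization online with an implicit extended Kalman filter. As a much simpler offline stand-in for the timestamp-offset part only, one can cross-correlate the gyroscope rotation-rate magnitude with a camera-derived rotation rate; a minimal numpy sketch, assuming both series have already been resampled to a common rate:

      import numpy as np

      # Simplified offline stand-in for synchronization (the paper itself
      # estimates this online with an implicit extended Kalman filter).
      def estimate_time_offset(gyro_rate, cam_rate, dt):
          """Both inputs: rotation-rate magnitudes resampled every dt s."""
          g = gyro_rate - gyro_rate.mean()
          c = cam_rate - cam_rate.mean()
          corr = np.correlate(g, c, mode="full")
          lag = corr.argmax() - (len(c) - 1)
          return lag * dt  # positive lag: gyro stream trails the camera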

  4. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  5. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest in 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  6. Digital Video Teach Yourself VISUALLY

    CERN Document Server

    Watson, Lonzell

    2010-01-01

    Tips and techniques for shooting and sharing superb digital videos. Never before has video been more popular-or more accessible to the home photographer. Now you can create YouTube-worthy, professional-looking video, with the help of this richly illustrated guide. In a straightforward, simple, highly visual format, Teach Yourself VISUALLY Digital Video demystifies the secrets of great video. With colorful screenshots and illustrations plus step-by-step instructions, the book explains the features of your camera and their capabilities, and shows you how to go beyond "auto" to manually

  7. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  8. Infrared vision techniques in quality control of surface-mount circuit board solder paste printing

    Science.gov (United States)

    Alander, Jarmo T.; Huusko, Mikko; Karonen, Aimo; Kuusrainen, Jari; Unonius, Lars

    1995-01-01

    In this work we have applied infrared camera techniques in a prototype quality control system for surface-mount circuit board solder paste printing. The prototype system consists of a stepper-motor-controlled conveyor for board transportation and indexing, an infrared camera for recording paste pad temperature profiles, a CCD camera for board and pad registration and recording, a pulse heating set-up, a video frame grabber and signal processor unit for preliminary image processing, and a PC for operator control, high-level autonomous control, processing of the preprocessed infrared and visual image data, and communications with the other shop-floor information and quality control systems. The operator interface is built on top of Windows 3.1, which makes it easy to operate and to connect to other programs at will. The prototype system was capable of processing the locations and areas of over 100 solder paste pads per second and of evaluating pad volumes within an error tolerance of approximately 20%. The most severe obstacle to applying IR techniques in SMT product lines seems to be the current high cost of suitable IR scanning devices. With only slight modifications, the developed infrared quality control and testing system prototype can also be used in other electronics assembly line applications, such as solder checking and functional checking of boards by monitoring the thermal properties of solders and components, respectively.

  9. Efficient stereo image geometrical reconstruction at arbitrary camera settings from a single calibration.

    Science.gov (United States)

    Ji, Songbai; Fan, Xiaoyao; Roberts, David W; Paulsen, Keith D

    2014-01-01

    Camera calibration is central to obtaining a quantitative image-to-physical-space mapping from stereo images acquired in the operating room (OR). A practical challenge for cameras mounted to the operating microscope is maintenance of image calibration as the surgeon's field-of-view is repeatedly changed (in terms of zoom and focal settings) throughout a procedure. Here, we present an efficient method for sustaining a quantitative image-to-physical space relationship for arbitrary image acquisition settings (S) without the need for camera re-calibration. Essentially, we warp images acquired at S into the equivalent data acquired at a reference setting, S(0), using deformation fields obtained with optical flow by successively imaging a simple phantom. Closed-form expressions for the distortions were derived from which 3D surface reconstruction was performed based on the single calibration at S(0). The accuracy of the reconstructed surface was 1.05 mm and 0.59 mm along and perpendicular to the optical axis of the operating microscope on average, respectively, for six phantom image pairs, and was 1.26 mm and 0.71 mm for images acquired with a total of 47 arbitrary settings during three clinical cases. The technique is presented in the context of stereovision; however, it may also be applicable to other types of video image acquisitions (e.g., endoscope) because it does not rely on any a priori knowledge about the camera system itself, suggesting the method is likely of considerable significance.
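
    The core warping step, mapping an image acquired at setting S back to the reference setting S(0) through a dense deformation field, can be sketched with OpenCV's remap. This is only an illustration of the idea; the flow field is assumed to have been measured beforehand on the phantom.

      import cv2
      import numpy as np

      # Sketch of the warping idea: 'flow' is a dense (H, W, 2) deformation
      # field, assumed to have been obtained from optical flow on a phantom.
      def warp_to_reference(img, flow):
          h, w = flow.shape[:2]
          grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
          map_x = (grid_x + flow[..., 0]).astype(np.float32)
          map_y = (grid_y + flow[..., 1]).astype(np.float32)
          return cv2.remap(img, map_x, map_y,
                           interpolation=cv2.INTER_LINEAR)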

  10. Automated safety control by video cameras

    NARCIS (Netherlands)

    Lefter, I.; Rothkrantz, L.; Somhorst, M.

    2012-01-01

    At this moment many surveillance systems are installed in public domains to control the safety of people and properties. They are constantly watched by human operators who are easily overloaded. To support the human operators, a surveillance system model is designed that detects suspicious behaviour

  11. Stationary Stereo-Video Camera Stations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Accurate and precise stock assessments are predicated on accurate and precise estimates of life history parameters, abundance, and catch across the range of the...

  12. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
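
    As a toy illustration of the approach (not the authors' architecture), a small convolutional network can take a stack of aligned neighboring frames as input channels and regress the sharp center frame; a PyTorch sketch with assumed layer sizes:

      import torch
      import torch.nn as nn

      # Toy network only; the paper's CNN is substantially deeper and is
      # trained on synthetic blur generated from real high-framerate video.
      class TinyDeblurNet(nn.Module):
          def __init__(self, n_frames=5):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(3 * n_frames, 64, 5, padding=2), nn.ReLU(),
                  nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 3, 3, padding=1),
              )

          def forward(self, stacked):   # stacked: (B, 3 * n_frames, H, W)
              return self.net(stacked)  # estimate of the sharp center frame

      # Training step sketch: loss = mse(net(aligned_stack), sharp_center)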

  13. TRAFFIC SIGN RECOGNITION WITH VIDEO PROCESSING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Musa AYDIN

    2013-01-01

    Full Text Available In this study, we aim to recognize and identify traffic signs in video images taken with a video camera. To accomplish this aim, a traffic sign recognition program has been developed in the MATLAB/Simulink environment. The target traffic signs are recognized in the video image with the developed program.

  14. Development of a camera casing suited for cryogenic and vacuum applications

    Science.gov (United States)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID temperature controlled and vacuum tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components are discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera can provide a live view inside cryogenic set-ups and allows video to be recorded.

  15. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images at up to 12 Mp and video at up to 8 Mp resolution.
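
    A minimal checkerboard self-calibration in the spirit of this record can be sketched with OpenCV (the same library the authors build on). The pattern size and frame directory below are assumptions, and the authors' own software additionally copes with the GoPro's strong wide-angle distortion.

      import glob
      import cv2
      import numpy as np

      pattern = (9, 6)  # inner checkerboard corners (assumed)
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

      obj_pts, img_pts, size = [], [], None
      for path in glob.glob("calib_frames/*.jpg"):  # stills or video frames
          gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if found:
              obj_pts.append(objp)
              img_pts.append(corners)
              size = gray.shape[::-1]

      # Intrinsic matrix K and distortion coefficients, then undistortion
      # of a scene (assumes at least one checkerboard view was found).
      rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts,
                                                       size, None, None)
      undistorted = cv2.undistort(cv2.imread(path), K, dist)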

  16. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images at up to 12 Mp and video at up to 8 Mp resolution.

  17. NFC - Narrow Field Camera

    Science.gov (United States)

    Koukal, J.; Srba, J.; Gorková, S.

    2015-01-01

    We have been introducing a low-cost CCTV video system for faint meteor monitoring, and here we describe the first results from 5 months of two-station operations. Our system, called NFC (Narrow Field Camera), with a meteor limiting magnitude around +6.5 mag, allows research on trajectories of less massive meteoroids within individual parent meteor showers and the sporadic background. At present 4 stations (2 pairs with coordinated fields of view) of the NFC system are operated in the frame of CEMeNt (Central European Meteor Network). The heart of each NFC station is a sensitive CCTV camera Watec 902 H2 and a fast cinematographic lens Meopta Meostigmat 1/50 - 52.5 mm (50 mm focal length and fixed aperture f/1.0). In this paper we present the first results based on 1595 individual meteors, 368 of which were recorded from two stations simultaneously. This data set allows the first empirical verification of theoretical assumptions about NFC system capabilities (stellar and meteor magnitude limit, meteor apparent brightness distribution and accuracy of single-station measurements) and the first low-mass meteoroid trajectory calculations. Our experimental data clearly showed the capability of the proposed system to register low-mass meteors and to support calculations that lead to a significant refinement of the orbital elements of low-mass meteoroids.

  18. A combined stereo-photogrammetry and underwater-video system to study group composition of dolphins

    Science.gov (United States)

    Bräger, S.; Chong, A.; Dawson, S.; Slooten, E.; Würsig, B.

    1999-11-01

    One reason for the paucity of knowledge of dolphin social structure is the difficulty of measuring individual dolphins. In Hector's dolphins, Cephalorhynchus hectori, total body length is a function of age, and sex can be determined by individual colouration pattern. We developed a novel system combining stereo-photogrammetry and underwater video to record dolphin group composition. The system consists of two downward-looking single-lens-reflex (SLR) cameras and a Hi8 video camera in an underwater housing mounted on a small boat. Bow-riding Hector's dolphins were photographed and video-taped at close range in coastal waters around the South Island of New Zealand. Three-dimensional, stereoscopic measurements of the distance between the blowhole and the anterior margin of the dorsal fin (BH-DF) were calibrated by a suspended frame with reference points. Growth functions derived from measurements of 53 dead Hector's dolphins (29 female : 24 male) provided the necessary reference data. For the analysis, the measurements were synchronised with corresponding underwater video of the genital area. A total of 27 successful measurements (8 with corresponding sex) were obtained, showing that this new system is potentially useful for cetacean studies.

  19. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. The tasks of tracking humans over camera networks are not only inherently challenging due to changing human appearance, but also have enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for human tracking over camera networks are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress made toward human tracking techniques over camera networks.

  20. Ground Validation Drop Camera Transect Points - St. Thomas/ St. John USVI - 2011

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video were collected between...

  1. Ground Validation Drop Camera Transect Points - St. Thomas/ St. John USVI - 2011 (NCEI Accession 0131858)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video were collected between...

  2. EgoSampling: Wide View Hyperlapse from Egocentric Videos

    OpenAIRE

    Halperin, Tavi; Poleg, Yair; Arora, Chetan; Peleg, Shmuel

    2016-01-01

    The possibility of sharing one's point of view makes use of wearable cameras compelling. These videos are often long, boring and coupled with extreme shake, as the camera is worn on a moving person. Fast forwarding (i.e. frame sampling) is a natural choice for quick video browsing. However, this accentuates the shake caused by natural head motion in an egocentric video, making the fast forwarded video useless. We propose EgoSampling, an adaptive frame sampling that gives stable, fast forwarde...

  3. Advanced real-time manipulation of video streams

    CERN Document Server

    Herling, Jan

    2014-01-01

    Diminished Reality is a new fascinating technology that removes real-world content from live video streams. This sensational live video manipulation actually removes real objects and generates a coherent video stream in real-time. Viewers cannot detect modified content. Existing approaches are restricted to moving objects and static or almost static cameras and do not allow real-time manipulation of video content. Jan Herling presents a new and innovative approach for real-time object removal with arbitrary camera movements.

  4. Feature Quantization and Pooling for Videos

    Science.gov (United States)

    2014-05-01

    Video has become a very popular medium for communication, entertainment, and science. Videos are widely used in educational... The same approach applied to action classification from YouTube videos of sport events shows that BoW approaches on real-world data sets need further... dog videos, where the camera also tracks the people and animals. In Figure 4.38 we compare across action classes how well each segmentation

  5. Digital Low Frequency Radio Camera

    Science.gov (United States)

    Fullekrug, M.; Mezentsev, A.; Soula, S.; van der Velde, O.; Poupeney, J.; Sudre, C.; Gaffet, S.; Pincon, J.

    2012-04-01

    This contribution reports the design, realization and operation of a novel digital low frequency radio camera towards an exploration of the Earth's electromagnetic environment with particular emphasis on lightning discharges and subsequent atmospheric effects such as transient luminous events. The design of the digital low frequency radio camera is based on the idea of radio interferometry with a network of radio receivers which are separated by spatial baselines comparable to the wavelength of the observed radio waves, i.e., ~1-100 km, which corresponds to a frequency range from ~3-300 kHz. The key parameter towards the realization of the radio interferometer is the frequency dependent slowness of the radio waves within the Earth's atmosphere with respect to the speed of light in vacuum. This slowness is measured with the radio interferometer by using well documented radio transmitters. The digital low frequency radio camera can be operated in different modes. In the imaging mode, still photographs show maps of the low frequency radio sky. In the video mode, movies show the dynamics of the low frequency radio sky. The exposure time of the photographs, the frame rate of the video, and the radio frequency of interest can be adjusted by the observer. Alternatively, the digital radio camera can be used in the monitoring mode, where a particular area of the sky is observed continuously. The first application of the digital low frequency radio camera is to characterize the electromagnetic energy emanating from sprite-producing lightning discharges, but it is expected that it can also be used to identify and investigate numerous other radio sources of the Earth's electromagnetic environment.

  6. GRAVITY acquisition camera: characterization results

    Science.gov (United States)

    Anugu, Narsireddy; Garcia, Paulo; Amorim, Antonio; Wiezorrek, Erich; Wieprecht, Ekkehard; Eisenhauer, Frank; Ott, Thomas; Pfuhl, Oliver; Gordo, Paulo; Perrin, Guy; Brandner, Wolfgang; Straubmeier, Christian; Perraut, Karine

    2016-08-01

    The GRAVITY acquisition camera implements four optical functions to track multiple beams of the Very Large Telescope Interferometer (VLTI): a) pupil tracker: a 2×2 lenslet images four pupil reference lasers mounted on the spiders of the telescope's secondary mirror; b) field tracker: images the science object; c) pupil imager: reimages the telescope pupil; d) aberration tracker: images a Shack-Hartmann. The estimation of beam stabilization parameters from the acquisition camera detector image is carried out, every 0.7 s, with dedicated data reduction software. The measured parameters are used in: a) alignment of GRAVITY with the VLTI; b) active pupil and field stabilization; c) defocus correction and engineering purposes. The instrument is now successfully operational on-sky in closed loop. The relevant data reduction and on-sky characterization results are reported.

  7. CALIBRATION PROCEDURES IN MID FORMAT CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    F. Pivnicka

    2012-07-01

    Full Text Available A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow as well, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific values of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is having a proper camera calibration. Using aerial images over a well-designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in the Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured to mm accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need to use rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used. In that case, a gyro-based stabilized platform is recommended. This means that the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problem is that the IMU-to-GPS-antenna lever arm is floating. In fact, we have to deal with an additional data stream: the values of the movement of the stabilizer, needed to correct the floating lever arm distances. If the post-processing of the GPS-IMU data, taking the floating levers into account, delivers the expected result, the lever arms between IMU and

  8. What Counts as Educational Video?: Working toward Best Practice Alignment between Video Production Approaches and Outcomes

    Science.gov (United States)

    Winslett, Greg

    2014-01-01

    The twenty years since the first digital video camera was made commercially available has seen significant increases in the use of low-cost, amateur video productions for teaching and learning. In the same period, production and consumption of professionally produced video has also increased, as has the distribution platforms to access it.…

  9. Reliability of video-based identification of footstrike pattern and video time frame at initial contact in recreational runners

    DEFF Research Database (Denmark)

    Damsted, Camma; Larsen, L H; Nielsen, R.O.

    2015-01-01

    and video time frame at initial contact during treadmill running using two-dimensional (2D) video recordings. METHODS: Thirty-one recreational runners were recorded twice, 1 week apart, with a high-speed video camera. Two blinded raters evaluated each video twice with an interval of at least 14 days...

  10. 4K Video-Laryngoscopy and Video-Stroboscopy: Preliminary Findings.

    Science.gov (United States)

    Woo, Peak

    2016-01-01

    4K video is a new format. At 3840 × 2160 resolution, it has 4 times the resolution of standard 1080 high definition (HD) video. Magnification can be done without loss of resolution. This study uses 4K video for video-stroboscopy. Forty-six patients were examined by conventional video-stroboscopy (digital 3-chip CCD) and compared with 4K video-stroboscopy. The video was recorded on a Blackmagic 4K cinema production camera in CinemaDNG RAW format. The video was played back on a 4K monitor and compared to standard video. Pathological conditions included polyps, scar, cysts, cancer, sulcus, and nodules. Successful 4K video recordings were achieved in all subjects using a 70° rigid endoscope. The camera system is bulky. The examination is performed similarly to standard video-stroboscopy. Playback requires a 4K monitor. As expected, the images were far clearer in detail than standard video. Stroboscopy video using the 4K camera was consistently able to show more detail. Two patients had their diagnoses changed after 4K viewing. 4K video is an exciting new technology that can be applied to laryngoscopy. It allows for cinematic 4K-quality recordings. Both continuous and stroboscopic light can be used for visualization. Its clinical use is feasible, but its usefulness must be proven. © The Author(s) 2015.

  11. Incremental activity modeling in multiple disjoint cameras.

    Science.gov (United States)

    Loy, Chen Change; Xiang, Tao; Gong, Shaogang

    2012-09-01

    Activity modeling and unusual event detection in a network of cameras is challenging, particularly when the camera views are not overlapped. We show that it is possible to detect unusual events in multiple disjoint cameras as context-incoherent patterns through incremental learning of time delayed dependencies between distributed local activities observed within and across camera views. Specifically, we model multicamera activities using a Time Delayed Probabilistic Graphical Model (TD-PGM) with different nodes representing activities in different decomposed regions from different views and the directed links between nodes encoding their time delayed dependencies. To deal with visual context changes, we formulate a novel incremental learning method for modeling time delayed dependencies that change over time. We validate the effectiveness of the proposed approach using a synthetic data set and videos captured from a camera network installed at a busy underground station.

  12. Low Cost Wireless Network Camera Sensors for Traffic Monitoring

    Science.gov (United States)

    2012-07-01

    Many freeways and arterials in major cities in Texas are presently equipped with video detection cameras to collect data and help in traffic/incident management. In this study, carefully controlled experiments determined the throughput and output...

  13. Optimising Camera Traps for Monitoring Small Mammals

    Science.gov (United States)

    Glen, Alistair S.; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera’s field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera’s field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera’s field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps. PMID:23840790

  14. Photovoltaic mounting/demounting unit

    DEFF Research Database (Denmark)

    2014-01-01

    The present invention relates to a photovoltaic arrangement comprising a photovoltaic assembly comprising a support structure defining a mounting surface onto which a photovoltaic module is detachably mounted; and a mounting/demounting unit comprising at least one mounting/demounting apparatus which, when the mounting/demounting unit is moved along the mounting surface, causes the photovoltaic module to be mounted or demounted to the support structure; wherein the photovoltaic module comprises a carrier foil and wherein the total thickness of the photovoltaic module is below 500 μm. The present invention further relates to an associated method for mounting/demounting photovoltaic modules.

  15. Mobile Video in Everyday Social Interactions

    Science.gov (United States)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. The Internet connectivity of mobile phones enables fluent sharing of captured material, even in real time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera on the social environment in everyday life situations, mainly based on a study in which four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours relating to real-time mobile video communication, and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their own special characteristics: live video is used as a virtual window between places, whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns about privacy and trust between participating persons in all roles, largely due to the widely spreading possibilities of videos. Video in a social situation affects cameramen (who record), targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations), but also the other way around: the participants affect the video through their varying and evolving personal and communicational motivations for recording.

  16. Firefly: A HOT camera core for thermal imagers with enhanced functionality

    Science.gov (United States)

    Pillans, Luke; Harmer, Jack; Edwards, Tim

    2015-06-01

    Raising the operating temperature of mercury cadmium telluride infrared detectors from 80 K to above 160 K creates new applications for high-performance infrared imagers by vastly reducing the size, weight and power consumption of the integrated cryogenic cooler. Realizing the benefits of Higher Operating Temperature (HOT) requires a new kind of infrared camera core with the flexibility to address emerging applications in handheld, weapon-mounted and UAV markets. This paper discusses the Firefly core developed to address these needs by Selex ES in Southampton, UK. Firefly represents a fundamental redesign of the infrared signal chain, reducing power consumption and providing compatibility with low-cost, low-power Commercial Off-The-Shelf (COTS) computing technology. This paper describes key innovations in this signal chain: a ROIC purpose-built to minimize power consumption in the proximity electronics, GPU-based image processing of infrared video, and a software-customisable infrared core which can communicate wirelessly with other Battlespace systems.

  17. Advanced centering of mounted optics

    Science.gov (United States)

    Wenzel, Christian; Winkelmann, Ralf; Klar, Rainer; Philippen, Peter; Garden, Ron; Pearlman, Sasha; Pearlman, Guy

    2016-03-01

    Camera objectives and laser focusing units consist of complex lens systems with multiple lenses. The optical performance of such complex lens systems depends on the correct positioning of the lenses in the system. Deviations in location or angle within the system directly affect the achievable image quality. To optimize the achievable performance of lens systems, these errors can be corrected by machining the mount of the lens with respect to the optical axis. Innolite GmbH and Opto Alignment Technology have developed a novel machine for such center-turning operations. A confocal laser reflection measurement sensor determines the absolute position of the optical axis with reference to the spindle axis. As a strong advantage compared with autocollimator measurements, the Opto Alignment sensor is capable of performing centration and tilt measurements without changing objectives on any radius surface from 2 mm to infinity and lens diameters from 0.5 mm to 300 mm, including cylinder, aspheric, and parabolic surfaces. In addition, it performs significantly better on coated lenses. The optical axis is skewed and offset in reference to the spindle axis as determined by the measurement. Using the information about the mount and all reference surfaces, a machine program for an untrue turning process is calculated from this data in a fully automated manner. Since the optical axis is not collinear with the spindle axis, the diamond tool compensates for these linear and tilt deviations with small correction movements. This results in a simple machine setup where the control system works as an electronic alignment chuck. Remaining eccentricity of <1 μm and angular errors of <10 arcsec are typical alignment results.

  18. 2011 Japan tsunami survivor video based hydrograph and flow velocity measurements using LiDAR

    Science.gov (United States)

    Fritz, H. M.; Phillips, D. A.; Okayasu, A.; Shimozono, T.; Liu, H.; Mohammed, F.; Skanavis, V.; Synolakis, C. E.; Takahashi, T.

    2012-04-01

    On March 11, 2011, a magnitude Mw 9.0 earthquake occurred off the coast of Japan's Tohoku region causing catastrophic damage and loss of life. Numerous tsunami reconnaissance trips were conducted in Japan (Tohoku Earthquake and Tsunami Joint Survey Group). This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Yoriisohama, Kesennuma, Kamaishi and Miyako along Japan's Sanriku coast and the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were visited, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance from April 9 to 25. A follow-up survey from June 9 to 15, 2011 focused on terrestrial laser scanning (TLS) at locations with previously identified high quality eyewitness videos. We acquired precise topographic data using TLS at nine video sites with multiple scans acquired from different instrument positions at each site. These ground-based LiDAR measurements produce a 3-dimensional "point cloud" dataset. Digital photography from a scanner-mounted camera yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing of the TLS data in an absolute reference frame such as WGS84. We deployed a Riegl VZ-400 scanner (1550 nm wavelength laser, 42,000 measurements/second). The first step of the video analysis requires the calibration of the sector of view present in the eyewitness video recording, based on visually identifiable ground control points measured in the LiDAR point cloud data. In a second step the video image motion induced by the panning of the video camera was determined from subsequent raw color images by means of planar particle image velocimetry (PIV) applied to fixed objects in the field of view. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates. The mapping from video frame to real world coordinates follows the direct linear

  19. 2011 Japan tsunami current and flow velocity measurements from survivor videos using LiDAR

    Science.gov (United States)

    Fritz, H. M.; Phillips, D. A.; Okayasu, A.; Shimozono, T.; Liu, H.; Mohammed, F.; Skanavis, V.; Synolakis, C.; Takahashi, T.

    2011-12-01

    On March 11, 2011, a magnitude Mw 9.0 earthquake occurred off the coast of Japan's Tohoku region causing catastrophic damage and loss of life. Numerous tsunami reconnaissance trips were conducted in Japan (Tohoku Earthquake and Tsunami Joint Survey Group). This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Yoriisohama, Kesennuma, Kamaishi and Miyako along Japan's Sanriku coast and the subsequent video image calibration, processing and tsunami flow velocity analysis. Selected tsunami video recording sites were visited, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance from April 9 to 25. A follow-up survey from June 9 to 15, 2011 focused on terrestrial laser scanning (TLS) at locations with previously identified high quality eyewitness videos. We acquired precise topographic data using TLS at nine video sites with multiple scans acquired from different instrument positions at each site. These ground-based LiDAR measurements produce a 3-dimensional "point cloud" dataset. Digital photography from a scanner-mounted camera yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing of the TLS data in an absolute reference frame such as WGS84. We deployed a Riegl VZ-400 scanner (1550 nm wavelength laser, 42,000 measurements/second). The first step of the video analysis requires the calibration of the sector of view present in the eyewitness video recording, based on visually identifiable ground control points measured in the LiDAR point cloud data. In a second step the video image motion induced by the panning of the video camera was determined from subsequent raw color images by means of planar particle image velocimetry (PIV) applied to fixed objects in the field of view. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates. The mapping from video frame to real world coordinates follows the direct linear transformation
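
    The second processing step described in the two records above, recovering the image motion induced by camera panning from fixed objects in the field of view, can be approximated with phase correlation over a static image region. A minimal OpenCV sketch follows; the ROI choice is an assumption, and the authors used planar PIV rather than this exact method.

      import cv2
      import numpy as np

      # Rough stand-in for the panning-estimation step: phase correlation
      # over a window that contains only fixed objects (ROI is assumed).
      def panning_shift(frame_prev, frame_next, static_roi):
          x, y, w, h = static_roi
          a = np.float32(cv2.cvtColor(frame_prev[y:y + h, x:x + w],
                                      cv2.COLOR_BGR2GRAY))
          b = np.float32(cv2.cvtColor(frame_next[y:y + h, x:x + w],
                                      cv2.COLOR_BGR2GRAY))
          (dx, dy), _response = cv2.phaseCorrelate(a, b)
          return dx, dy  # apparent image shift caused by camera panning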

  20. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    Science.gov (United States)

    Allin, T.; Neubert, T.; Laursen, S.; Rasmussen, I. L.; Soula, S.

    2003-12-01

    In support of global ELF/VLF observations, HF measurements in France, and conjugate photometry/VLF observations in South Africa, we developed and operated a semi-automatic, remotely controlled video system for the observation of middle-atmospheric transient luminous events (TLEs). Installed at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over serial links from a local computer, and the video outputs were distributed to a pair of PCI frame grabbers in the computer. This setup allowed remote users to log in and operate the system over the internet. Event detection software provided means of recording and time-stamping single TLE video fields and thus eliminated the need for continuous human monitoring of TLE activity. The computer recorded and analyzed two parallel video streams at the full 50 Hz field rate, while uploading status images, TLE images, and system logs to a remote web server. The system detected more than 130 TLEs - mostly sprites - distributed over 9 active evenings. We have thus demonstrated the feasibility of remote agents for TLE observations, which are likely to find use in future ground-based TLE observation campaigns, or to be installed at remote sites in support of space-borne or other global TLE observation efforts.

  1. Development of camera technology for monitoring nests. Chapter 15

    Science.gov (United States)

    W. Andrew Cox; M. Shane Pruett; Thomas J. Benson; Scott J. Chiavacci; Frank R., III Thompson

    2012-01-01

    Photo and video technology has become increasingly useful in the study of avian nesting ecology. However, researchers interested in using camera systems are often faced with insufficient information on the types and relative advantages of available technologies. We reviewed the literature for studies of nests that used cameras and summarized them based on study...

  2. A GRAPH BASED BUNDLE ADJUSTMENT FOR INS-CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    D. Bender

    2013-08-01

    Full Text Available In this paper, we present a graph based approach for performing the system calibration of a sensor suite containing a fixed mounted camera and an inertial navigation system. The aim of the presented work is to obtain accurate direct georeferencing of camera images collected with small unmanned aerial systems. A prerequisite for using the pose measurements from the inertial navigation system as the exterior orientation of the camera is knowledge of the static offsets between these devices. Furthermore, the intrinsic parameters of the camera obtained in a laboratory tend to deviate slightly from their values during flights. This necessitates an in-flight calibration of the intrinsic camera parameters in addition to the mounting offsets between the two devices. The optimization of these values can be done by introducing them as parameters into a bundle adjustment process. We show how to solve this by exploiting a graph optimization framework, which is designed for the least-squares optimization of general error functions.

  3. a Graph Based Bundle Adjustment for Ins-Camera Calibration

    Science.gov (United States)

    Bender, D.; Schikora, M.; Sturm, J.; Cremers, D.

    2013-08-01

    In this paper, we present a graph based approach for performing the system calibration of a sensor suite containing a fixed mounted camera and an inertial navigation system. The aim of the presented work is to obtain accurate direct georeferencing of camera images collected with small unmanned aerial systems. A prerequisite for using the pose measurements from the inertial navigation system as the exterior orientation of the camera is knowledge of the static offsets between these devices. Furthermore, the intrinsic parameters of the camera obtained in a laboratory tend to deviate slightly from their values during flights. This necessitates an in-flight calibration of the intrinsic camera parameters in addition to the mounting offsets between the two devices. The optimization of these values can be done by introducing them as parameters into a bundle adjustment process. We show how to solve this by exploiting a graph optimization framework, which is designed for the least-squares optimization of general error functions.
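
    The two records above solve the problem with a graph optimization framework. Purely as a shape-of-the-problem sketch (not their implementation), the joint estimation of in-flight intrinsics and mounting offsets can be written as a nonlinear least-squares problem over reprojection residuals, here with scipy and deliberately simplified pose conventions.

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation as R

      def residuals(params, ins_rots, ins_pos, world_pts, obs_px):
          """Reprojection residuals; pose conventions are simplified."""
          fx, fy, cx, cy = params[:4]          # in-flight intrinsics
          lever = params[4:7]                  # INS-to-camera translation
          bore = R.from_rotvec(params[7:10])   # INS-to-camera rotation
          out = []
          for Rwb, twb, P, uv in zip(ins_rots, ins_pos, world_pts, obs_px):
              p_body = R.from_matrix(Rwb).inv().apply(P - twb)  # to INS body
              p_cam = bore.apply(p_body - lever)                # to camera
              u = fx * p_cam[:, 0] / p_cam[:, 2] + cx
              v = fy * p_cam[:, 1] / p_cam[:, 2] + cy
              out.append(np.column_stack([u, v]) - uv)
          return np.concatenate(out).ravel()

      # x0 = np.array([f0, f0, w / 2, h / 2, 0, 0, 0, 0, 0, 0])
      # fit = least_squares(residuals, x0, args=(rots, pos, pts, obs))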

  4. Economical Video Monitoring of Traffic

    Science.gov (United States)

    Houser, B. C.; Paine, G.; Rubenstein, L. D.; Parham, O. Bruce, Jr.; Graves, W.; Bradley, C.

    1986-01-01

    Data compression allows video signals to be transmitted economically on telephone circuits. Telephone lines transmit television signals to a remote traffic-control center. The lines also carry command signals from the center to the TV camera and compressor at the highway site. A video system with television cameras positioned at critical points on highways allows traffic controllers to determine visually, almost immediately, the exact cause of a traffic-flow disruption, e.g., accidents, breakdowns, or spills. Controllers can then dispatch appropriate emergency services and alert motorists to minimize traffic backups.

  5. The Future of Video

    OpenAIRE

    Li, F.

    2016-01-01

    Executive Summary: A range of technological innovations (e.g. smart phones and digital cameras), infrastructural advances (e.g. broadband and 3G/4G wireless networks) and platform developments (e.g. YouTube, Facebook, Snapchat, Instagram, Amazon, and Netflix) are collectively transforming the way video is produced, distributed, consumed, archived – and importantly, monetised. Changes have been observed well beyond the mainstream TV and film industries, and these changes are increasingl...

  6. Creating Gaze Annotations in Head Mounted Displays

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Qvarfordt, Pernilla

    2015-01-01

    To facilitate distributed communication in mobile settings, we developed GazeNote for creating and sharing gaze annotations in head mounted displays (HMDs). With gaze annotations it is possible to point out objects of interest within an image and add a verbal description. To create an annotation, the user simply captures an image using the HMD's camera, looks at an object of interest in the image, and speaks out the information to be associated with the object. The gaze location is recorded and visualized with a marker. The voice is transcribed using speech recognition. Gaze annotations can...

  7. Solid State Replacement of Rotating Mirror Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Frank, A M; Bartolick, J M

    2006-08-25

    Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution and dynamic range. The rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all solid state architecture, dubbed "In-situ Storage Image Sensor" or "ISIS" by Prof. Goji Etoh, has made its first appearance on the market and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluating the presently available technologies and on exploring the capabilities of the ISIS architecture. It is clear that, although there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, the ISIS architecture has the potential to approach their performance.

  8. Video Stabilization Using Feature Point Matching

    Science.gov (United States)

    Kulkarni, Shamsundar; Bormane, D. S.; Nalbalwar, S. L.

    2017-01-01

    Video captured by non-professionals often exhibits unanticipated effects such as image distortion and image blurring. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper an algorithm is proposed to stabilize jittery videos, so that a stable output video is attained without the jitter caused by the shaking of a handheld camera during recording. First, salient points in each frame of the input video are identified and processed, followed by optimization and stabilization of the video. The optimization addresses the quality of the video stabilization. This method has shown good results in terms of stabilization and removed distortion from output videos recorded in different circumstances.
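
    As an illustrative sketch of this kind of pipeline (not the authors' exact algorithm), OpenCV can track corner features between frames, fit a per-frame similarity transform, and accumulate a camera trajectory that is then smoothed; the input file name is assumed.

      import cv2
      import numpy as np

      # Illustrative feature-point stabilization pipeline (file name assumed)
      cap = cv2.VideoCapture("handheld.mp4")
      ok, prev = cap.read()
      prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
      transforms = []

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=30)
          p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                      p0, None)
          good0, good1 = p0[status == 1], p1[status == 1]
          m, _inliers = cv2.estimateAffinePartial2D(good0, good1)
          if m is not None:  # per-frame dx, dy, dtheta
              transforms.append([m[0, 2], m[1, 2],
                                 np.arctan2(m[1, 0], m[0, 0])])
          prev_gray = gray

      # Smooth the cumulative (dx, dy, dtheta) trajectory, e.g. by a moving
      # average, and warp each frame by the smoothed-minus-raw difference.
      trajectory = np.cumsum(transforms, axis=0)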

  9. Modification of the Miyake-Apple technique for simultaneous anterior and posterior video imaging of wet laboratory-based corneal surgery.

    Science.gov (United States)

    Tan, Johnson C H; Meadows, Howard; Gupta, Aanchal; Yeung, Sonia N; Moloney, Gregory

    2014-03-01

    The aim of this study was to describe a modification of the Miyake-Apple posterior video analysis for the simultaneous visualization of the anterior and posterior corneal surfaces during wet laboratory-based deep anterior lamellar keratoplasty (DALK). A human donor corneoscleral button was affixed to a microscope slide and placed onto a custom-made mounting box. A big bubble DALK was performed on the cornea in the wet laboratory. An 11-diopter intraocular lens was positioned over the aperture of the back camera of an iPhone. This served to video-record the posterior view of the corneoscleral button during big bubble formation. An overhead operating microscope with an attached video camcorder recorded the anterior view during the surgery. The anterior and posterior views of the wet laboratory-based DALK surgery were simultaneously captured and edited using video editing software, so that the formation of the big bubble can be studied. This video recording camera system has the potential to act as a valuable research and teaching tool in corneal lamellar surgery, especially regarding the behavior of big bubble formation in DALK.

  10. What do we do with all this video? Better understanding public engagement for image and video annotation

    Science.gov (United States)

    Wiener, C.; Miller, A.; Zykov, V.

    2016-12-01

    Advanced robotic vehicles are increasingly being used by oceanographic research vessels to enable more efficient and widespread exploration of the ocean, particularly the deep ocean. With cutting-edge capabilities mounted onto robotic vehicles, data at high resolutions are being generated more than ever before, enabling enhanced data collection and the potential for broader participation. For example, high resolution camera technology not only improves visualization of the ocean environment, but also expands the capacity to engage participants remotely through increased use of telepresence and virtual reality techniques. Schmidt Ocean Institute is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation and analysis, and open sharing of information. Telepresence-enabled research is an important component of Schmidt Ocean Institute's science research cruises, which this presentation will highlight. Schmidt Ocean Institute is one of the only research programs that makes its entire underwater vehicle dive series available online, creating a collection of video that enables anyone to follow deep sea research in real time. We encourage students, educators and the general public to take advantage of these freely available dive videos. Additionally, other SOI-supported internet platforms have engaged the public in image and video annotation activities. Examples of these new online platforms, which utilize citizen scientists to annotate scientific image and video data, will be provided. This presentation will include an introduction to SOI-supported video and image tagging citizen science projects, real-time robot tracking, live ship-to-shore communications, and an array of outreach activities that enable scientists to interact with the public and explore the ocean in fascinating detail.

  11. Towards User Experience-Driven Adaptive Uplink Video Transmission for Automotive Applications

    OpenAIRE

    Lottermann, Christian

    2016-01-01

    The focus of this thesis is to enable user experience-driven uplink video streaming from mobile video sources with limited computational capacity and to apply this to resource-constrained automotive environments. The first part investigates perceptual quality-aware encoding of videos, the second part proposes camera context-based estimators of temporal and spatial activities for videos captured by a front-facing camera of a vehicle, and the last part studies the upstreaming of videos from a m...

  12. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Full Text Available Nowadays smart mobile devices have enough processing power, memory, storage, and always-connected wireless communication bandwidth, which makes them suitable for any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system works as follows: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, with the proper information about the place that is now in the camera view.
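
    The core lookup step of such a system can be sketched as follows: given the device's GPS fix and compass heading, compute the bearing to each point of interest and keep those within the camera's field of view. The POI list, field-of-view angle, and function names below are illustrative assumptions, not taken from the paper:

    ```python
    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from the user to a point of interest."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = (math.cos(phi1) * math.sin(phi2)
             - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
        return math.degrees(math.atan2(y, x)) % 360

    def pois_in_view(user, heading_deg, pois, fov_deg=60):
        """Return names of POIs whose bearing lies inside the camera's FOV."""
        visible = []
        for name, lat, lon in pois:
            b = bearing_deg(user[0], user[1], lat, lon)
            diff = (b - heading_deg + 180) % 360 - 180  # signed angle difference
            if abs(diff) <= fov_deg / 2:
                visible.append(name)
        return visible

    pois = [("Town Hall", 60.170, 24.941), ("Cathedral", 60.171, 24.952)]
    print(pois_in_view((60.169, 24.940), heading_deg=45.0, pois=pois))
    ```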

  13. Real-time quality control on a smart camera

    Science.gov (United States)

    Xiao, Chuanwei; Zhou, Huaide; Li, Guangze; Hao, Zhihang

    2006-01-01

    A smart camera combines video sensing, high-level video processing, communication, and supporting functions within a single device. Such cameras are very important devices in quality control systems. This paper presents the prototype development of a smart camera for quality control. The smart camera is divided into four parts: a CMOS sensor, a digital signal processor (DSP), a CPLD, and a display device. In order to improve processing speed, low-level and high-level video processing algorithms for the embedded DSP-based platform are discussed. The algorithms can quickly and automatically detect product quality defects. All algorithms were tested in a Matlab-based prototype implementation and migrated to the smart camera. The smart camera prototype automatically processes the video data and streams the results to the display and control devices. Control signals are sent to the production line to adjust the production state within the required real-time constraints.
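
    The abstract does not detail the DSP algorithms, but as a hedged illustration of the kind of defect check such a pipeline might run, a simple approach is to threshold deviations from a known-good reference image and flag sufficiently large blobs (all thresholds are assumptions):

    ```python
    import cv2

    def find_defects(reference, sample, diff_thresh=40, min_area=25):
        """reference, sample: aligned grayscale images of the product.

        Returns bounding boxes of regions deviating from the reference."""
        diff = cv2.absdiff(reference, sample)
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Keep only blobs large enough to be real defects, not sensor noise.
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]
    ```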

  14. Establishing a common coordinate view in multiple moving aerial cameras

    Science.gov (United States)

    Sheikh, Yaser; Gritai, Alexei; Junejo, Imran; Muise, Robert; Mahalanobis, Abhijit; Shah, Mubarak

    2005-05-01

    A camera mounted on an aerial vehicle provides an excellent means of monitoring large areas of a scene. Utilizing several such cameras on different aerial vehicles allows further flexibility, in terms of increased visual scope and in the pursuit of multiple targets. The underlying concept of such co-operative sensing is to use inter-camera relationships to give global context to 'locally' obtained information at each camera. It is desirable, therefore, that the data collected at each camera and the inter-camera relationship discerned by the system be presented in a coherent visualization. Since the cameras are mounted on UAVs, large swaths of area may be traversed in a short period of time, and coherent visualization is therefore indispensable for applications like surveillance and reconnaissance. While most visualization approaches have hitherto focused on data from a single camera at a time, as a consequence of tracking objects across cameras, we show that widely separated mosaics can be aligned, both in space and color, for concurrent visualization. Results are shown on a number of real sequences, validating our qualitative models.

  15. Video Conferencing for a Virtual Seminar Room

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Fosgerau, A.; Hansen, Peter Søren K.

    2002-01-01

    A PC-based video conferencing system for a virtual seminar room is presented. The platform is enhanced with DSPs for audio and video coding and processing. A microphone array is used to facilitate audio based speaker tracking, which is used for adaptive beam-forming and automatic camera-control...

  16. Video systems for alarm assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  17. Face Recognition and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Swapnil Vitthal Tathe

    2017-07-01

    Full Text Available Advancement in computer vision technology and the availability of video capturing devices such as surveillance cameras have evoked new video processing applications. Research in video face recognition is mostly biased towards law enforcement applications. Applications involve human recognition based on face and iris, human-computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework that is capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score, and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance under changes due to illumination, environmental factors, scale, pose, and orientation.
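
    A minimal sketch of the detect-then-track portion of such a framework, using OpenCV's stock Haar cascade and a constant-velocity Kalman filter (the Gabor feature extraction and correlation matching stages are omitted, and noise covariances are illustrative assumptions):

    ```python
    import cv2
    import numpy as np

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    # Constant-velocity Kalman filter over (x, y, vx, vy), measuring (x, y).
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                    [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        pred = kf.predict()  # predicted face centre, even if detection fails
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces):
            x, y, w, h = faces[0]
            kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.circle(frame, (int(pred[0, 0]), int(pred[1, 0])), 4, (0, 0, 255), -1)
        cv2.imshow("track", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()
    ```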

  18. An Evaluation of Video-to-Video Face Verification

    NARCIS (Netherlands)

    Poh, N.; Chan, C.H.; Kittler, J.; Marcel, S.; Mc Cool, C.; Argones Rúa, E.; Alba Castro, J.L.; Villegas, M.; Paredes, R.; Štruc, V.; Pavešić, N.; Salah, A.A.; Fang, H.; Costen, N.

    2010-01-01

    Person recognition using facial features, e.g., mug-shot images, has long been used in identity documents. However, due to the widespread use of web-cams and mobile devices embedded with a camera, it is now possible to realize facial video recognition, rather than resorting to just still images. In

  19. VLSI-distributed architectures for smart cameras

    Science.gov (United States)

    Wolf, Wayne H.

    2001-03-01

    Smart cameras use video/image processing algorithms to capture images as objects, not as pixels. This paper describes architectures for smart cameras that take advantage of VLSI to improve the capabilities and performance of smart camera systems. Advances in VLSI technology aid in the development of smart cameras in two ways. First, VLSI allows us to integrate large amounts of processing power and memory along with image sensors. CMOS sensors are rapidly improving in performance, allowing us to integrate sensors, logic, and memory on the same chip. As we become able to build chips with hundreds of millions of transistors, we will be able to include powerful multiprocessors on the same chip as the image sensors. We call these image sensor/multiprocessor systems image processors. Second, VLSI allows us to put a large number of these powerful sensor/processor systems in a single scene. VLSI factories will produce large quantities of these image processors, making it cost-effective to use a large number of them in a single location. Image processors will be networked into distributed cameras that use many sensors as well as the full computational resources of all the available multiprocessors. Multiple cameras make a number of image recognition tasks easier: we can select the best view of an object, eliminate occlusions, and use 3D information to improve the accuracy of object recognition. This paper outlines approaches to distributed camera design: architectures for image processors and distributed cameras, algorithms to run on distributed smart cameras, and applications of VLSI distributed camera systems.

  20. Video Recording With a GoPro in Hand and Upper Extremity Surgery.

    Science.gov (United States)

    Vara, Alexander D; Wu, John; Shin, Alexander Y; Sobol, Gregory; Wiater, Brett

    2016-10-01

    Video recordings of surgical procedures are an excellent tool for presentations, analyzing self-performance, illustrating publications, and educating surgeons and patients. Recording the surgeon's perspective with high-resolution video in the operating room or clinic has become readily available, and advances in software improve the ease of editing these videos. A GoPro HERO 4 Silver or Black was mounted on a head strap and worn over the surgical scrub cap, above the loupes of the operating surgeon. Five live surgical cases were recorded with the camera. The videos were uploaded to a computer and subsequently edited with iMovie or the GoPro software. The optimal settings for both the Silver and Black editions, when operating room lights are used, were determined to be a narrow view, 1080p, 60 frames per second (fps), spot meter on, protune on with auto white balance, exposure compensation at -0.5, and without a polarizing lens. When the operating room lights were not used, the standard settings for a GoPro camera were found to be ideal for positioning and editing (4K, 15 fps, spot meter and protune off). The GoPro HERO 4 provides a high-quality, cost-effective video recording of upper extremity surgical procedures from the surgeon's perspective. Challenges include finding the optimal settings for each surgical procedure and the length of recording due to battery life limitations. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  1. Lights, Camera, AG-Tion: Promoting Agricultural and Environmental Education on Camera

    Science.gov (United States)

    Fuhrman, Nicholas E.

    2016-01-01

    Viewing of online videos and television segments has become a popular and efficient way for Extension audiences to acquire information. This article describes a unique approach to teaching on camera that may help Extension educators communicate their messages with comfort and personality. The S.A.L.A.D. approach emphasizes using relevant teaching…

  2. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available, and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  3. Video essay

    DEFF Research Database (Denmark)

    2015-01-01

    Camera movement has a profound influence on the way films look and the way films are experienced by spectators. In this visual essay Jakob Isak Nielsen proposes six major functions of camera movement in narrative cinema. Individual camera movements may serve more of these functions at the same time.

  4. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced Wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras acquiring high resolution video images up to 4Mpixels@60 fps or high frame rate video images up to about 1000 fps@512x512pixels.
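
    The abstract does not disclose the DVS's wavelet algorithm, but the general idea of wavelet-based frame compression can be sketched with PyWavelets: decompose, discard small coefficients, and reconstruct. The wavelet family, decomposition level, and kept fraction below are assumptions:

    ```python
    import numpy as np
    import pywt

    def wavelet_compress(frame, wavelet="db4", level=3, keep=0.05):
        """Zero all but the largest `keep` fraction of wavelet coefficients.

        frame: 2D numpy array (one grayscale video frame)."""
        coeffs = pywt.wavedec2(frame.astype(float), wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1 - keep)
        arr[np.abs(arr) < thresh] = 0.0  # the sparsified array is what compresses
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        return pywt.waverec2(coeffs, wavelet)
    ```

    In a real system the sparse coefficient array would then be entropy-coded before downlink; here only the analysis/threshold/synthesis round trip is shown.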

  5. Intelligent Model for Video Survillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

    Full Text Available A video surveillance system senses and tracks threatening events in a real-time environment. It helps prevent security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance has become key to addressing problems in public security. Such systems are mostly deployed on IP-based networks, so all the security threats that exist for IP-based applications may also threaten the video surveillance application, with the result of increased cybercrime, illegal video access, mishandling of videos, and so on. Hence, in this paper an intelligent model is proposed to secure the video surveillance system, ensuring safety and providing secure access to video.

  6. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  7. Composing with Images: A Study of High School Video Producers.

    Science.gov (United States)

    Reilly, Brian

    At Bell High School (Los Angeles, California), students have been using video cameras, computers and editing machines to create videos in a variety of forms and on a variety of topics; in this setting, video is the textual medium of expression. A study was conducted using participant-observation and interviewing over the course of one school year…

  8. Teacher Self-Captured Video: Learning to See

    Science.gov (United States)

    Sherin, Miriam Gamoran; Dyer, Elizabeth B.

    2017-01-01

    Videos are often used for demonstration and evaluation, but a more productive approach would be using video to support teachers' ability to notice and interpret classroom interactions. That requires thinking carefully about the physical aspects of shooting video--where the camera is placed and how easily student interactions can be heard--as well…

  9. A Method for Estimating Surveillance Video Georeferences

    Directory of Open Access Journals (Sweden)

    Aleksandar Milosavljević

    2017-07-01

    Full Text Available The integration of a surveillance camera video with a three-dimensional (3D geographic information system (GIS requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., position and orientation of the camera.
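
    A condensed sketch of the estimation step: given matched 2D frame coordinates and 3D geographic locations, OpenCV's solvePnP recovers the camera position and orientation, and solvePnPRefineLM applies the Levenberg-Marquardt refinement the abstract mentions. The intrinsics and point values below are illustrative placeholders (here the 2D points are synthesized from a known pose so the demo is self-checking):

    ```python
    import cv2
    import numpy as np

    # Assumed pinhole intrinsics for a 1280x720 frame (focal length is a guess).
    K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
    dist = np.zeros(4)

    # 3D geographic locations of matched features (e.g. from orthophoto + DEM).
    world_pts = np.array([[10.0, 5.0, 0.0], [2.0, 8.0, 1.0], [20.0, 3.0, 0.5],
                          [6.0, 1.0, 0.2], [15.0, 9.0, 2.0], [4.0, 4.0, 0.0]])

    # For the demo, synthesize the 2D frame coordinates from a known pose;
    # in practice these would be matched by hand in the video frame.
    true_rvec = np.array([0.1, -0.2, 0.05])
    true_tvec = np.array([[-8.0], [-4.0], [30.0]])
    image_pts, _ = cv2.projectPoints(world_pts, true_rvec, true_tvec, K, dist)

    # Recover the camera pose, then refine with Levenberg-Marquardt.
    ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist)
    rvec, tvec = cv2.solvePnPRefineLM(world_pts, image_pts, K, dist, rvec, tvec)

    R, _ = cv2.Rodrigues(rvec)
    print("camera position in world coordinates:", (-R.T @ tvec).ravel())
    ```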

  10. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen...

  11. Digital Pinhole Camera

    Science.gov (United States)

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…
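
    As a worked example of the relationships such an exploration rests on: magnification is the ratio of image distance to object distance, and a common rule of thumb for the sharpest pinhole is Lord Rayleigh's d ≈ 1.9·sqrt(f·λ). A quick check, with a mid-visible wavelength as the usual assumption:

    ```python
    import math

    f = 0.05          # pinhole-to-sensor distance: 50 mm, in metres
    lam = 550e-9      # assumed mid-visible (green) wavelength, in metres

    d_opt = 1.9 * math.sqrt(f * lam)   # Rayleigh's optimal pinhole diameter
    f_number = f / d_opt               # aperture ratio, governs exposure time
    print(f"optimal pinhole: {d_opt * 1e3:.2f} mm, about f/{f_number:.0f}")
    # -> roughly a 0.32 mm pinhole at about f/159 for this geometry
    ```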

  12. Video Analytics for Business Intelligence

    CERN Document Server

    Porikli, Fatih; Xiang, Tao; Gong, Shaogang

    2012-01-01

    Closed Circuit TeleVision (CCTV) cameras have been increasingly deployed pervasively in public spaces including retail centres and shopping malls. Intelligent video analytics aims to automatically analyze content of massive amount of public space video data and has been one of the most active areas of computer vision research in the last two decades. Current focus of video analytics research has been largely on detecting alarm events and abnormal behaviours for public safety and security applications. However, increasingly CCTV installations have also been exploited for gathering and analyzing business intelligence information, in order to enhance marketing and operational efficiency. For example, in retail environments, surveillance cameras can be utilised to collect statistical information about shopping behaviour and preference for marketing (e.g., how many people entered a shop; how many females/males or which age groups of people showed interests to a particular product; how long did they stay in the sho...

  13. Diversity-Aware Multi-Video Summarization

    Science.gov (United States)

    Panda, Rameswar; Mithun, Niluthpol Chowdhury; Roy-Chowdhury, Amit K.

    2017-10-01

    Most video summarization approaches have focused on extracting a summary from a single video; we propose an unsupervised framework for summarizing a collection of videos. We observe that each video in the collection may contain some information that other videos do not have, and thus exploring the underlying complementarity could be beneficial in creating a diverse informative summary. We develop a novel diversity-aware sparse optimization method for multi-video summarization by exploring the complementarity within the videos. Our approach extracts a multi-video summary which is both interesting and representative in describing the whole video collection. To efficiently solve our optimization problem, we develop an alternating minimization algorithm that minimizes the overall objective function with respect to one video at a time while fixing the other videos. Moreover, we introduce a new benchmark dataset, Tour20, that contains 140 videos with multiple human-created summaries, which were acquired in a controlled experiment. Finally, by extensive experiments on the new Tour20 dataset and several other multi-view datasets, we show that the proposed approach clearly outperforms the state-of-the-art methods on two problems: topic-oriented video summarization and multi-view video summarization in a camera network.

  14. 2011 Tohoku tsunami hydrographs, currents, flow velocities and ship tracks based on video and TLS measurements

    Science.gov (United States)

    Fritz, Hermann M.; Phillips, David A.; Okayasu, Akio; Shimozono, Takenori; Liu, Haijiang; Takeda, Seiichi; Mohammed, Fahad; Skanavis, Vassilis; Synolakis, Costas E.; Takahashi, Tomoyuki

    2013-04-01

    The March 11, 2011, magnitude Mw 9.0 earthquake off the Tohoku coast of Japan caused catastrophic damage and loss of life to a tsunami-aware population. The mid-afternoon tsunami arrival, combined with survivors equipped with cameras on top of vertical evacuation buildings, provided fragmented, spatially and temporally resolved inundation recordings. This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Miyako, Kamaishi, Kesennuma and Yoriisohama along Japan's Sanriku coast and the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were explored, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance in April, 2011. A follow-up survey in June, 2011 focused on terrestrial laser scanning (TLS) at locations with high quality eyewitness videos. We acquired precise topographic data using TLS at the video sites, producing a 3-dimensional "point cloud" dataset. A camera mounted on the Riegl VZ-400 scanner yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing. The original video recordings were recovered from eyewitnesses and the Japanese Coast Guard (JCG). The analysis of the tsunami videos follows an adapted four-step procedure originally developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh, Indonesia (Fritz et al., 2006). The first step requires the calibration of the sector of view present in the eyewitness video recording based on ground control points measured in the LiDAR data. In a second step the video image motion induced by the panning of the video camera was determined from subsequent images by particle image velocimetry (PIV) applied to fixed objects. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates with a direct linear transformation (DLT) procedure. Finally, the
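
    The third step's direct linear transformation can be sketched for the planar case as a homography estimated from four or more ground control points (this simplified 2D form is an assumption for illustration; the full DLT in the cited procedure may carry more parameters):

    ```python
    import numpy as np

    def dlt_homography(img_pts, world_pts):
        """Estimate H with world ~ H @ img (homogeneous), from >= 4 matches."""
        A = []
        for (u, v), (x, y) in zip(img_pts, world_pts):
            A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
            A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
        # The homography is the null-space of A, found via SVD (up to scale).
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        return Vt[-1].reshape(3, 3)

    def to_world(H, u, v):
        """Map an image coordinate to world coordinates on the reference plane."""
        x, y, w = H @ np.array([u, v, 1.0])
        return x / w, y / w
    ```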

  15. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single-camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms that had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots from all 5 cameras and the registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlap, but within the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfactory number for the camera calibration. In a first

  16. Calibration Procedures on Oblique Camera Setups

    Science.gov (United States)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

    Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single-camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms that had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots from all 5 cameras and the registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlap, but within the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfactory number for the camera calibration. In a first step with the help of
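
    Per-camera calibration of this kind is commonly bootstrapped from many views of known points; with OpenCV it reduces to the following pattern (a checkerboard target stands in here for the flight's measured ground points, and the board size and image paths are assumptions):

    ```python
    import glob

    import cv2
    import numpy as np

    pattern = (9, 6)  # inner corners of an assumed checkerboard target
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_pts, img_pts = [], []
    for path in glob.glob("calib_images/*.jpg"):  # hypothetical image folder
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    # Recovers focal lengths, principal point and distortion for one camera.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    print("RMS reprojection error:", rms)
    ```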

  17. Rebuilding Mount St. Helens

    Science.gov (United States)

    Schilling, Steve P.; Ramsey, David W.; Messerich, James A.; Thompson, Ren A.

    2006-01-01

    On May 18, 1980, Mount St. Helens, Washington exploded in a spectacular and devastating eruption that shocked the world. The eruption, one of the most powerful in the history of the United States, removed 2.7 cubic kilometers of rock from the volcano's edifice, the bulk of which had been constructed by nearly 4,000 years of lava-dome-building eruptions. In seconds, the mountain's summit elevation was lowered from 2,950 meters to 2,549 meters, leaving a north-facing, horseshoe-shaped crater over 2 kilometers wide. Following the 1980 eruption, Mount St. Helens remained active. A large lava dome began episodically extruding in the center of the volcano's empty crater. This dome-building eruption lasted until 1986 and added about 80 million cubic meters of rock to the volcano. During the two decades following the May 18, 1980 eruption, Crater Glacier formed tongues of ice around the east and west sides of the lava dome in the deeply shaded niche between the lava dome and the south crater wall. Long the most active volcano in the Cascade Range with a complex 300,000-year history, Mount St. Helens erupted again in the fall of 2004 as a new period of dome building began within the 1980 crater. Between October 2004 and February 2006, about 80 million cubic meters of dacite lava erupted immediately south of the 1980-86 lava dome. The erupting lava separated the glacier into two parts, first squeezing the east arm of the glacier against the east crater wall and then causing equally spectacular crevassing and broad uplift of the glacier's west arm. Vertical aerial photographs document dome growth and glacier deformation. These photographs enabled photogrammetric construction of a series of high-resolution digital elevation models (DEMs) showing changes from October 4, 2004 to February 9, 2006. From the DEMs, Geographic Information Systems (GIS) applications were used to estimate extruded volumes and growth rates of the new lava dome. The DEMs were also used to quantify dome

  18. Clamp-mount device

    Science.gov (United States)

    Clark, K. H. (Inventor)

    1983-01-01

    A clamp-mount device is disclosed for mounting equipment to an associated I-beam or similar structural member of the type having oppositely extending flanges, wherein the device comprises a base and a pair of oppositely facing clamping members carried diagonally on the base, clamping the flanges therebetween and having flange-receiving openings facing one another. Lock means are carried diagonally by the base opposite the clamping members, locking the flanges in the clamping members. A resilient hub is carried centrally of the base, engaging and biasing a back side of the flanges, maintaining them tightly clamped and facilitating use on vertical as well as horizontal members. The base turns about the hub to receive the flanges within the clamping members. Equipment may be secured to the base by any suitable means such as bolts in openings. Slidable gate latches secure the hinged locks in an upright locking position. The resilient hub includes a recess opening formed in the base and a rubber-like pad depressably and rotatably carried in this opening.

  19. The Use of Camera Traps in Wildlife

    Directory of Open Access Journals (Sweden)

    Yasin Uçarlı

    2013-11-01

    Full Text Available Camera traps are increasingly used in abundance and density estimates of wildlife species. Camera traps are a very good alternative to direct observation, particularly in steep terrain, densely vegetated areas, or for nocturnal species. The main reason for using camera traps is that they eliminate economic, personnel, and time losses while operating continuously at different points at the same time. Camera traps are motion- and heat-sensitive and can take a photo or video depending on the model. Crossover points and feeding or mating areas of the focal species are addressed as priority camera trap locations. The population size can be found from the images combined with capture-recapture methods, and the population density is then the population size divided by the effective sampling area. Mating and breeding season, habitat choice, group structures, and survival rates of the focal species can also be obtained from the images. Camera traps are thus very useful for obtaining the necessary data about particularly elusive species economically, in both planning and conservation efforts.
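
    For example, with the Lincoln-Petersen capture-recapture estimator (a standard choice; the review does not name a specific variant), population size and density follow directly. The counts and area below are invented for illustration:

    ```python
    # Lincoln-Petersen with the Chapman bias correction:
    # N ~ (M + 1)(C + 1) / (R + 1) - 1
    M = 12   # individuals identified in the first set of camera-trap images
    C = 15   # individuals identified in the second set
    R = 5    # individuals recognised in both sets

    N = (M + 1) * (C + 1) / (R + 1) - 1     # estimated population size
    area_km2 = 40.0                          # assumed effective sampling area
    print(f"population size ~ {N:.0f}, "
          f"density ~ {N / area_km2:.2f} individuals per km^2")
    # -> population size ~ 34, density ~ 0.84 individuals per km^2
    ```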

  20. SXI Prototype mirror mount

    Science.gov (United States)

    1995-01-01

    This final report describes the work performed from June 1993 to January 1995. The purpose of this contract was to provide optomechanical engineering and fabrication support to the Solar X-ray Imager (SXI) program in the areas of mirror, optical bench and camera assemblies of the telescope. The Center for Applied Optics (CAO) worked closely with the Optics and S&E technical staff of MSFC to develop and investigate the most viable and economical options for the design and fabrication of a number of parts for the various telescope assemblies. All the tasks under this delivery order have been successfully completed within budget and schedule.

  1. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary a lot. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones, and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid for presence capture cameras. Features like color fidelity, noise removal, resolution, and dynamic range form the base of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors, and new quality features must be validated as well. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? The work describes the quality factors which remain valid for presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.

  2. Production of 360° video : Introduction to 360° video and production guidelines

    OpenAIRE

    Ghimire, Sujan

    2016-01-01

    The main goal of this thesis project is to introduce the latest media technology and provide a complete guideline. This project is based on the production of 360° video using multiple GoPro cameras and was the first 360° video project at Helsinki Metropolia University of Applied Sciences. 360° video offers a totally different viewing experience and incomparable features. 360° x 180° video coverage and active participation from viewers are the best part of this vid...

  3. The Video Mesh: A Data Structure for Image-based Three-dimensional Video Editing

    OpenAIRE

    Chen, Jiawen; Paris, Sylvain; Wang, Jue; Matusik, Wojciech; Cohen, Michael; Durand, Fredo

    2011-01-01

    This paper introduces the video mesh, a data structure for representing video as 2.5D “paper cutouts.” The video mesh allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. The video mesh sparsely encodes optical flow as well as depth, and handles occlusion using local layering and alpha mattes. Motion is described by a sparse set of points tracked over time. Each point also stores a depth value. The video mesh is a trian...
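
    A hedged sketch of the core records implied by that description: a sparse set of tracked points, each carrying a per-frame position, a depth value, and a layer used at occlusions, plus per-frame alpha mattes. All field names here are assumptions for illustration, not the paper's actual structures:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class TrackedPoint:
        """One sparsely tracked scene point in a video-mesh-like structure."""
        positions: dict          # frame index -> (x, y) image position
        depth: float             # per-point depth estimate for 2.5D effects
        layer: int = 0           # local layering order used at occlusions

    @dataclass
    class VideoMesh:
        points: list = field(default_factory=list)        # TrackedPoint items
        alpha_mattes: dict = field(default_factory=dict)   # frame -> matte

        def points_in_frame(self, t):
            """All points visible at frame t, with their positions."""
            return [(p, p.positions[t]) for p in self.points
                    if t in p.positions]
    ```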

  4. Comprehensive Analysis and Evaluation of Background Subtraction Algorithms for Surveillance Video

    National Research Council Canada - National Science Library

    Yan Feng; Shengmei Luo; Yumin Tian; Shuo Deng; Haihong Zheng

    2014-01-01

    .... Then, the algorithms were implemented and tested using different videos with ground truth, such as baseline, dynamic background, camera jitter, and intermittent object motion and shadow scenarios...
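
    Representative of the family of algorithms evaluated in such comparisons, OpenCV ships ready-made background subtractors; a minimal MOG2 loop looks like this (the file name is hypothetical and the parameters are the library defaults, not those of the paper):

    ```python
    import cv2

    cap = cv2.VideoCapture("surveillance.avi")  # hypothetical input video
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=500, varThreshold=16, detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)   # 255 = foreground, 127 = shadow
        cv2.imshow("foreground mask", mask)
        if cv2.waitKey(30) == 27:        # Esc to quit
            break
    cap.release()
    ```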

  5. Real-time registration of video with ultrasound using stereo disparity

    Science.gov (United States)

    Wang, Jihang; Horvath, Samantha; Stetten, George; Siegel, Mel; Galeotti, John

    2012-02-01

    Medical ultrasound typically deals with the interior of the patient, with the exterior left to the original medical imaging modality, direct human vision. For the human operator scanning the patient, the view of the external anatomy is essential for correctly locating the ultrasound probe on the body and making sense of the resulting ultrasound images in their proper anatomical context. The operator, after all, is not expected to perform the scan with his eyes shut. Over the past decade, our laboratory has developed a method of fusing these two information streams in the mind of the operator, the Sonic Flashlight, which uses a half silvered mirror and miniature display mounted on an ultrasound probe to produce a virtual image within the patient at its proper location. We are now interested in developing a similar data fusion approach within the ultrasound machine itself, by, in effect, giving vision to the transducer. Our embodiment of this concept consists of an ultrasound probe with two small video cameras mounted on it, with software capable of locating the surface of an ultrasound phantom using stereo disparity between the two video images. We report its first successful operation, demonstrating a 3D rendering of the phantom's surface with the ultrasound data superimposed at its correct relative location. Eventually, automated analysis of these registered data sets may permit the scanner and its associated computational apparatus to interpret the ultrasound data within its anatomical context, much as the human operator does today.
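
    The surface-locating step rests on standard stereo block matching; a minimal sketch with OpenCV follows (it assumes the two camera images are already rectified, and the focal length and baseline values are illustrative, not the probe-mounted rig's):

    ```python
    import cv2
    import numpy as np

    left = cv2.imread("cam_left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical
    right = cv2.imread("cam_right.png", cv2.IMREAD_GRAYSCALE)  # rectified pair

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0

    # Depth is inversely proportional to disparity: Z = f * B / d, with
    # focal length f (pixels) and baseline B (metres) assumed below.
    f_px, baseline_m = 800.0, 0.02
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = f_px * baseline_m / disparity[valid]
    ```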

  6. Simulating low-cost cameras for augmented reality compositing.

    Science.gov (United States)

    Klein, Georg; Murray, David W

    2010-01-01

    Video see-through Augmented Reality adds computer graphics to the real world in real time by overlaying graphics onto a live video feed. To achieve a realistic integration of the virtual and real imagery, the rendered images should have a similar appearance and quality to those produced by the video camera. This paper describes a compositing method which models the artifacts produced by a small low-cost camera, and adds these effects to an ideal pinhole image produced by conventional rendering methods. We attempt to model and simulate each step of the imaging process, including distortions, chromatic aberrations, blur, Bayer masking, noise, sharpening, and color-space compression, all while requiring only an RGBA image and an estimate of camera velocity as inputs.
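
    A toy version of such a simulation chain, applying a radial warp, optical blur, and additive sensor noise to an ideal rendered image (coefficients are invented, and the paper's full model also covers chromatic aberration, Bayer masking, sharpening, and color-space compression):

    ```python
    import cv2
    import numpy as np

    def degrade(ideal, k1=-0.15, blur_sigma=1.2, noise_sigma=4.0):
        """Add camera-like artifacts to an ideal pinhole rendering."""
        h, w = ideal.shape[:2]
        # 1. Radial warp using OpenCV's distortion model (the sign of k1
        #    selects barrel vs. pincushion; an approximation of lens effects).
        K = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], np.float64)
        dist = np.array([k1, 0, 0, 0], np.float64)
        map1, map2 = cv2.initUndistortRectifyMap(
            K, dist, None, K, (w, h), cv2.CV_32FC1)
        img = cv2.remap(ideal, map1, map2, cv2.INTER_LINEAR)
        # 2. Optical blur.
        img = cv2.GaussianBlur(img, (0, 0), blur_sigma)
        # 3. Additive sensor noise.
        noise = np.random.normal(0, noise_sigma, img.shape)
        return np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)
    ```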

  7. Evaluating and Implementing JPEG XR Optimized for Video Surveillance

    OpenAIRE

    Yu, Lang

    2010-01-01

    This report describes both evaluation and implementation of the new coming image compression standard JPEG XR. The intention is to determine if JPEG XR is an appropriate standard for IP based video surveillance purposes. Video surveillance, especially IP based video surveillance, currently has an increasing role in the security market. To be a good standard for surveillance, the video stream generated by the camera is required to be low bit-rate, low latency on the network and at the same tim...

  8. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  9. Intelligent video surveillance systems and technology

    CERN Document Server

    Ma, Yunqian

    2009-01-01

    From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated

  10. Visual acuity, contrast sensitivity, and range performance with compressed motion video

    Science.gov (United States)

    Bijl, Piet; de Vries, Sjoerd C.

    2010-10-01

    Video of visual acuity (VA) and contrast sensitivity (CS) test charts in a complex background was recorded using a CCD color camera mounted on a computer-controlled tripod and was fed into real-time MPEG-2 compression/decompression equipment. The test charts were based on the triangle orientation discrimination (TOD) test method and contained triangle test patterns of different sizes and contrasts in four possible orientations. In a perception experiment, observers judged the orientation of the triangles in order to determine VA and CS thresholds at the 75% correct level. Three camera velocities (0, 1.0, and 2.0 deg/s, or 0, 4.1, and 8.1 pixels/frame) and four compression rates (no compression, 4 Mb/s, 2 Mb/s, and 1 Mb/s) were used. VA is shown to be rather robust to any combination of motion and compression. CS, however, dramatically decreases when motion is combined with high compression ratios. The measured thresholds were fed into the TOD target acquisition model to predict the effect of motion and compression on acquisition ranges for tactical military vehicles. The effect of compression on static performance is limited but strong with motion video. The data suggest that with the MPEG2 algorithm, the emphasis is on the preservation of image detail at the cost of contrast loss.

  11. A luminescence imaging system based on a CCD camera

    DEFF Research Database (Denmark)

    Duller, G.A.T.; Bøtter-Jensen, L.; Markey, B.G.

    1997-01-01

    described here has a maximum spatial resolution of 17 µm, though this may be varied under software control to alter the signal-to-noise ratio. The camera has been mounted on a Risø automated TL/OSL reader, and both the reader and the CCD are under computer control. In the near u.v. and blue part

  12. The VISTA infrared camera

    Science.gov (United States)

    Dalton, G. B.; Caldwell, M.; Ward, A. K.; Whalley, M. S.; Woodhouse, G.; Edeson, R. L.; Clark, P.; Beard, S. M.; Gallie, A. M.; Todd, S. P.; Strachan, J. M. D.; Bezawada, N. N.; Sutherland, W. J.; Emerson, J. P.

    2006-06-01

    We describe the integration and test phase of the construction of the VISTA Infrared Camera, a 64 Megapixel, 1.65 degree field of view 0.9-2.4 micron camera which will soon be operating at the cassegrain focus of the 4m VISTA telescope. The camera incorporates sixteen IR detectors and six CCD detectors which are used to provide autoguiding and wavefront sensing information to the VISTA telescope control system.

  13. Streak camera meeting summary

    Energy Technology Data Exchange (ETDEWEB)

    Dolan, Daniel H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bliss, David E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    Streak cameras are important for high-speed data acquisition in single event experiments, where the total recorded information (I) is shared between the number of measurements (M) and the number of samples (S). Topics of this meeting included: streak camera use at the national laboratories; current streak camera production; new tube developments and alternative technologies; and future planning. Each topic is summarized in the following sections.

  14. Airborne Network Camera Standard

    Science.gov (United States)

    2015-06-01

    Optical Systems Group Document 466-15, Airborne Network Camera Standard. Distribution A: approved for public release. ... airborne network camera systems have been used without the focus of standardization for interoperable command and control, storage, and data streaming ...

  15. Helmet-Mounted Displays (HMD)

    Data.gov (United States)

    Federal Laboratory Consortium — The Helmet-Mounted Display lab is responsible for monocular HMD day display evaluations; monocular HMD night vision performance processes; binocular HMD day display...

  16. Camera traps as sensor networks for monitoring animal communities

    OpenAIRE

    Kays, R.W.; Kranstauber, B.; Jansen, P.A.; C. Carbone; Rowcliffe, M.; Fountain, T; Tilak, S.

    2009-01-01

    Studying animal movement and distribution is of critical importance to addressing environmental challenges including invasive species, infectious diseases, climate and land-use change. Motion sensitive camera traps offer a visual sensor to record the presence of a species at a location, recording their movement in the Eulerian sense. Modern digital camera traps that record video present new analytical opportunities, but also new data management challenges. This paper describes our experience ...

  17. Camera Traps as Sensor Networks for Monitoring Animal Communities

    OpenAIRE

    Kays, R.W.; Tilak, S.; Kranstauber, B.; Jansen, P.A.; Carbone, C.; Rowcliff, M.J.; Fountain, T.; Eggert, J.; He, Z.

    2011-01-01

    Studying animal movement and distribution is of critical importance to addressing environmental challenges including invasive species, infectious diseases, climate and land-use change. Motion sensitive camera traps offer a visual sensor to record the presence of a broad range of species, providing location-specific information on movement and behavior. Modern digital camera traps that record video present not only new analytical opportunities, but also new data management challenges. This pa...

  18. MICADO: the E-ELT adaptive optics imaging camera : The E-ELT adaptive optics imaging camera

    NARCIS (Netherlands)

    Davies, Richard; Ageorges, N.; Barl, L.; Bedin, L. R.; Bender, R.; Bernardi, P.; Chapron, F.; Clenet, Y.; Deep, A.; Deul, E.; Drost, M.; Eisenhauer, F.; Falomo, R.; Fiorentino, G.; Förster Schreiber, N. M.; Gendron, E.; Genzel, R.; Gratadour, D.; Greggio, L.; Grupp, F.; Held, E.; Herbst, T.; Hess, H.-J.; Hubert, Z.; Jahnke, K.; Kuijken, K.; Lutz, D.; Magrin, D.; Muschielok, B.; Navarro, R.; Noyola, E.; Paumard, T.; Piotto, G.; Ragazzoni, R.; Renzini, A.; Rousset, G.; Rix, H.-W.; Saglia, R.; Tacconi, L.; Thiel, M.; Tolstoy, E.; Trippe, S.; Tromp, N.; Valentijn, E. A.; Verdoes Kleijn, G.; Wegner, M.; McLean, I.S.; Ramsay, S.K.; Takami, H.

    MICADO is the adaptive optics imaging camera for the E-ELT. It has been designed and optimised to be mounted to the LGS-MCAO system MAORY, and will provide diffraction limited imaging over a wide (~1 arcmin) field of view. For initial operations, it can also be used with its own simpler AO module

  19. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. In the first place it analyzes the current academic discussion on this subject and confronts different opinions of both supporters and objectors of the idea, that video games can be a full-fledged art form. The second point of this paper is to analyze the properties, that are inherent to video games, in order to find the reason, why cultural elite considers video games as i...

  20. Forearm Trajectory Measurement during Pitching Motion using an Elbow-mounted Sensor

    Science.gov (United States)

    Sagawa, Koichi; Abo, Shuko; Tsukamoto, Toshiaki; Kondo, Izumi

    This paper describes a measurement method for three-dimensional (3D) forearm movement during the pitching motion using an elbow-mounted sensor (3D sensor). The 3D sensor comprises two kinds of accelerometers with dynamic ranges of 4 [G] and 100 [G] and two kinds of gyroscopes with dynamic ranges of 300 [deg/s] and 4000 [deg/s], because sensors used in the measurement of sports activities require a wide dynamic range. The 3D sensor, attached to the forearm, measures 3D acceleration and angular velocity. The 3D trajectory of the forearm is estimated through double integration of the measured acceleration, which is first transformed from the moving coordinate system on the forearm to the fixed coordinate system. Because the estimated trajectory of the forearm is affected by numerical integration of measurement errors, the 3D trajectory error is reduced by imposing the known position and posture of the forearm at the end of the pitching motion. Results of the pitching experiment show that the 3D trajectory and angle of the forearm estimated by the 3D sensor agree with those measured from a video camera image within an error margin of around 10 %.
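
    A simplified sketch of the dead-reckoning core: integrate gyro rates to orientation, rotate body-frame acceleration into the fixed frame, remove gravity, and double-integrate. SciPy's Rotation is used for the attitude update; the paper's end-pose drift correction is reduced here to a comment:

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def trajectory(acc_body, gyro, dt, g=np.array([0.0, 0.0, -9.81])):
        """acc_body, gyro: (N, 3) arrays in the sensor frame; dt in seconds."""
        q = R.identity()
        vel = np.zeros(3)
        pos = np.zeros(3)
        path = [pos.copy()]
        for f_b, w in zip(acc_body, gyro):
            q = q * R.from_rotvec(w * dt)   # attitude update from gyro rates
            a_world = q.apply(f_b) + g      # specific force to world accel.
            vel = vel + a_world * dt        # first integration -> velocity
            pos = pos + vel * dt            # second integration -> position
            path.append(pos.copy())
        # In the paper, accumulated drift is then redistributed so that the
        # final position and posture match the known end-of-pitch pose.
        return np.array(path)
    ```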

  1. Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera

    Science.gov (United States)

    Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.

    2017-10-01

    Multispectral imaging is a widely used remote sensing technique, whose applications range from agriculture to environmental monitoring, from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera, called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near-infrared parts of the electromagnetic spectrum. Two versions are available, characterized by different sets of band-pass filters, inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the developed procedures for the geometric calibration and radiometric correction are presented in the paper.

  2. Comparison of vehicle-mounted forward-looking polarimetric infrared and downward-looking infrared sensors for landmine detection

    NARCIS (Netherlands)

    Cremer, F.; Schavemaker, J.G.M.; Jong, W. de; Schutte, K.

    2003-01-01

    This paper gives a comparison of two vehicle-mounted infrared systems for landmine detection. The first system is a downward-looking standard infrared camera using processing methods developed within the EU project LOTUS. The second system uses a forward-looking polarimetric infrared camera.

  3. NOAA Polyline Shapefile - Drop Camera transects, US Caribbean - Western Puerto Rico - Project NF-07-06-USVI-HAB - (2007), UTM 19N NAD83 (NCEI Accession 0131853)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a drop camera system. Photos and/or video were...

  4. NOAA Polyline Shapefile - Drop Camera transects, US Caribbean - Western Puerto Rico - Project NF-07-06-USVI-HAB - (2007), UTM 19N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a drop camera system. Photos and/or video were...

  5. NOAA Point Shapefile - Drop Camera transects, US Caribbean - Western Puerto Rico - Project NF-07-06-USVI-HAB - (2007), UTM 19N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a drop camera system. Photos and/or video were...

  6. NOAA Point Shapefile - Drop Camera transects, US Caribbean - Western Puerto Rico - Project NF-07-06-USVI-HAB - (2007), UTM 19N NAD83 (NCEI Accession 0131853)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a drop camera system. Photos and/or video were...

  7. NOAA Shapefile - Drop Camera Transects Lines, USVI 2011 , Seafloor Characterization of the US Caribbean - Nancy Foster - NF-11-1 (2011), UTM 20N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  8. NOAA Polyline Shapefile - Drop Camera transects, US Caribbean Virgin Passage and St. John Shelf - Project NF-03-10-USVI-HAB - (2010), UTM 20N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  9. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up...

  10. Measurement of thigmomorphogenesis and gravitropism by non-intrusive computerized video image processing

    Science.gov (United States)

    Jaffe, M. J.

    1984-01-01

    A video image processing instrument, DARWIN (Digital Analyser of Resolvable Whole-pictures by Image Numeration), was developed. It was programmed to measure stem or root growth and bending, and coupled to a specially mounted video camera so that it could automatically generate growth and bending curves during gravitropism. The growth of the plant is recorded on a video cassette recorder with a specially modified time-lapse function. At the end of the experiment, DARWIN analyses the growth or movement and prints out bending and growth curves. This system was used to measure thigmomorphogenesis in light-grown corn plants. If the plant is rubbed with an applied force load of 0.38 N, it grows faster than the unrubbed control, whereas 1.14 N retards its growth. Image analysis shows that most of the change in the rate of growth occurs in the first hour after rubbing. When DARWIN was used to measure gravitropism in dark-grown oat seedlings, it was found that the top side of the shoot contracts during the first hour of gravitational stimulus, whereas the bottom side begins to elongate after 10 to 15 minutes.

  11. Intraocular camera for retinal prostheses: Refractive and diffractive lens systems

    Science.gov (United States)

    Hauer, Michelle Christine

    The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.

  12. Real-time moving objects detection and tracking from airborne infrared camera

    Science.gov (United States)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2017-10-01

    Detecting and tracking moving objects in real-time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene and pursuing objects of interest. To fully exploit such a potential, versatile solutions are needed, but, in the literature, the majority of them work only under specific conditions regarding the considered scenario, the characteristics of the moving objects or the aircraft movements. In order to overcome these limitations, we propose a novel approach to the problem, based on the use of a cheap inertial navigation system (INS), mounted on the aircraft. To exploit jointly the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages performed iteratively on each acquired frame: the detection stage, in which a coarse detection map is computed using a local statistic that is both fast to calculate and robust to noise and self-deletion of the targeted objects; the registration stage, in which the position of the detected objects is coherently reported on a common reference frame by exploiting the INS data; and the tracking stage, in which the steady objects are rejected, the moving objects are tracked, and an estimation of their future position is computed, to be used in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences, recreating different environments and different movements of the aircraft. Promising results have been obtained, both in terms of detection and false alarm rate, and in terms of accuracy in the estimation of position and velocity of the objects. In addition, for each frame, the detection and tracking map has been generated by the algorithm before the acquisition of the subsequent frame, proving its
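
    As a rough illustration of this three-stage loop (a minimal sketch under stated assumptions, not the authors' implementation; the INS-derived homography and all names are invented for the example):

        import numpy as np
        from scipy.ndimage import uniform_filter, label, center_of_mass

        def detect(frame, win=15, k=4.0):
            """Coarse detection map from a local statistic: flag pixels deviating
            from the local mean by more than k local standard deviations, then
            return blob centroids as (row, col) pairs."""
            f = frame.astype(float)
            mu = uniform_filter(f, win)
            var = np.maximum(uniform_filter(f * f, win) - mu * mu, 1e-9)
            mask = np.abs(f - mu) > k * np.sqrt(var)
            lbl, n = label(mask)
            return np.array(center_of_mass(mask, lbl, range(1, n + 1)))

        def register(points_px, H_ins):
            """Report detections in a common reference frame via the INS-derived
            homography H_ins (hypothetical 3x3 matrix) for the current frame."""
            if len(points_px) == 0:
                return points_px
            p = np.c_[points_px[:, ::-1], np.ones(len(points_px))]  # (x, y, 1)
            q = p @ H_ins.T
            return q[:, :2] / q[:, 2:3]

        def update_tracks(tracks, detections, gate=5.0):
            """Nearest-neighbour association against a constant-velocity
            prediction; steady objects (negligible velocity) would be pruned."""
            for t in tracks:
                pred = t["pos"] + t["vel"]
                if len(detections):
                    d = np.linalg.norm(detections - pred, axis=1)
                    j = int(d.argmin())
                    if d[j] < gate:
                        t["vel"], t["pos"] = detections[j] - t["pos"], detections[j]
            return tracks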

  13. Video signals integrator (VSI) system architecture

    Science.gov (United States)

    Kasprowicz, Grzegorz; Pastuszak, Grzegorz; Poźniak, Krzysztof; Trochimiuk, Maciej; Abramowski, Andrzej; Gaska, Michal; Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Pawel; Jewartowski, Blazej; Frasunek, Przemysław; Nalbach-Moszynska, Małgorzata; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata

    2016-09-01

    The purpose of the project is the development of a platform which integrates video signals from many sources. The signals can be sourced from existing analogue CCTV surveillance installations, recent internet-protocol (IP) cameras or single cameras of any type. The system will consist of portable devices that provide conversion, encoding, transmission and archiving. The sharing subsystem will use a distributed file system and a user console which provides simultaneous access to any of the video streams in real time. The system is fully modular, so it can be extended on both the hardware and software sides. Because standard modular technology is used, partial modernization of the technology is also possible over a long period of operation.

  14. Videography-Based Unconstrained Video Analysis.

    Science.gov (United States)

    Li, Kang; Li, Sheng; Oh, Sangmin; Fu, Yun

    2017-05-01

    Video analysis and understanding play a central role in visual intelligence. In this paper, we aim to analyze unconstrained videos, by designing features and approaches to represent and analyze videography styles in the videos. Videography denotes the process of making videos. The unconstrained videos are defined as the long duration consumer videos that usually have diverse editing artifacts and significant complexity of contents. We propose to construct a videography dictionary, which can be utilized to represent every video clip as a sequence of videography words. In addition to semantic features, such as foreground object motion and camera motion, we also incorporate two novel interpretable features to characterize videography, including the scale information and the motion correlations. We then demonstrate that, by using statistical analysis methods, the unique videography signatures extracted from different events can be automatically identified. For real-world applications, we explore the use of videography analysis for three types of applications, including content-based video retrieval, video summarization (both visual and textual), and videography-based feature pooling. In the experiments, we evaluate the performance of our approach and other methods on a large-scale unconstrained video dataset, and show that the proposed approach significantly benefits video analysis in various ways.
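
    To make the dictionary idea concrete, here is a minimal sketch (ours, not the authors' code) in which hypothetical per-clip videography descriptors are clustered into a vocabulary and each video is then encoded as a sequence of videography words:

        import numpy as np
        from sklearn.cluster import KMeans

        def build_dictionary(clip_features, n_words=64):
            """clip_features: rows are fixed-length clips from the training corpus,
            columns are videography descriptors (camera motion, foreground motion,
            scale, motion correlation), all assumed precomputed upstream."""
            return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(clip_features)

        def encode_video(dictionary, clip_features):
            # A video becomes a sequence of videography words (cluster indices),
            # one per clip, ready for statistical sequence analysis.
            return dictionary.predict(clip_features)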

  15. Digital Camera Control for Faster Inspection

    Science.gov (United States)

    Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel

    2009-01-01

    Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low-resolution images to quickly spot problem areas and can then cause a rapid transition to high-resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.

  16. Detector Mount Design for IGRINS

    Directory of Open Access Journals (Sweden)

    Jae Sok Oh

    2014-06-01

    The Immersion Grating Infrared Spectrometer (IGRINS) is a near-infrared wide-band high-resolution spectrograph jointly developed by the Korea Astronomy and Space Science Institute and the University of Texas at Austin. IGRINS employs three HAWAII-2RG Focal Plane Array (H2RG) FPA detectors. We present the design and fabrication of the detector mount for the H2RG detector. The detector mount consists of a detector housing, an ASIC housing, a Field Flattener Lens (FFL) mount, and a support base frame. The detector and the ASIC housing should be kept at 65 K and the support base frame at 130 K; they are therefore thermally isolated by a support made of GFRP material. The detector mount is designed with features for finely adjusting the position of the detector surface along the optical axis, as well as the yaw and pitch angles, so that it can be utilized as an alignment compensator for the optical system. We optimized the structural stability and thermal characteristics of the mount design using computer-aided 3D modeling and finite element analysis. Based on the structural and thermal analysis, the designed detector mount meets the optical stability tolerance and system thermal requirements. The actual detector mount, fabricated to this design, has been installed in the IGRINS cryostat and has successfully passed a vacuum test and a cold test.

  17. Kitt Peak speckle camera.

    Science.gov (United States)

    Breckinridge, J B; McAlister, H A; Robinson, W G

    1979-04-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and to illustrate the isoplanatic patch of the atmosphere.

  18. Mars Observer Camera

    OpenAIRE

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; J. Veverka (Massachusetts Institute of Technology, Cambridge, U.S.A.); Ravine, M. A.; Soulanille, T.A.

    1992-01-01

    The Mars Observer camera (MOC) is a three-component system (one narrow-angle and two wide-angle cameras) designed to take high spatial resolution pictures of the surface of Mars and to obtain lower spatial resolution, synoptic coverage of the planet's surface and atmosphere. The cameras are based on the “push broom” technique; that is, they do not take “frames” but rather build pictures, one line at a time, as the spacecraft moves around the planet in its orbit. MOC is primarily a telescope f...

  19. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    What does the use of cameras entail for the production of cultural critique in anthropology? Visual anthropological analysis and cultural critique starts at the very moment a camera is brought into the field or existing visual images are engaged. The framing, distances, and interactions between...... researchers, cameras, and filmed subjects already inherently comprise analytical decisions. It is these ethnographic qualities inherent in audiovisual and photographic imagery that make it of particular value to a participatory anthropological enterprise that seeks to resist analytic closure and seeks instead...

  20. Digital Video Stabilization with Inertial Fusion

    OpenAIRE

    Freeman, William John

    2013-01-01

    As computing power becomes more and more available, robotic systems are moving away from active sensors for environmental awareness and transitioning into passive vision sensors.  With the advent of teleoperation and real-time video tracking of dynamic environments, the need to stabilize video onboard mobile robots has become more prevalent. This thesis presents a digital stabilization method that incorporates inertial fusion with a Kalman filter.  The camera motion is derived visually by tra...

  1. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    Directory of Open Access Journals (Sweden)

    Steven Nicholas Graves, MA

    2015-02-01

    Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  2. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through...... YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  3. stil113_0401r -- Point coverage of locations of still frames extracted from video imagery which depict sediment types

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  4. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  5. TARGETLESS CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2012-09-01

    In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The subsequent photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations and comparisons with real data and different case studies are presented to show the potential of the proposed methodology.
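
    The correspondence step can be pictured with a short sketch (ours, not the paper's; ORB stands in for whatever detector the authors used): natural image points are detected and matched, and a RANSAC-estimated fundamental matrix rejects outliers. The surviving correspondences would then feed a self-calibrating bundle adjustment, which is beyond this fragment.

        import cv2
        import numpy as np

        def natural_correspondences(img1, img2):
            """Feature-based matching of natural points between two images of a
            well-textured scene, with robust outlier rejection."""
            orb = cv2.ORB_create(4000)
            k1, d1 = orb.detectAndCompute(img1, None)
            k2, d2 = orb.detectAndCompute(img2, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(d1, d2)
            p1 = np.float32([k1[m.queryIdx].pt for m in matches])
            p2 = np.float32([k2[m.trainIdx].pt for m in matches])
            # Robust estimation: a RANSAC fundamental matrix rejects mismatches.
            _, inliers = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.999)
            keep = inliers.ravel() == 1
            return p1[keep], p2[keep]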

  6. A 3D-video-based computerized analysis of social and sexual interactions in rats.

    Science.gov (United States)

    Matsumoto, Jumpei; Urakawa, Susumu; Takamura, Yusaku; Malcher-Lopes, Renato; Hori, Etsuro; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao

    2013-01-01

    A large number of studies have analyzed social and sexual interactions between rodents in relation to neural activity. Computerized video analysis has been successfully used to detect numerous behaviors quickly and objectively; however, to date only 2D video recording has been used, which cannot determine the 3D locations of animals and encounters difficulties in tracking animals when they are overlapping, e.g., when mounting. To overcome these limitations, we developed a novel 3D video analysis system for examining social and sexual interactions in rats. A 3D image was reconstructed by integrating images captured by multiple depth cameras at different viewpoints. The 3D positions of body parts of the rats were then estimated by fitting skeleton models of the rats to the 3D images using a physics-based fitting algorithm, and various behaviors were recognized based on the spatio-temporal patterns of the 3D movements of the body parts. Comparisons between the data collected by the 3D system and those by visual inspection indicated that this system could precisely estimate the 3D positions of body parts for 2 rats during social and sexual interactions with few manual interventions, and could compute the traces of the 2 animals even during mounting. We then analyzed the effects of AM-251 (a cannabinoid CB1 receptor antagonist) on male rat sexual behavior, and found that AM-251 decreased movements and trunk height before sexual behavior, but increased the duration of head-head contact during sexual behavior. These results demonstrate that the use of this 3D system in behavioral studies could open the door to new approaches for investigating the neuroscience of social and sexual behavior.

  7. A 3D-video-based computerized analysis of social and sexual interactions in rats.

    Directory of Open Access Journals (Sweden)

    Jumpei Matsumoto

    A large number of studies have analyzed social and sexual interactions between rodents in relation to neural activity. Computerized video analysis has been successfully used to detect numerous behaviors quickly and objectively; however, to date only 2D video recording has been used, which cannot determine the 3D locations of animals and encounters difficulties in tracking animals when they are overlapping, e.g., when mounting. To overcome these limitations, we developed a novel 3D video analysis system for examining social and sexual interactions in rats. A 3D image was reconstructed by integrating images captured by multiple depth cameras at different viewpoints. The 3D positions of body parts of the rats were then estimated by fitting skeleton models of the rats to the 3D images using a physics-based fitting algorithm, and various behaviors were recognized based on the spatio-temporal patterns of the 3D movements of the body parts. Comparisons between the data collected by the 3D system and those by visual inspection indicated that this system could precisely estimate the 3D positions of body parts for 2 rats during social and sexual interactions with few manual interventions, and could compute the traces of the 2 animals even during mounting. We then analyzed the effects of AM-251 (a cannabinoid CB1 receptor antagonist) on male rat sexual behavior, and found that AM-251 decreased movements and trunk height before sexual behavior, but increased the duration of head-head contact during sexual behavior. These results demonstrate that the use of this 3D system in behavioral studies could open the door to new approaches for investigating the neuroscience of social and sexual behavior.

  8. Solar panel parallel mounting configuration

    Science.gov (United States)

    Mutschler, Jr., Edward Charles (Inventor)

    1998-01-01

    A spacecraft includes a plurality of solar panels interconnected with a power coupler and an electrically operated device to provide power to the device when the solar cells are insolated. The solar panels are subject to bending distortion when entering or leaving eclipse. Spacecraft attitude disturbances are reduced by mounting each of the solar panels to an elongated boom made from a material with a low coefficient of thermal expansion, so that the bending of one panel is not communicated to the next. The boom may be insulated to reduce its bending during changes in insolation. A particularly advantageous embodiment mounts each panel to the boom with a single mounting, which may be a hinge. The single mounting prevents transfer of bending moments from the panel to the boom.

  9. Fast Picometer Mirror Mount Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is a 6DOF controllable mirror mount with high dynamic range and fast tip/tilt capability for space based applications. It will enable the...

  10. On the accuracy of a video-based drill-guidance solution for orthopedic and trauma surgery: preliminary results

    Science.gov (United States)

    Magaraggia, Jessica; Kleinszig, Gerhard; Wei, Wei; Weiten, Markus; Graumann, Rainer; Angelopoulou, Elli; Hornegger, Joachim

    2014-03-01

    Over the last years, several methods have been proposed to guide the physician during reduction and fixation of bone fractures. Available solutions often use bulky instrumentation inside the operating room (OR). These usually consist of a stereo camera placed outside the operative field, and optical markers directly attached to both the patient and the surgical instrumentation held by the surgeon. Recently proposed techniques try to reduce the required additional instrumentation as well as the radiation exposure to both patient and physician. In this paper, we present the adaptation and the first implementation of our recently proposed video camera-based solution for screw fixation guidance. Based on the simulations conducted in our previous work, we mounted a small camera on a drill in order to recover its tip position and axis orientation with respect to our custom-made drill sleeve with attached markers. Since drill-position accuracy is critical, we thoroughly evaluated the accuracy of our implementation. We used an optical tracking system for ground truth data collection. For this purpose, we built a custom plate reference system and attached reflective markers to both the instrument and the plate. Free drilling was then performed 19 times. The position of the drill axis was continuously recovered using both our video camera solution and the tracking system for comparison. The recorded data covered targeting, perforation of the surface bone by the drill bit and bone drilling. The orientation of the instrument axis and the position of the instrument tip were recovered with an accuracy of 1.60 +/- 1.22 degrees and 2.03 +/- 1.36 mm, respectively.
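
    A plausible core of such a camera-on-drill pipeline (our sketch under stated assumptions, not the authors' implementation) is marker-based pose estimation: solve PnP for the sleeve markers seen by the drill-mounted camera, then chain a fixed camera-to-tip transform measured once offline. Every name and input below is hypothetical.

        import cv2
        import numpy as np

        def drill_pose(marker_xyz, marker_uv, K, dist, T_tip_in_cam):
            """marker_xyz: Nx3 marker positions on the drill sleeve (sleeve frame);
            marker_uv: Nx2 detections in the drill-mounted camera image;
            K, dist: intrinsics from prior calibration; T_tip_in_cam: fixed 4x4
            pose of the tool tip frame expressed in the camera frame."""
            ok, rvec, tvec = cv2.solvePnP(marker_xyz, marker_uv, K, dist)
            if not ok:
                raise RuntimeError("PnP failed")
            R, _ = cv2.Rodrigues(rvec)
            T_sleeve_to_cam = np.eye(4)          # maps sleeve coords -> camera coords
            T_sleeve_to_cam[:3, :3] = R
            T_sleeve_to_cam[:3, 3] = tvec.ravel()
            T_cam_in_sleeve = np.linalg.inv(T_sleeve_to_cam)
            T_tip_in_sleeve = T_cam_in_sleeve @ T_tip_in_cam
            tip_position = T_tip_in_sleeve[:3, 3]    # instrument tip in sleeve frame
            axis_direction = T_tip_in_sleeve[:3, 2]  # tool z-axis as the drill axis
            return tip_position, axis_direction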

  11. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Robotic and Security Systems Dept.]

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency, but should instead lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  12. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency, but should instead lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  13. Computational imaging for miniature cameras

    Science.gov (United States)

    Salahieh, Basel

    Miniature cameras play a key role in numerous imaging applications ranging from endoscopy and metrology inspection devices to smartphones and head-mounted acquisition systems. However, due to the physical constraints, the imaging conditions, and the low quality of small optics, their imaging capabilities are limited in terms of the delivered resolution, the acquired depth of field, and the captured dynamic range. Computational imaging jointly addresses the imaging system and the reconstruction algorithms to bypass the traditional limits of optical systems and deliver better restorations for various applications. The scene is encoded into a set of efficient measurements which can then be computationally decoded to output a richer estimate of the scene than the raw images captured by conventional imagers. In this dissertation, three task-based computational imaging techniques are developed to make low-quality miniature cameras capable of delivering realistic high-resolution reconstructions, providing full-focus imaging, and acquiring depth information for high dynamic range objects. For the superresolution task, a non-regularized direct superresolution algorithm is developed to achieve realistic restorations without being penalized by improper assumptions (e.g., optimizers, priors, and regularizers) made in the inverse problem. An adaptive frequency-based filtering scheme is introduced to upper bound the reconstruction errors while still producing more fine details than previous methods under realistic imaging conditions. For the full-focus imaging task, a computational depth-based deconvolution technique is proposed to bring a scene captured by an ordinary fixed-focus camera to full focus based on a depth-variant point spread function prior. The ringing artifacts are suppressed on three levels: block tiling to eliminate boundary artifacts, adaptive reference maps to reduce ringing initiated by sharp edges, and block-wise deconvolution or

  14. Neutron cameras for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P. [ITER San Diego Joint Work Site, La Jolla, CA (United States)] [and others]

    1998-12-31

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from ¹⁶N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with ¹⁶N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  15. Mounting clips for panel installation

    Energy Technology Data Exchange (ETDEWEB)

    Cavieres, Andres; Al-Haddad, Tristan; Goodman, Joseph

    2017-07-11

    A photovoltaic panel mounting clip comprising a base, central indexing tabs, flanges, lateral indexing tabs, and vertical indexing tabs. The mounting clip removably attaches one or more panels to a beam or the like structure, both mechanically and electrically. It provides secure locking of the panels in all directions, while providing guidance in all directions for accurate installation of the panels to the beam or the like structure.

  16. Video surveillance using JPEG 2000

    Science.gov (United States)

    Dufaux, Frederic; Ebrahimi, Touradj

    2004-11-01

    This paper describes a video surveillance system which is composed of three key components, smart cameras, a server, and clients, connected through IP-networks in wired or wireless configurations. The system has been designed so as to protect the privacy of people under surveillance. Smart cameras are based on JPEG 2000 compression where an analysis module allows for events detection and regions of interest identification. The resulting regions of interest can then be encoded with better quality and scrambled. Compressed video streams are scrambled and signed for the purpose of privacy and data integrity verification using JPSEC compliant methods. The same bitstream may also be protected for robustness to transmission errors based on JPWL compliant methods. The server receives, stores, manages and transmits the video sequences on wired and wireless channels to a variety of clients and users with different device capabilities, channel characteristics and preferences. Use of seamless scalable coding of video sequences prevents any need for transcoding operations at any point in the system.

  17. The head-mounted microscope.

    Science.gov (United States)

    Chen, Ting; Dailey, Seth H; Naze, Sawyer A; Jiang, Jack J

    2012-04-01

    Microsurgical equipment has greatly advanced since the inception of the microscope into the operating room. These advancements have allowed for superior surgical precision and better post-operative results. This study focuses on the use of the Leica HM500 head-mounted microscope for the operating phonosurgeon. The head-mounted microscope has an optical zoom from 2× to 9× and provides a working distance from 300 mm to 700 mm. The headpiece, with its articulated eyepieces, adjusts easily to head shape and circumference, and offers a focus function, which is either automatic or manually controlled. We performed five microlaryngoscopic operations utilizing the head-mounted microscope with successful results. By creating a more ergonomically favorable operating posture, a surgeon may be able to obtain greater precision and success in phonomicrosurgery. Phonomicrosurgery requires the precise manipulation of long-handled cantilevered instruments through the narrow bore of a laryngoscope. The head-mounted microscope shortens the working distance compared with a stand microscope, thereby increasing arm stability, which may improve surgical precision. Also, the head-mounted design permits flexibility in head position, enabling operator comfort, and delaying musculoskeletal fatigue. A head-mounted microscope decreases the working distance and provides better ergonomics in laryngoscopic microsurgery. These advances provide the potential to promote precision in phonomicrosurgery. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.

  18. Limits on surveillance: frictions, fragilities and failures in the operation of camera surveillance.

    NARCIS (Netherlands)

    Dubbeld, L.

    2004-01-01

    Public video surveillance tends to be discussed in either utopian or dystopian terms: proponents maintain that camera surveillance is the perfect tool in the fight against crime, while critics argue that the use of security cameras is central to the development of a panoptic, Orwellian surveillance

  19. Declarative camera control for automatic cinematography

    Energy Technology Data Exchange (ETDEWEB)

    Christianson, D.B.; Anderson, S.E.; Li-wei He [Univ. of Washington, Seattle, WA (United States)] [and others]

    1996-12-31

    Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically-placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.
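
    As a loose illustration of what "declarative" means here (invented syntax, not DCCL's actual grammar), an idiom can be written as data that names subjects, framings and exit conditions, leaving concrete camera placement to a planner:

        from dataclasses import dataclass

        @dataclass
        class Shot:
            subject: str   # who or what to frame
            framing: str   # e.g. "closeup", "medium", "long"
            angle: str     # e.g. "external", "internal", "apex"
            until: str     # declarative exit condition, not a camera path

        # A hypothetical two-shot conversation idiom: the idiom states what to
        # show and when to cut; a solver picks camera placements satisfying it.
        CONVERSATION = [
            Shot(subject="actor_a", framing="medium", angle="external", until="actor_b_speaks"),
            Shot(subject="actor_b", framing="medium", angle="external", until="actor_a_speaks"),
        ]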

  20. Video enhancement effectiveness for target detection

    Science.gov (United States)

    Simon, Michael; Fischer, Amber; Petrov, Plamen

    2011-05-01

    Unmanned aerial vehicles (UAVs) capture real-time video data of military targets while keeping the warfighter at a safe distance. This keeps soldiers out of harm's way while they perform intelligence, surveillance and reconnaissance (ISR) and close-air support troops in contact (CAS-TIC) situations. The military also wants to use UAV video to achieve force multiplication. One method of achieving effective force multiplication involves fielding numerous UAVs with cameras and having multiple videos processed simultaneously by a single operator. However, monitoring multiple video streams is difficult for operators when the videos are of low quality. To address this challenge, we researched several promising video enhancement algorithms that focus on improving video quality. In this paper, we discuss our video enhancement suite and provide examples of video enhancement capabilities, focusing on stabilization, dehazing, and denoising. We provide results that show the effects of our enhancement algorithms on target detection and tracking algorithms. These results indicate that there is potential to assist the operator in identifying and tracking relevant targets with aided target recognition even on difficult video, increasing the force multiplier effect of UAVs. This work also forms the basis for human factors research into the effects of enhancement algorithms on ISR missions.

  1. Sensor 17 Thermal Isolation Mounting Structure (TIMS) Design Improvements

    Energy Technology Data Exchange (ETDEWEB)

    Enstrom, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

    2015-09-04

    The SENSOR 17 thermographic camera weighs approximately 0.5 lb, has a fundamental mode of 167 Hz, and experiences 0.75 W of heat leakage in through the TIMS. The configuration, shown in Figure 1, is comprised of four 300-series SST washers paired in tandem with P.E.I. (Ultem 100) washers. The SENSOR 17 sensor is mounted to a 300-series stainless plate with A-shaped arms. The plate can be assumed to be at ambient temperature (≈293 K) and the I.R. mount needs to be cooled to 45 K. It is attached to the tip of a cryocooler by a 'cold strap' and is assumed to be at the temperature of the cold strap (≈45 K). During flights SENSOR 17 experiences excitations at frequencies centered around 10-30 Hz, 60 Hz, and 120 Hz from the aircraft flight environment. The temporal progression described below depicts the first modal shape at the system's resonant frequency. This simulation indicates that modal articulation will cause a pitch rate of the camera with respect to the body axis of the airplane. This articulation shows up as flutter in the camera.

  2. Summarization of Surveillance Video Sequences Using Face Quality Assessment

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.; Rahmati, Mohammad

    2011-01-01

    Surveillance cameras working constantly in public places, such as airports and banks, produce huge amounts of video data. Faces in such videos can be extracted in real time. However, most of these detected faces are either redundant or useless. Redundant information adds computational costs to facial...

  3. What Video Styles can do for User Research

    DEFF Research Database (Denmark)

    Blauhut, Daniela; Buur, Jacob

    2009-01-01

    the video camera actually plays in studying people and establishing design collaboration still exists. In this paper we argue that traditional documentary film approaches like Direct Cinema and Cinéma Vérité show that a purely observational approach may not be most valuable for user research and that video...

  4. Content Area Vocabulary Videos in Multiple Contexts: A Pedagogical Tool

    Science.gov (United States)

    Webb, C. Lorraine; Kapavik, Robin Robinson

    2015-01-01

    The authors challenged pre-service teachers to digitally define a social studies or mathematical vocabulary term in multiple contexts using a digital video camera. The researchers sought to answer the following questions: 1. How will creating a video for instruction affect pre-service teachers' attitudes about teaching with technology, if at all?…

  5. Cellphones in Classrooms Land Teachers on Online Video Sites

    Science.gov (United States)

    Honawar, Vaishali

    2007-01-01

    Videos of teachers that students taped in secrecy are all over online sites like YouTube and MySpace. Angry teachers, enthusiastic teachers, teachers clowning around, singing, and even dancing are captured, usually with camera phones, for the whole world to see. Some students go so far as to create elaborately edited videos, shot over several…

  6. Building 3D Event Logs for Video Investigation

    NARCIS (Netherlands)

    Dang, T.K.; Worring, M.; Bui, T.D.

    2015-01-01

    In scene investigation, creating a video log captured using a handheld camera is more convenient and more complete than taking photos and notes. By introducing video analysis and computer vision techniques, it is possible to build a spatio-temporal representation of the investigation. Such a

  7. Automated Video Quality Assessment for Deep-Sea Video

    Science.gov (United States)

    Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.

    2015-12-01

    Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: single, directional light sources produce uneven luminance distributions and shadows; remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source and can also influence the camera's autofocus and auto white-balance routines. 3. Color distortion (low contrast): the rate of absorption of light in water varies by wavelength, and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating
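
    Such screening can start with very cheap per-frame statistics; the sketch below (ours, not ONC's, with thresholds left to per-site tuning) flags the three challenge classes above with brightness, contrast, and illumination-spread measures:

        import numpy as np

        def frame_quality(gray):
            """Cheap per-frame screening measures for pre-analysis quality
            assessment. gray: 8-bit grayscale frame as a 2D array."""
            g = gray.astype(float) / 255.0
            return {
                "mean_brightness": g.mean(),   # flags under-/over-exposure
                "rms_contrast": g.std(),       # low values suggest turbid water
                # Uneven illumination: spread of mean intensity across bands.
                "illumination_spread": np.ptp([b.mean() for b in np.array_split(g, 16)]),
            }

        # A clip would be skipped, or routed to enhancement, when for example
        # rms_contrast falls below a site-specific threshold.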

  8. Overview of SWIR detectors, cameras, and applications

    Science.gov (United States)

    Hansen, Marc P.; Malchow, Douglas S.

    2008-03-01

    Imaging in the short wave infrared (SWIR) can bring useful contrast to situations and applications where visible or thermal imaging cameras are ineffective. This paper will define the short wave infrared technology and discuss developing imaging applications; then describe newly available 2-D (area) and 1-D (linear) arrays made with indium-gallium-arsenide (InGaAs), while presenting the wide range of applications with images and videos. Applications mentioned will be web inspection of continuous processes such as high temperature manufacturing processes, agricultural raw material cleaning and sorting, plastics recycling of automotive and consumer products, and a growing biological imaging technique, Spectral-Domain Optical Coherence Tomography.

  9. Measurement of the nonuniformity of first responder thermal imaging cameras

    Science.gov (United States)

    Lock, Andrew; Amon, Francine

    2008-04-01

    Police, firefighters, and emergency medical personnel are examples of first responders that are utilizing thermal imaging cameras in a very practical way every day. However, few performance metrics have been developed to assist first responders in evaluating the performance of thermal imaging technology. This paper describes one possible metric for evaluating the nonuniformity of thermal imaging cameras. Several commercially available uncooled focal plane array cameras were examined. Because of proprietary issues, each camera was treated as a 'black box'. In these experiments, an extended-area blackbody (18 cm square) was placed very close to the objective lens of the thermal imaging camera. The resultant video output from the camera was digitized at a resolution of 640x480 pixels and a grayscale depth of 10 bits. The nonuniformity was calculated as the standard deviation of the digitized image pixel intensities divided by the mean of those pixel intensities. This procedure was repeated for each camera at several blackbody temperatures in the range from 30 °C to 260 °C. It was observed that the nonuniformity initially increases with temperature, then asymptotically approaches a maximum value. Nonuniformity is also applied to the calculation of the spatial frequency response, as well as providing a noise floor. The testing procedures described herein are being developed as part of a suite of tests to be incorporated into a performance standard covering thermal imaging cameras for first responders.
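
    The metric itself is a one-liner; a sketch of the computation and the temperature sweep (the capture function is hypothetical):

        import numpy as np

        def nonuniformity(frame):
            """Nonuniformity as defined above: standard deviation of the
            digitized pixel intensities divided by their mean. frame: array of
            10-bit counts (e.g. 640x480) captured while an extended-area
            blackbody fills the field of view."""
            px = frame.astype(float)
            return px.std() / px.mean()

        # Sweep over blackbody setpoints; grab_frame() is a hypothetical capture call.
        # curve = {T: nonuniformity(grab_frame(T)) for T in range(30, 261, 10)}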

  10. Solar panel truss mounting systems and methods

    Science.gov (United States)

    Al-Haddad, Tristan Farris; Cavieres, Andres; Gentry, Russell; Goodman, Joseph; Nolan, Wade; Pitelka, Taylor; Rahimzadeh, Keyan; Brooks, Bradley; Lohr, Joshua; Crooks, Ryan; Porges, Jamie; Rubin, Daniel

    2015-10-20

    An exemplary embodiment of the present invention provides a solar panel truss mounting system comprising a base and a truss assembly coupled to the base. The truss assembly comprises a first panel rail mount, second panel rail mount parallel to the first panel rail mount, base rail mount parallel to the first and second panel rail mounts, and a plurality of support members. A first portion of the plurality of support members extends between the first and second panel rail mounts. A second portion of the plurality of support members extends between the first panel rail mount and the base rail mount. A third portion of the plurality of support members extends between the second panel rail mount and the base rail mount. The system can further comprise a plurality of connectors for coupling a plurality of photovoltaic solar panels to the truss assembly.

  11. Solar panel truss mounting systems and methods

    Science.gov (United States)

    Al-Haddad, Tristan Farris; Cavieres, Andres; Gentry, Russell; Goodman, Joseph; Nolan, Wade; Pitelka, Taylor; Rahimzadeh, Keyan; Brooks, Bradley; Lohr, Joshua; Crooks, Ryan; Porges, Jamie; Rubin, Daniel

    2016-06-28

    An exemplary embodiment of the present invention provides a solar panel truss mounting system comprising a base and a truss assembly coupled to the base. The truss assembly comprises a first panel rail mount, second panel rail mount parallel to the first panel rail mount, base rail mount parallel to the first and second panel rail mounts, and a plurality of support members. A first portion of the plurality of support members extends between the first and second panel rail mounts. A second portion of the plurality of support members extends between the first panel rail mount and the base rail mount. A third portion of the plurality of support members extends between the second panel rail mount and the base rail mount. The system can further comprise a plurality of connectors for coupling a plurality of photovoltaic solar panels to the truss assembly.

  12. Solar panel truss mounting systems and methods

    Energy Technology Data Exchange (ETDEWEB)

    Al-Haddad, Tristan Farris; Cavieres, Andres; Gentry, Russell; Goodman, Joseph; Nolan, Wade; Pitelka, Taylor; Rahimzadeh, Keyan; Brooks, Bradley; Lohr, Joshua; Crooks, Ryan; Porges, Jamie; Rubin, Daniel

    2018-01-30

    An exemplary embodiment of the present invention provides a solar panel truss mounting system comprising a base and a truss assembly coupled to the base. The truss assembly comprises a first panel rail mount, second panel rail mount parallel to the first panel rail mount, base rail mount parallel to the first and second panel rail mounts, and a plurality of support members. A first portion of the plurality of support members extends between the first and second panel rail mounts. A second portion of the plurality of support members extends between the first panel rail mount and the base rail mount. A third portion of the plurality of support members extends between the second panel rail mount and the base rail mount. The system can further comprise a plurality of connectors for coupling a plurality of photovoltaic solar panels to the truss assembly.

  13. The VISTA IR camera

    Science.gov (United States)

    Dalton, Gavin B.; Caldwell, Martin; Ward, Kim; Whalley, Martin S.; Burke, Kevin; Lucas, John M.; Richards, Tony; Ferlet, Marc; Edeson, Ruben L.; Tye, Daniel; Shaughnessy, Bryan M.; Strachan, Mel; Atad-Ettedgui, Eli; Leclerc, Melanie R.; Gallie, Angus; Bezawada, Nagaraja N.; Clark, Paul; Bissonauth, Nirmal; Luke, Peter; Dipper, Nigel A.; Berry, Paul; Sutherland, Will; Emerson, Jim

    2004-09-01

    The VISTA IR Camera has now completed its detailed design phase and is on schedule for delivery to ESO's Cerro Paranal Observatory in 2006. The camera consists of 16 Raytheon VIRGO 2048x2048 HgCdTe arrays in a sparse focal plane sampling a 1.65 degree field of view. A 1.4m diameter filter wheel provides slots for 7 distinct science filters, each comprising 16 individual filter panes. The camera also provides autoguiding and curvature sensing information for the VISTA telescope, and relies on tight tolerancing to meet the demanding requirements of the f/1 telescope design. The VISTA IR camera is unusual in that it contains no cold pupil-stop, but rather relies on a series of nested cold baffles to constrain the light reaching the focal plane to the science beam. In this paper we present a complete overview of the status of the final IR Camera design, its interaction with the VISTA telescope, and a summary of the predicted performance of the system.

  14. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera capable of withstanding a total dose of 10⁶-10⁸ rad was developed. To develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, and pan/tilt control) was designed around the concept of remote control. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  15. Using Photogrammetry to Estimate Tank Waste Volumes from Video

    Energy Technology Data Exchange (ETDEWEB)

    Field, Jim G. [Washington River Protection Solutions, LLC, Richland, WA (United States)

    2013-03-27

    Washington River Protection Solutions (WRPS) contracted with HiLine Engineering & Fabrication, Inc. to assess the accuracy of photogrammetry tools as compared to video Camera/CAD Modeling System (CCMS) estimates. This test report documents the results of using photogrammetry to estimate the volume of waste in tank 241-C-104 from post-retrieval videos and results using photogrammetry to estimate the volume of waste piles in the CCMS test video.

  16. Active learning in camera calibration through vision measurement application

    Science.gov (United States)

    Li, Xiaoqin; Guo, Jierong; Wang, Xianchun; Liu, Changqing; Cao, Binfang

    2017-08-01

    Since cameras are increasingly used in scientific applications as well as in applications requiring precise visual information, effective calibration of such cameras is becoming more important. There are many reasons why measurements of objects are inaccurate. The largest is lens distortion. Another detrimental influence on evaluation accuracy is perspective distortion in the image, which occurs whenever the camera cannot be mounted perpendicular to the objects to be measured. Overall, it is very important for students to understand how to correct lens distortions, that is, camera calibration. Once the camera is calibrated and the images are rectified, it is possible to obtain undistorted measurements in world coordinates. This paper presents how students can develop a sense of active learning for the mathematical camera model alongside the theoretical scientific basics. The authors present theoretical and practical lectures with the goal of deepening the students' understanding of the mathematical models of area-scan cameras and of having them build a practical vision measurement process themselves.
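
    A classroom-sized demonstration of the calibrate-then-rectify workflow described above might look like this (an OpenCV-based sketch; file names and the 9x6 board size are placeholders):

        import cv2
        import numpy as np

        # Calibrate from a few checkerboard images, then rectify a measurement
        # image so that undistorted world-coordinate measurements are possible.
        board = (9, 6)
        objp = np.zeros((board[0] * board[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

        obj_pts, img_pts, size = [], [], None
        for fname in ["calib_01.png", "calib_02.png", "calib_03.png"]:
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, board)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)
                size = gray.shape[::-1]

        # K: camera matrix, dist: lens distortion coefficients.
        _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
        undistorted = cv2.undistort(cv2.imread("measurement.png"), K, dist)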

  17. A tiny VIS-NIR snapshot multispectral camera

    Science.gov (United States)

    Geelen, Bert; Blanch, Carolina; Gonzalez, Pilar; Tack, Nicolaas; Lambrechts, Andy

    2015-03-01

    Spectral imaging can reveal a lot of hidden details about the world around us, but is currently confined to laboratory environments due to the need for complex, costly and bulky cameras. Imec has developed a unique spectral sensor concept in which the spectral unit is monolithically integrated on top of a standard CMOS image sensor at wafer level, hence enabling the design of compact, low cost and high acquisition speed spectral cameras with a high design flexibility. This flexibility has previously been demonstrated by imec in the form of three spectral camera architectures: firstly a high spatial and spectral resolution scanning camera, secondly a multichannel snapshot multispectral camera and thirdly a per-pixel mosaic snapshot spectral camera. These snapshot spectral cameras sense an entire multispectral data cube at one discrete point in time, extending the domain of spectral imaging towards dynamic, video-rate applications. This paper describes the integration of our per-pixel mosaic snapshot spectral sensors inside a tiny, portable and extremely user-friendly camera. Our prototype demonstrator cameras can acquire multispectral image cubes, either of 272x512 pixels over 16 bands in the VIS (470-620nm) or of 217x409 pixels over 25 bands in the VNIR (600-900nm) at 170 cubes per second for normal machine vision illumination levels. The cameras themselves are extremely compact based on Ximea xiQ cameras, measuring only 26x26x30mm, and can be operated from a laptop-based USB3 connection, making them easily deployable in very diverse environments.
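
    The per-pixel mosaic concept maps onto a simple rearrangement: with a 4x4 filter mosaic, every 4x4 tile of sensor pixels contributes one pixel to each of 16 band images. A sketch (ours, assuming a mosaic-aligned raw frame whose dimensions divide evenly by the pattern):

        import numpy as np

        def mosaic_to_cube(raw, pattern=(4, 4)):
            """Rearrange a mosaic-filtered raw frame into a multispectral cube.
            With a 4x4 mosaic, a 1088x2048 frame becomes a 272x512x16 cube,
            matching the VIS figures quoted above."""
            ph, pw = pattern
            h, w = raw.shape
            cube = np.empty((h // ph, w // pw, ph * pw), raw.dtype)
            for i in range(ph):
                for j in range(pw):
                    cube[:, :, i * pw + j] = raw[i::ph, j::pw]
            return cube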

  18. HIGH SPEED CAMERA

    Science.gov (United States)

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is thereby possible.

  19. 3-D Flow Visualization with a Light-field Camera

    Science.gov (United States)

    Thurow, B.

    2012-12-01

    Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, 3C velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3-D structure of the turbulent boundary layer. [Figure captions: schematic illustrating the concept of a plenoptic camera, where each pixel represents both the position and angle of light rays entering the camera, information that can be used to computationally refocus an image after it has been acquired; instantaneous 3D velocity field of a turbulent boundary layer determined using light-field data captured by a plenoptic camera.]
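
    The velocity step reduces to locating correlation peaks between interrogation windows of the two reconstructed particle volumes; a minimal FFT-based sketch (one window only; a real 3D PIV code tiles the volume into windows and adds sub-voxel peak interpolation):

        import numpy as np

        def window_displacement(vol_a, vol_b):
            """Displacement of the particle field between two reconstructed
            volumes, from the peak of their FFT-based 3D cross-correlation."""
            A = np.fft.fftn(vol_a - vol_a.mean())
            B = np.fft.fftn(vol_b - vol_b.mean())
            corr = np.fft.ifftn(np.conj(A) * B).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap indices to signed shifts (velocity = shift / pulse interval).
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))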

  20. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This chapter focuses on methodological issues that arise when using (digital) video for research communication, not least online. Video has long been used in research for data collection and for research communication. With digitization and the internet, however, new opportunities and challenges have emerged for communicating and distributing research results to different target groups via video. At the same time, classic methodological problems, such as the researcher's positioning in relation to what is being studied, remain relevant. Both classic and new issues are discussed in the chapter, which frames the discussion around different possible positionings: communicator, storyteller, or dialogist. These positions relate to genres within 'academic video'. Finally, a methodological toolbox is presented with tools for planning...

  1. Wide angle pinhole camera

    Science.gov (United States)

    Franke, J. M.

    1978-01-01

    Hemispherical refracting element gives pinhole camera 180 degree field-of-view without compromising its simplicity and depth-of-field. Refracting element, located just behind pinhole, bends light coming in from sides so that it falls within image area of film. In contrast to earlier pinhole cameras that used water or other transparent fluids to widen field, this model is not subject to leakage and is easily loaded and unloaded with film. Moreover, by selecting glass with different indices of refraction, field at film plane can be widened or reduced.

  2. Mounting clips for panel installation

    Science.gov (United States)

    Cavieres, Andres; Al-Haddad, Tristan; Goodman, Joseph; Valdes, Francisco

    2017-02-14

    An exemplary mounting clip for removably attaching panels to a supporting structure comprises a base, spring locking clips, a lateral flange, a lever flange, and a spring bonding flange. The spring locking clips extend upwardly from the base. The lateral flange extends upwardly from a first side of the base. The lateral flange comprises a slot having an opening configured to receive at least a portion of one of the one or more panels. The lever flange extends outwardly from the lateral flange. The spring bonding flange extends downwardly from the lever flange. At least a portion of the spring bonding flange comprises a serrated edge for gouging at least a portion of the one or more panels when the one or more panels are attached to the mounting clip, to electrically and mechanically couple the one or more panels to the mounting clip.

  3. Quick-disconnect harness system for helmet-mounted displays

    Science.gov (United States)

    Bapu, P. T.; Aulds, M. J.; Fuchs, Steven P.; McCormick, David M.

    1992-10-01

    We have designed a pilot's harness-mounted, high-voltage quick-disconnect connector with 62 pins, to transmit voltages up to 13.5 kV and video signals with 70 MHz bandwidth, for a binocular helmet-mounted display system. It connects and disconnects with power off, and disconnects 'hot' without pilot intervention and without producing external sparks or exposing hot embers to the explosive cockpit environment. We have implemented a procedure in which the high-voltage pins disconnect inside a hermetically sealed unit before the physical separation of the connector. The 'hot' separation triggers a crowbar circuit in the high-voltage power supplies for additional protection. Conductor locations and shields are designed to reduce capacitance in the circuit and avoid crosstalk among adjacent circuits. The quick-disconnect connector and wiring harness are human-engineered to ensure pilot safety and mobility. The connector backshell is equipped with two hybrid video amplifiers to improve the clarity of the video signals. Shielded wires and coaxial cables are molded as a multi-layered ribbon for maximum flexibility between the pilot's harness and helmet. Stiff cabling is provided between the quick-disconnect connector and the aircraft console to control behavior during seat ejection. The components of the system have been successfully tested for safety, performance, ergonomic considerations, and reliability.

  4. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real World Videos. The workshops were run on December 4, 2016, in Cancun, Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers...

  5. From different angles : Exploring and applying the design potential of video

    NARCIS (Netherlands)

    Pasman, G.J.

    2012-01-01

    Recent developments in both hardware and software have brought video within the scope of design students as a new visual design tool. Being more and more equipped with cameras, for example in their smartphones, and video editing programs on their computers, they are increasingly using video to record their research activities or present their design ideas.

  6. The Mount Wilson Optical Shop during the Second World War

    Science.gov (United States)

    Abrahams, P.

    2004-12-01

    During the Second World War, the Optical Shop of Mount Wilson Observatory, located in Pasadena, engaged in a variety of exacting and pioneering ventures in optical design and fabrication. Roof prisms for military optics were produced on a large scale, leading to the production of an instruction manual, for guidance in other workshops. Triple mirrors, or autocollimating corner cubes, were another precision part made in large numbers. Aerial photography was extensively developed. Test procedures for measuring resolution of lenses were researched. Various camera shutters and film sweep mechanisms were devised. The most significant work concerned Schmidt cameras, for possible use in night-time aerial photography. Variations included a solid Schmidt, and the Schmidt Cassegrain, which was fabricated for the first time at MWO. Key figures include Don Hendrix, Roger Hayward, Aden Meinel, and Walter Adams.

  7. Compact video synopsis via global spatiotemporal optimization.

    Science.gov (United States)

    Nie, Yongwei; Xiao, Chunxia; Sun, Hanqiu; Li, Ping

    2013-10-01

    Video synopsis aims at providing condensed representations of video data sets that can be easily captured from digital cameras nowadays, especially for daily surveillance videos. Previous work in video synopsis usually moves active objects along the time axis, which inevitably causes collisions among the moving objects if compressed much. In this paper, we propose a novel approach for compact video synopsis using a unified spatiotemporal optimization. Our approach globally shifts moving objects in both the spatial and temporal domains, shifting objects temporally to reduce the length of the video and shifting colliding objects spatially to avoid visible collision artifacts. Furthermore, using a multilevel patch relocation (MPR) method, the moving space of the original video is expanded into a compact background based on environmental content to fit with the shifted objects. The shifted objects are finally composited with the expanded moving space to obtain the high-quality video synopsis, which is more condensed while remaining free of collision artifacts. Our experimental results have shown that the compact video synopsis we produced can be browsed quickly, preserves relative spatiotemporal relationships, and avoids motion collisions.

  8. Privacy-protecting video surveillance

    Science.gov (United States)

    Wickramasuriya, Jehan; Alhazzazi, Mohanned; Datt, Mahesh; Mehrotra, Sharad; Venkatasubramanian, Nalini

    2005-02-01

    Forms of surveillance are very quickly becoming an integral part of crime control policy, crisis management, social control theory, and community consciousness. Video surveillance, in turn, has been used as a simple and effective solution to many of these problems. However, privacy-related concerns have been expressed over the development and deployment of this technology. Used properly, video cameras help expose wrongdoing, but typically at the cost of privacy to those not involved in any maleficent activity. This work describes the design and implementation of a real-time, privacy-protecting video surveillance infrastructure that fuses additional sensor information (e.g. Radio-frequency Identification) with video streams and an access control framework in order to make decisions about how and when to display the individuals under surveillance. This video surveillance system is a particular instance of a more general paradigm of privacy-protecting data collection. In this paper we describe in detail the video processing techniques used in order to achieve real-time tracking of users in pervasive spaces while utilizing the additional sensor data provided by various instrumented sensors. In particular, we discuss background modeling techniques, object tracking, and implementation techniques that pertain to the overall development of this system.
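
    The background modeling step mentioned above can be prototyped with standard tools. The sketch below (assuming OpenCV and a hypothetical input file surveillance.mp4; an illustration, not the system described in the paper) subtracts a learned background model from each frame to isolate moving regions before any access-control logic would be applied:

        import cv2

        # Learn a per-pixel Gaussian-mixture background model (MOG2) and flag
        # foreground (moving) regions in each incoming frame.
        cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input file
        subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)           # 0 = background, 255 = foreground
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                    cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            for c in contours:
                if cv2.contourArea(c) > 500:         # ignore small noise blobs
                    x, y, w, h = cv2.boundingRect(c)
                    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cap.release()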

  9. Online scene change detection of multicast (MBone) video

    Science.gov (United States)

    Zhou, Wensheng; Shen, Ye; Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

    Many multimedia applications, such as multimedia data management systems and communication systems, require efficient representation of multimedia content. Semantic interpretation of video content has therefore been a popular research area. Currently, most content-based video representation involves segmenting video into key frames, which are generated using scene change detection techniques as well as camera/object motion; video features can then be extracted from the key frames. However, most such research performs off-line video processing, in which the whole video is known a priori, allowing multiple scans of the stored video files during processing. In comparison, relatively little research has been done on on-line video processing, which is crucial in video communication applications such as on-line collaboration and news broadcasts. Our research investigates on-line real-time scene change detection of multicast video over the Internet. Our on-line processing system is designed to meet the requirements of real-time video multicasting over the Internet and to utilize the successful video parsing techniques available today. The proposed algorithms extract key frames from video bitstreams sent through the MBone network, and the extracted key frames are multicast as annotations or metadata over a separate channel to assist in content filtering, such as that anticipated to be in use by on-line filtering proxies on the Internet. The performance of the proposed algorithms is demonstrated and discussed in this paper.
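
    As a rough illustration of on-line scene change detection (a simplification of the general idea, not the authors' MBone algorithm), the sketch below compares color histograms of successive frames as they arrive and records a key frame whenever the difference exceeds a threshold:

        import cv2

        def hist(frame):
            # Coarse HSV color histogram, normalized so the metric is
            # independent of frame size.
            h = cv2.calcHist([cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)], [0, 1], None,
                             [16, 8], [0, 180, 0, 256])
            return cv2.normalize(h, h).flatten()

        cap = cv2.VideoCapture("stream.ts")   # hypothetical captured multicast stream
        prev, key_frames, idx = None, [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cur = hist(frame)
            # Chi-square distance between consecutive histograms; 0 = identical.
            if prev is None or cv2.compareHist(prev, cur, cv2.HISTCMP_CHISQR) > 1.0:
                key_frames.append(idx)   # first frame of a new scene is a key frame
            prev, idx = cur, idx + 1
        cap.release()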

  10. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  11. The canopy camera

    Science.gov (United States)

    Harry E. Brown

    1962-01-01

    The canopy camera is a device of new design that takes wide-angle, overhead photographs of vegetation canopies, cloud cover, topographic horizons, and similar subjects. Since the entire hemisphere is photographed in a single exposure, the resulting photograph is circular, with the horizon forming the perimeter and the zenith the center. Photographs of this type provide...

  12. Differential geometry measures of nonlinearity for the video tracking problem

    Science.gov (United States)

    Mallick, Mahendra; La Scala, Barbara F.

    2006-05-01

    Tracking people and vehicles in an urban environment using video cameras onboard unmanned aerial vehicles has drawn a great deal of interest in recent years due to their low cost compared with expensive radar systems. Video cameras onboard a number of small UAVs can provide inexpensive, effective, and highly flexible airborne intelligence, surveillance, and reconnaissance, as well as situational awareness functions. The perspective transformation is a commonly used general measurement model for the video camera when the variation in terrain height in the object scene is not negligible and the distance between the camera and the scene is not large. The perspective transformation is a nonlinear function of the object position. Most video tracking applications use a nearly constant velocity model (NCVM) of the target in the local horizontal plane. The filtering problem is nonlinear due to nonlinearity in the measurement model. In this paper, we present algorithms for quantifying the degree of nonlinearity (DoN) by calculating the differential-geometry-based parameter-effects curvature and intrinsic curvature measures of nonlinearity for the video tracking problem. We use the constant velocity model (CVM) of a target in 2D with simulated video measurements in the image plane. We have presented preliminary results using 200 Monte Carlo simulations, and future work will focus on detailed numerical results. Our results for the chosen video tracking problem indicate that the DoN is low and, therefore, we expect the extended Kalman filter to be a reasonable choice.
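
    For concreteness, the sketch below writes out the kind of pinhole perspective measurement model the abstract refers to, mapping a 3D position into image-plane coordinates, together with the analytic Jacobian an extended Kalman filter would linearize around; the focal length and state layout are illustrative assumptions, not values from the paper:

        import numpy as np

        def perspective_measurement(x, f=1000.0):
            # Pinhole projection: 3D position (camera coordinates) -> 2D pixels.
            # x : state vector whose first three entries are (X, Y, Z).
            # f : focal length in pixels (illustrative value).
            # The division by depth Z is what makes the model nonlinear.
            X, Y, Z = x[0], x[1], x[2]
            return np.array([f * X / Z, f * Y / Z])

        def measurement_jacobian(x, f=1000.0):
            # Analytic Jacobian of the projection with respect to (X, Y, Z),
            # as an extended Kalman filter would evaluate at the predicted state.
            X, Y, Z = x[0], x[1], x[2]
            return np.array([[f / Z, 0.0,  -f * X / Z**2],
                             [0.0,  f / Z, -f * Y / Z**2]])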

  13. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real World Videos...

  14. CCD Camera Detection of HIV Infection.

    Science.gov (United States)

    Day, John R

    2017-01-01

    Rapid and precise quantification of the infectivity of HIV is important for molecular virologic studies, as well as for measuring the activities of antiviral drugs and neutralizing antibodies. An indicator cell line, a CCD camera, and image-analysis software are used to quantify HIV infectivity. The cells of the P4R5 line, which express the receptors for HIV infection as well as β-galactosidase under the control of the HIV-1 long terminal repeat, are infected with HIV and then incubated 2 days later with X-gal to stain the infected cells blue. Digital images of monolayers of the infected cells are captured using a high-resolution CCD video camera and a macro video zoom lens. A software program is developed to process the images and to count the blue-stained foci of infection. The described method allows for rapid quantification of infected cells over a wide range of viral inocula, with reproducibility and accuracy, at relatively low cost.
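
    The focus-counting step lends itself to a simple image-analysis sketch. The version below (an illustrative reimplementation, not the authors' software) thresholds the blue-stained regions of a captured image and counts connected components:

        import cv2
        import numpy as np
        from scipy import ndimage

        img = cv2.imread("monolayer.png").astype(float)   # hypothetical CCD image (BGR)
        # Blue-stained foci: blue channel high relative to the mean of green/red.
        blue_excess = img[..., 0] - img[..., 1:].mean(axis=2)
        mask = blue_excess > 40                            # illustrative stain threshold

        # Remove single-pixel specks, then count connected blue regions as foci.
        mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
        labels, n_foci = ndimage.label(mask)
        print(f"{n_foci} infected-cell foci detected")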

  15. NOAA Polyline Shapefile - Drop Camera transects, US Caribbean – Virgin Passage and St. John Shelf - Project NF-03-10-USVI-HAB - (2010), UTM 20N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  16. -NOAA Shapefile - Drop Camera Transects Lines, USVI 2011 , Seafloor Characterization of the US Caribbean - Nancy Foster - NF-11-1 (2011), UTM 20N NAD83 (NCEI Accession 0131858)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  17. NOAA Polyline Shapefile - Drop Camera transects, US Caribbean – Virgin Passage and St. John Shelf - Project NF-03-10-USVI-HAB - (2010), UTM 20N NAD83 (NCEI Accession 0131854)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  18. Control of Wall Mounting Robot

    DEFF Research Database (Denmark)

    Sloth, Christoffer; Pedersen, Rasmus

    2017-01-01

    This paper presents a method for designing controllers for trajectory tracking with actuator constraints. In particular, we consider a joystick-controlled wall mounting robot called WallMo. In contrast to previous works, a model-free approach is taken to the control problem, where the path...

  19. Mount Rainier active cascade volcano

    Science.gov (United States)

    1994-01-01

    Mount Rainier is one of about two dozen active or recently active volcanoes in the Cascade Range, an arc of volcanoes in the northwestern United States and Canada. The volcano is located about 35 kilometers southeast of the Seattle-Tacoma metropolitan area, which has a population of more than 2.5 million. This metropolitan area is the high technology industrial center of the Pacific Northwest and one of the commercial aircraft manufacturing centers of the United States. The rivers draining the volcano empty into Puget Sound, which has two major shipping ports, and into the Columbia River, a major shipping lane and home to approximately a million people in southwestern Washington and northwestern Oregon. Mount Rainier is an active volcano. It last erupted approximately 150 years ago, and numerous large floods and debris flows have been generated on its slopes during this century. More than 100,000 people live on the extensive mudflow deposits that have filled the rivers and valleys draining the volcano during the past 10,000 years. A major volcanic eruption or debris flow could kill thousands of residents and cripple the economy of the Pacific Northwest. Despite the potential for such danger, Mount Rainier has received little study. Most of the geologic work on Mount Rainier was done more than two decades ago. Fundamental topics such as the development, history, and stability of the volcano are poorly understood.

  20. Mounting power cables on SOLEIL

    CERN Multimedia

    Laurent Guiraud

    1999-01-01

    The power couplers are mounted on the SOLEIL cryomodule in a clean room. The cryomodule will allow superconducting technology to be used at SOLEIL, the French national synchrotron facility. This work is carried out as part of a collaboration between CERN and CEA Saclay, the French National Atomic Energy Commission.

  1. MISR radiometric camera-by-camera Cloud Mask V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset. It is used to determine whether a scene is classified as clear or cloudy. A new parameter has...

  2. Analysis of Camera Arrays Applicable to the Internet of Things.

    Science.gov (United States)

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-03-22

    The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and helps with viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Although different kinds of camera arrays are used and analyzed in various applications and research works, there are few direct comparisons between them. We therefore make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used as either a parallel or a converged camera array, and we take images and videos with it to identify the threshold.
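
    To make the parallel-versus-converged comparison concrete, the sketch below computes the horizontal disparity of an on-axis point for both configurations from textbook stereo geometry; the baseline, focal length, and convergence distance are illustrative assumptions rather than the paper's parameters:

        import numpy as np

        def disparity_parallel(Z, baseline=0.1, f=0.05):
            # Parallel cameras: disparity = f * B / Z (standard stereo geometry).
            return f * baseline / Z

        def disparity_converged(Z, baseline=0.1, f=0.05, Zc=7.0):
            # Cameras toed in so their axes cross at distance Zc: a point at
            # Z = Zc has zero disparity; nearer/farther points get signed disparity.
            half = baseline / 2.0
            toe_in = np.arctan(half / Zc)               # toe-in angle of each camera
            offset = np.arctan(half / Z) - toe_in       # bearing offset in the left image
            return 2.0 * f * np.tan(offset)             # right image is symmetric

        for Z in (2.0, 7.0, 20.0):
            print(Z, disparity_parallel(Z), disparity_converged(Z))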

  3. A multiple camera tongue switch for a child with severe spastic quadriplegic cerebral palsy.

    Science.gov (United States)

    Leung, Brian; Chau, Tom

    2010-01-01

    The present study proposed a video-based access technology that facilitated a non-contact tongue protrusion access modality for a 7-year-old boy with severe spastic quadriplegic cerebral palsy (GMFCS level 5). The proposed system featured a centre camera and two peripheral cameras to extend coverage of the frontal face view of this user for longer durations. The child participated in a descriptive case study. The participant underwent 3 months of tongue protrusion training while the multiple camera tongue switch prototype was being prepared. Later, the participant was brought back for five experiment sessions where he worked on a single-switch picture matching activity, using the multiple camera tongue switch prototype in a controlled environment. The multiple camera tongue switch achieved an average sensitivity of 82% and specificity of 80%. In three of the experiment sessions, the peripheral cameras were associated with most of the true positive switch activations. These activations would have been missed by a centre-camera-only setup. The study demonstrated proof-of-concept of a non-contact tongue access modality implemented by a video-based system involving three cameras and colour video processing.

  4. Goniometer to calibrate system cameras or amateur cameras

    Science.gov (United States)

    Hakkarainen, J.

    An accurate and rapid horizontal goniometer was developed to determine the optical properties of film cameras. Radial and decentering distortion, color defects, optical resolution, and small-object transmission factors are measured according to light wavelength and symmetry. The goniometer can be used to calibrate cameras for photogrammetry, to determine the effects of object distance on image geometry and distortion symmetry, to assess the efficiency of lens-lighting-film systems, to develop quality criteria for lenses, and to test lenses and cameras for defects after an incident.

  5. Tracking camera control in endoscopic dacryocystorhinostomy surgery.

    Science.gov (United States)

    Wawrzynski, J R; Smith, P; Tang, L; Hoare, T; Caputo, S; Siddiqui, A A; Tsatsos, M; Saleh, G M

    2015-12-01

    Poor camera control during endoscopic dacryocystorhinostomy (EnDCR) surgery can cause inadequate visualisation of the anatomy and suboptimal surgical outcomes. This study investigates the feasibility of using computer vision tracking in EnDCR surgery as a potential formative feedback tool for the quality of endoscope control. A prospective cohort analysis was undertaken comparing junior versus senior surgeons performing routine EnDCR surgery in theatre. Computer vision tracking was applied to endoscopic video footage of the surgery: the total number of movements, the camera path length in pixels, and the surgical time were determined for each procedure. A Mann-Whitney U-test was used to test for significant differences between juniors and seniors (P < 0.05). The main outcome measures were the total number of movements of the endoscope per procedure and the path length of the endoscope per procedure. Twenty videos, 10 from junior surgeons and 10 from senior surgeons, were analysed, and the feasibility of our tracking system was demonstrated. Mean camera path lengths were significantly different at 119,329 px (juniors) versus 43,697 px (seniors), P < 0.05. The mean number of movements was also significantly different at 9134 (juniors) versus 3690 (seniors), P < 0.05. These quantifiable differences demonstrate construct validity for computer vision endoscope tracking as a measure of surgical experience. Computer vision tracking is a potentially useful structured and objective feedback tool to assist trainees in improving endoscope control. It enables juniors to examine how their pattern of endoscope control differs from that of seniors, focusing in particular on the sections where they are most divergent. © 2015 John Wiley & Sons Ltd.
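
    Both tracking metrics are straightforward to compute once per-frame camera motion is available. The sketch below is an illustrative reconstruction in which the motion-estimation step is reduced to OpenCV phase correlation between consecutive grayscale frames (the study does not publish its tracking code); it accumulates path length in pixels and counts movements above a small threshold:

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("endcr_case.mp4")   # hypothetical endoscope footage
        ok, prev = cap.read()
        prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY).astype(np.float32)

        path_px, movements = 0.0, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
            (dx, dy), _ = cv2.phaseCorrelate(prev, gray)   # global inter-frame shift
            step = np.hypot(dx, dy)
            path_px += step                                # cumulative path length (px)
            if step > 1.0:                                 # threshold out sensor jitter
                movements += 1
            prev = gray
        print(f"path length: {path_px:.0f} px, movements: {movements}")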

  6. Olympic Coast National Marine Sanctuary - stil120_0602a - Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during September 2006. Video data...

  7. still116_0501n-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  8. still116_0501d-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  9. still116_0501c-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  10. still116_0501s-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  11. still114_0402c-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  12. still115_0403-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  13. still114_0402b-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  14. Neurosurgical Skills Assessment: Measuring Technical Proficiency in Neurosurgery Residents Through Intraoperative Video Evaluations.

    Science.gov (United States)

    Sarkiss, Christopher A; Philemond, Steven; Lee, James; Sobotka, Stanislaw; Holloway, Terrell D; Moore, Maximillian M; Costa, Anthony B; Gordon, Errol L; Bederson, Joshua B

    2016-05-01

    Although technical skills are fundamental in neurosurgery, there is little agreement on how to describe, measure, or compare skills among surgeons. The primary goal of this study was to develop a quantitative grading scale for technical surgical performance that distinguishes operator skill when graded by domain experts (residents, attendings, and nonsurgeons). Scores provided by raters should be highly reliable with respect to scores from other observers. Neurosurgery residents were fitted with a head-mounted video camera while performing craniotomies under attending supervision. Seven videos, 1 from each postgraduate year (PGY) level (1-7), were anonymized and scored by 16 attendings, 8 residents, and 7 nonsurgeons using a grading scale. Seven skills were graded: incision, efficiency of instrument use, cauterization, tissue handling, drilling/craniotomy, confidence, and training level. A strong correlation was found between skills score and PGY year. Technical skills of neurosurgery residents recorded during craniotomy can be measured with high interrater reliability. Surgeons and nonsurgeons alike readily distinguish different skill levels. This type of assessment could be used to coach residents, to track performance over time, and potentially to compare skill levels. Developing an objective tool to evaluate surgical performance would be useful in several areas of neurosurgery education. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on Visual Studio 2010. Experimental results show that the system realizes acquisition and display for both cameras.
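
    The coordinate transformation step amounts to resampling the source image at non-integer positions. The sketch below (a generic bilinear sub-pixel sampler, not the authors' code) shows the interpolation such a remapping needs for an arbitrary pixel layout; the to_source mapping is a hypothetical placeholder for the retina-like coordinate transform:

        import numpy as np

        def sample_bilinear(img, x, y):
            # Sample a grayscale image at a non-integer (x, y) position by
            # blending the four surrounding pixels, weighted by the fractional
            # parts of the coordinates.
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            fx, fy = x - x0, y - y0
            x1 = min(x0 + 1, img.shape[1] - 1)
            y1 = min(y0 + 1, img.shape[0] - 1)
            top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
            bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
            return (1 - fy) * top + fy * bottom

        def remap(src, shape, to_source):
            # Fill every target pixel by sampling the source image at the
            # position given by the (hypothetical) coordinate transform.
            out = np.empty(shape, dtype=float)
            for j in range(shape[0]):
                for i in range(shape[1]):
                    sx, sy = to_source(i, j)
                    out[j, i] = sample_bilinear(src, sx, sy)
            return out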

  16. Recording gaze trajectory of wheelchair users by a spherical camera.

    Science.gov (United States)

    Li, Shigang; Fujiura, Tatsuya; Nakanishi, Isao

    2017-07-01

    Wheelchairs are widely used in rehabilitation facilities. In this paper, we propose a method of recording the gaze trajectory of wheelchair users by using a spherical camera mounted on the wheelchair. A spherical camera has a full field of view and can observe the entire surrounding scene. First, the gaze point of a user sitting in the wheelchair is estimated from the corneal reflection image observed by a wearable eye camera. Then, the gaze point is mapped onto the full-view image captured by the spherical camera via feature matching. Since a gaze point in an eye image is not guaranteed to be a distinctive feature point, the matching of a gaze point between these two images cannot be carried out directly. To cope with this problem, we use a coarse-to-fine approach in which distinctive feature points are first used to estimate the relative orientation between the eye camera and the spherical camera, and the estimated relative orientation matrix is then used to determine the location of gaze points. The effectiveness of the proposed method is shown by real-world experimental results.
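
    The coarse stage of that pipeline, matching distinctive feature points between the two views, can be sketched with standard tools. The snippet below is an illustrative stand-in using OpenCV ORB features (the paper does not specify its feature detector); the matched coordinates would then feed a relative-orientation estimate:

        import cv2

        eye = cv2.imread("eye_scene.png", cv2.IMREAD_GRAYSCALE)       # hypothetical images
        sphere = cv2.imread("spherical_view.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(eye, None)
        kp2, des2 = orb.detectAndCompute(sphere, None)

        # Brute-force Hamming matching with cross-checking for reliable pairs.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

        # The best correspondences feed the relative-orientation estimate
        # (e.g. via cv2.findEssentialMat on the matched point coordinates).
        pts1 = [kp1[m.queryIdx].pt for m in matches[:100]]
        pts2 = [kp2[m.trainIdx].pt for m in matches[:100]]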

  17. Design of video interface conversion system based on FPGA

    Science.gov (United States)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by the video processing unit through a Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with a minimal board size.
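
    Of the processing stages listed, color space conversion is the easiest to make concrete. The sketch below applies the standard ITU-R BT.601 RGB-to-YCbCr transform, a common choice for CVBS-derived video, although the paper does not state which matrix its FPGA implements:

        import numpy as np

        def rgb_to_ycbcr_bt601(rgb):
            # Full-range BT.601 RGB -> YCbCr; rgb is a float array in [0, 255].
            m = np.array([[ 0.299,     0.587,     0.114   ],   # Y
                          [-0.168736, -0.331264,  0.5     ],   # Cb
                          [ 0.5,      -0.418688, -0.081312]])  # Cr
            ycbcr = rgb @ m.T
            ycbcr[..., 1:] += 128.0            # center the chroma channels at 128
            return ycbcr

        pixel = np.array([200.0, 30.0, 60.0])  # an example RGB pixel
        print(rgb_to_ycbcr_bt601(pixel))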

  18. Managed Video as a Service for a Video Surveillance Model

    Directory of Open Access Journals (Sweden)

    Dan Benta

    2009-01-01

    Full Text Available. The increasing demand for security systems has resulted in the rapid development of video surveillance, and video surveillance has turned into a major area of interest and a management challenge. Personal experience in specialized companies helped me to adapt the demands of users of video security systems to system performance. It is known that people wish to obtain maximum profit with minimum effort, but security is not neglected. Surveillance systems and video monitoring should provide only necessary information and record only when there is activity. IP video surveillance services provide more safety in this sector, being able to record information on servers located in locations other than the IP cameras. These systems also allow real-time monitoring of goods or activities that take place in supervised perimeters. Live viewing and recordings can be accessed via the Internet from any computer, using a web browser. Access to the surveillance system is granted after user and password authentication.

  19. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer.

    Science.gov (United States)

    Shen, Bailey Y; Mukai, Shizuo

    2017-01-01

    Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133mm × 91mm × 45mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.
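
    As a flavor of how little software such a Raspberry Pi based device needs for capture, the sketch below uses the legacy picamera library to grab a still frame while a GPIO pin drives the infrared LED; the pin number and the library choice are assumptions, since the paper does not publish its code:

        import time
        import RPi.GPIO as GPIO          # available on Raspberry Pi OS
        from picamera import PiCamera    # legacy camera stack (pre-libcamera)

        IR_LED_PIN = 18                  # hypothetical GPIO pin wired to the IR LED

        GPIO.setmode(GPIO.BCM)
        GPIO.setup(IR_LED_PIN, GPIO.OUT)

        camera = PiCamera(resolution=(1640, 1232))
        GPIO.output(IR_LED_PIN, GPIO.HIGH)   # IR illumination avoids pupil constriction
        time.sleep(2)                        # let exposure and gain settle
        camera.capture("fundus_ir.jpg")      # alignment/focus frame under IR
        GPIO.output(IR_LED_PIN, GPIO.LOW)
        camera.close()
        GPIO.cleanup()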

  20. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer

    Directory of Open Access Journals (Sweden)

    Bailey Y. Shen

    2017-01-01

    Full Text Available Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133mm×91mm×45mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.

  1. The NEAT Camera Project

    Science.gov (United States)

    Jr., Ray L. Newburn

    1995-01-01

    The NEAT (Near Earth Asteroid Tracking) camera system consists of a camera head with a 6.3 cm square 4096 x 4096 pixel CCD, fast electronics, and a Sun Sparc 20 data and control computer with dual CPUs, 256 Mbytes of memory, and 36 Gbytes of hard disk. The system was designed for optimum use with an Air Force GEODSS (Ground-based Electro-Optical Deep Space Surveillance) telescope. The GEODSS telescopes have 1 m f/2.15 objectives of the Ritchey-Chrétien type, designed originally for satellite tracking. Installation of NEAT began July 25 at the Air Force Facility on Haleakala, a 3000 m peak on Maui in Hawaii.

  2. Estimation of temporal parameters during sprint running using a trunk-mounted inertial measurement unit.

    Science.gov (United States)

    Bergamini, Elena; Picerno, Pietro; Pillet, Hélène; Natta, Françoise; Thoreux, Patricia; Camomilla, Valentina

    2012-04-05

    The purpose of this study was to identify consistent features in the signals supplied by a single inertial measurement unit (IMU), or thereof derived, for the identification of foot-strike and foot-off instants of time and for the estimation of stance and stride duration during the maintenance phase of sprint running. Maximal sprint runs were performed on tartan tracks by five amateur and six elite athletes, and durations derived from the IMU data were validated using force platforms and a high-speed video camera, respectively, for the two groups. The IMU was positioned on the lower back trunk (L1 level) of each athlete. The magnitudes of the acceleration and angular velocity vectors measured by the IMU, as well as their wavelet-mediated first and second derivatives were computed, and features related to foot-strike and foot-off events sought. No consistent features were found on the acceleration signal or on its first and second derivatives. Conversely, the foot-strike and foot-off events could be identified from features exhibited by the second derivative of the angular velocity magnitude. An average absolute difference of 0.005 s was found between IMU and reference estimates, for both stance and stride duration and for both amateur and elite athletes. The 95% limits of agreement of this difference were less than 0.025 s. The results proved that a single, trunk-mounted IMU is suitable to estimate stance and stride duration during sprint running, providing the opportunity to collect information in the field, without constraining or limiting athletes' and coaches' activities. Copyright © 2012 Elsevier Ltd. All rights reserved.
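
    A minimal sketch of the detection idea, under simplifying assumptions of my own (plain numerical differentiation of the angular velocity magnitude and scipy peak-picking in place of the paper's wavelet-mediated derivatives), is shown below:

        import numpy as np
        from scipy.signal import find_peaks

        fs = 200.0                                   # sampling rate in Hz (illustrative)
        gyro = np.load("trunk_gyro.npy")             # hypothetical (N, 3) IMU recording
        omega = np.linalg.norm(gyro, axis=1)         # angular velocity magnitude

        # Second derivative of the magnitude; foot events appear as sharp peaks.
        d2 = np.gradient(np.gradient(omega, 1.0 / fs), 1.0 / fs)

        peaks, _ = find_peaks(d2, height=2 * np.std(d2), distance=int(0.15 * fs))
        event_times = peaks / fs
        # Alternating events give stance (foot-strike to foot-off) and stride
        # (foot-strike to next ipsilateral foot-strike) durations.
        durations = np.diff(event_times)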

  3. The use of head-mounted display eyeglasses for teaching surgical skills: A prospective randomised study.

    Science.gov (United States)

    Peden, Robert G; Mercer, Rachel; Tatham, Andrew J

    2016-10-01

    To investigate whether 'surgeon's eye view' videos provided via head-mounted displays can improve skill acquisition and satisfaction in basic surgical training compared with conventional wet-lab teaching. A prospective randomised study of 14 medical students with no prior suturing experience, randomised to 3 groups: 1) conventional teaching; 2) head-mounted display-assisted teaching; and 3) head-mounted display self-learning. All were instructed in interrupted suturing, followed by 15 minutes' practice. Head-mounted displays provided a 'surgeon's eye view' video demonstrating the technique, available during practice. Subsequently, students undertook a practical assessment, in which suturing was videoed and graded by masked assessors using a 10-point surgical skill score (1 = very poor technique, 10 = very good technique). Students completed a questionnaire assessing confidence and satisfaction. Suturing ability after teaching was similar between groups (P = 0.229, Kruskal-Wallis test). Median surgical skill scores were 7.5 (range 6-10), 6 (range 3-8), and 7 (range 1-7) following head-mounted display-assisted teaching, conventional teaching, and head-mounted display self-learning, respectively. There was good agreement between graders regarding surgical skill scores (rho.c = 0.599, r = 0.603), and no difference in the number of sutures placed between groups (P = 0.120). The head-mounted display-assisted teaching group reported greater enjoyment than those attending conventional teaching (P = 0.033). Head-mounted display self-learning was regarded as least useful (7.4 vs 9.0 for conventional teaching, P = 0.021), but more enjoyable than conventional teaching (9.6 vs 8.0, P = 0.050). Teaching augmented with head-mounted displays was significantly more enjoyable than conventional teaching. Students undertaking self-directed learning using head-mounted displays with pre-recorded videos had comparable skill acquisition to those attending traditional wet-lab teaching.

  4. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real World Videos. The topics covered in the papers include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition...

  6. Streak camera techniques

    Energy Technology Data Exchange (ETDEWEB)

    Avara, R.

    1977-06-01

    An introduction to streak camera geometry, experimental techniques, and limitations is presented. Equations, graphs, and charts are included to provide useful data for optimizing the associated optics to suit each experiment. A simulated analysis is performed on simultaneity and velocity measurements. An error analysis is also performed for these measurements, utilizing the Monte Carlo method to simulate the distribution of uncertainties associated with simultaneity-time measurements.
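
    The Monte Carlo error analysis mentioned can be illustrated in a few lines. The sketch below (with illustrative uncertainty magnitudes, not the report's instrument parameters) propagates random jitter in two streak-record arrival times into the distribution of the inferred simultaneity difference:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000                      # Monte Carlo trials

        # Assumed 1-sigma uncertainties (in ns) on reading each streak trace.
        sigma_read = 0.05                # film-reading uncertainty per channel
        sigma_sweep = 0.03               # sweep-rate calibration uncertainty

        t1 = rng.normal(0.0, sigma_read, n) + rng.normal(0.0, sigma_sweep, n)
        t2 = rng.normal(0.0, sigma_read, n) + rng.normal(0.0, sigma_sweep, n)
        dt = t2 - t1                     # simultaneity difference per trial

        print(f"std of simultaneity estimate: {dt.std():.3f} ns")
        print(f"95% interval: ({np.percentile(dt, 2.5):.3f}, "
              f"{np.percentile(dt, 97.5):.3f}) ns")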

  7. TEM Video Compressive Sensing

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-01

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès' work, there has been enormous growth in the application of CS and the development of CS variants. For electron microscopy applications, the concept of CS has also recently been applied to electron tomography [6] and to the reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic-level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental conditions.
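
    A toy simulation of the coded aperture measurement model, under the simplest possible assumptions (random binary per-pixel codes; the statistical inversion itself is left to a CS solver and not shown), makes the sub-frame coding concrete:

        import numpy as np

        rng = np.random.default_rng(1)
        T, H, W = 8, 64, 64              # 8 sub-frames coded into one camera frame

        video = rng.random((T, H, W))    # stand-in for the dynamic scene
        codes = rng.integers(0, 2, size=(T, H, W)).astype(float)  # binary aperture codes

        # The camera integrates the per-sub-frame coded images into one readout:
        #   y = sum_t  C_t * x_t      (elementwise mask, then temporal sum)
        measurement = (codes * video).sum(axis=0)

        # Reconstruction then inverts this underdetermined system with a sparsity
        # prior, as in the CS video literature cited above.
        print(measurement.shape)   # one (H, W) frame encoding T sub-frames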

  8. Gamma ray camera

    Science.gov (United States)

    Perez-Mendez, Victor

    1997-01-01

    A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a scintillation crystal for converting incident gamma rays into a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer, and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer, and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.

  9. The DRAGO gamma camera.

    Science.gov (United States)

    Fiorini, C; Gola, A; Peloso, R; Longoni, A; Lechner, P; Soltau, H; Strüder, L; Ottobrini, L; Martelli, C; Lui, R; Madaschi, L; Belloli, S

    2010-04-01

    In this work, we present the results of the experimental characterization of the DRAGO (DRift detector Array-based Gamma camera for Oncology), a detection system developed for high-spatial resolution gamma-ray imaging. This camera is based on a monolithic array of 77 silicon drift detectors (SDDs), with a total active area of 6.7 cm(2), coupled to a single 5-mm-thick CsI(Tl) scintillator crystal. The use of an array of SDDs provides a high quantum efficiency for the detection of the scintillation light together with a very low electronics noise. A very compact detection module based on the use of integrated readout circuits was developed. The performances achieved in gamma-ray imaging using this camera are reported here. When imaging a 0.2 mm collimated (57)Co source (122 keV) over different points of the active area, a spatial resolution ranging from 0.25 to 0.5 mm was measured. The depth-of-interaction capability of the detector, thanks to the use of a Maximum Likelihood reconstruction algorithm, was also investigated by imaging a collimated beam tilted to an angle of 45 degrees with respect to the scintillator surface. Finally, the imager was characterized with in vivo measurements on mice, in a real preclinical environment.

  10. Passive imaging of wind surface flow using an infrared camera

    Science.gov (United States)

    Hagen, Nathan

    2017-12-01

    We present a method for passive imaging of wind motion against surfaces in a scene using an infrared video camera. Because the method does not require the introduction of contrast agents for visualization, it is possible to obtain real-time surface flow measurements across large areas and in natural outdoor conditions, without prior preparation of surfaces. We show that this method can be used not just for obtaining single snapshot images but also for real-time flow video, and demonstrate that it is possible to measure under a wide range of conditions.

  11. Efficient Stereo Image Geometrical Reconstruction at Arbitrary Camera Settings from a Single Calibration

    OpenAIRE

    Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Paulsen, Keith D.

    2014-01-01

    Camera calibration is central to obtaining a quantitative image-to-physical-space mapping from stereo images acquired in the operating room (OR). A practical challenge for cameras mounted to the operating microscope is maintenance of image calibration as the surgeon’s field-of-view is repeatedly changed (in terms of zoom and focal settings) throughout a procedure. Here, we present an efficient method for sustaining a quantitative image-to-physical space relationship for arbitrary image acquis...

  12. Bring your own camera to the trap: An inexpensive, versatile, and portable triggering system tested on wild hummingbirds.

    Science.gov (United States)

    Rico-Guevara, Alejandro; Mickley, James

    2017-07-01

    The study of animals in the wild offers opportunities to collect relevant information on their natural behavior and abilities to perform ecologically relevant tasks. However, it also poses challenges such as accounting for observer effects, human sensory limitations, and the time intensiveness of this type of research. To meet these challenges, field biologists have deployed camera traps to remotely record animal behavior in the wild. Despite their ubiquity in research, many commercial camera traps have limitations, and the species and behavior of interest may present unique challenges. For example, no camera traps support high-speed video recording. We present a new and inexpensive camera trap system that increases versatility by separating the camera from the triggering mechanism. Our system design can pair with virtually any camera and allows for independent positioning of a variety of sensors, all while being low-cost, lightweight, weatherproof, and energy efficient. By using our specialized trigger and customized sensor configurations, many limitations of commercial camera traps can be overcome. We use this system to study hummingbird feeding behavior using high-speed video cameras to capture fast movements and multiple sensors placed away from the camera to detect small body sizes. While designed for hummingbirds, our application can be extended to any system where specialized camera or sensor features are required, or commercial camera traps are cost-prohibitive, allowing camera trap use in more research avenues and by more researchers.
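
    To give a sense of how a decoupled trigger can drive an arbitrary camera, the sketch below fires a camera's remote-shutter line whenever an infrared break-beam is interrupted. The wiring, pin numbers, and choice of a Raspberry Pi are hypothetical illustrations, not necessarily how the authors built their system:

        import time
        import RPi.GPIO as GPIO

        BEAM_PIN = 17        # hypothetical input from an IR break-beam receiver
        SHUTTER_PIN = 27     # hypothetical output wired to the camera's remote shutter

        GPIO.setmode(GPIO.BCM)
        GPIO.setup(BEAM_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
        GPIO.setup(SHUTTER_PIN, GPIO.OUT, initial=GPIO.LOW)

        try:
            while True:
                # Beam broken -> input pulled low -> pulse the shutter contact.
                if GPIO.input(BEAM_PIN) == GPIO.LOW:
                    GPIO.output(SHUTTER_PIN, GPIO.HIGH)
                    time.sleep(0.1)                     # contact-closure pulse
                    GPIO.output(SHUTTER_PIN, GPIO.LOW)
                    time.sleep(1.0)                     # debounce / re-trigger delay
                time.sleep(0.005)
        finally:
            GPIO.cleanup()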

  13. Tension pneumocephalus: Mount Fuji sign

    Directory of Open Access Journals (Sweden)

    Pulastya Sanyal

    2015-01-01

    Full Text Available. A 13-year-old male was operated on for a space-occupying lesion in the brain. A noncontrast computed tomography scan done in the late postoperative period showed massive subdural air collection causing compression of the bilateral frontal lobes, with widening of the interhemispheric fissure and the frontal lobes acquiring a peak-like configuration, causing tension pneumocephalus: the "Mount Fuji sign." Tension pneumocephalus occurs when air enters the extradural or intradural spaces in sufficient volume to exert a mass or pressure effect on the brain, leading to brain herniation. Tension pneumocephalus is a surgical emergency that needs immediate intervention in the form of decompression of the cranial cavity by a burr hole or needle aspiration. The Mount Fuji sign differentiates tension pneumocephalus from pneumocephalus.

  14. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera currently being tested, so as to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Designer to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options for camera placement may be tested along with other future suit testing. Multiple teams work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary, and these comparisons will be used as further progress is made on the overall suit design. This prototype will not be finished in time for the scheduled Z2 suit testing, so my time was

  15. CHAMP (Camera, Handlens, and Microscope Probe)

    Science.gov (United States)

    Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.

    2005-01-01

    CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution, from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.
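
    The z-stacking step can be sketched simply: for each pixel, keep the frame of a focus stack where local sharpness (here, the magnitude of a Laplacian response) is highest. This is a generic focus-stacking illustration, not the CHAMP flight software:

        import numpy as np
        from scipy import ndimage

        def z_stack(stack):
            # Fuse a focus stack (N, H, W, grayscale) into one all-in-focus image.
            # Per-frame sharpness map: absolute Laplacian response, lightly smoothed
            # so isolated noisy pixels do not win the per-pixel vote.
            sharpness = np.stack([
                ndimage.gaussian_filter(np.abs(ndimage.laplace(frame)), sigma=2)
                for frame in stack
            ])
            best = np.argmax(sharpness, axis=0)      # (H, W) index of sharpest frame
            rows, cols = np.indices(best.shape)
            return stack[best, rows, cols]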

  16. CHAMP - Camera, Handlens, and Microscope Probe

    Science.gov (United States)

    Mungas, G. S.; Beegle, L. W.; Boynton, J.; Sepulveda, C. A.; Balzer, M. A.; Sobel, H. R.; Fisher, T. A.; Deans, M.; Lee, P.

    2005-01-01

    CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution, from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As an arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision range-finding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP is currently designed with a filter wheel holding 4 different filters, so that color and black-and-white images can be obtained over the entire field of view; future designs will increase the number of filter positions to include 8 different filters. Finally, CHAMP incorporates controlled white and UV illumination so that images can be obtained regardless of sun position, and any potentially fluorescent species can be identified so that the most astrobiologically interesting samples can be selected.

  17. SHIP CLASSIFICATION FROM MULTISPECTRAL VIDEOS

    Directory of Open Access Journals (Sweden)

    Frederique Robert-Inacio

    2012-05-01

    Full Text Available Surveillance of a seaport can be achieved by different means: radar, sonar, cameras, radio communications and so on. Such surveillance aims, on the one hand, to manage cargo and tanker traffic, and, on the other hand, to prevent terrorist attacks in sensitive areas. In this paper an application to video surveillance of a seaport entrance is presented, and more particularly the different steps enabling mobile shapes to be classified. This classification is based on a parameter measuring the degree of similarity between the shape under study and a set of reference shapes. The classification result describes the considered mobile in terms of shape and speed.
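
    A similarity-degree classifier of this kind can be sketched with Hu moment invariants: compare the segmented mobile's silhouette against each reference shape and pick the most similar. This is an illustrative stand-in with placeholder mask files; the paper's actual similarity parameter is not reproduced here.

    ```python
    import cv2

    # Reference silhouettes (binary masks); file names are placeholders.
    REFERENCES = {
        "cargo": cv2.imread("cargo_mask.png", cv2.IMREAD_GRAYSCALE),
        "tanker": cv2.imread("tanker_mask.png", cv2.IMREAD_GRAYSCALE),
        "sailboat": cv2.imread("sailboat_mask.png", cv2.IMREAD_GRAYSCALE),
    }

    def classify(shape_mask):
        """Return the reference class most similar to the segmented shape.

        cv2.matchShapes compares Hu moment invariants, so the score is
        insensitive to translation, scale and rotation; lower is better.
        """
        scores = {
            name: cv2.matchShapes(shape_mask, ref, cv2.CONTOURS_MATCH_I1, 0.0)
            for name, ref in REFERENCES.items()
        }
        return min(scores, key=scores.get)
    ```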

  18. From different angles: Exploring and applying the design potential of video

    OpenAIRE

    Pasman, G.J.

    2012-01-01

    Recent developments in both hardware and software have brought video within the scope of design students as a new visual design tool. Being more and more equipped with cameras, for example in their smartphones, and with video editing programs on their computers, they are increasingly using video to record their research activities or present their design ideas. In design education, however, the full potential of video as a rich and contextual design medium is yet to be explored and developed. This p...

  19. MOUNT HENRY ROADLESS AREA, MONTANA.

    Science.gov (United States)

    Van Loenen, Richard E.; Conyac, Martin D.

    1984-01-01

    A mineral survey of the Mount Henry Roadless Area, Lincoln County, Montana, was conducted. A small area located along the southwest boundary was determined to have a probable mineral-resource potential for low-grade deposits of stratabound copper and silver. There is little possibility for locatable mineral, coal, oil, gas, and geothermal resources in the remainder of the area. There are no mines, prospects, or records of mineral production within the roadless area.

  20. Defect visualization in FRP-bonded concrete by using high speed camera and motion magnification technique

    Science.gov (United States)

    Qiu, Qiwen; Lau, Denvid

    2017-04-01

    High-speed cameras have the unique capacity of recording fast-moving objects. By using video processing techniques (e.g. motion magnification), the small motions recorded by a high-speed camera can be visualized. The combined use of a video camera and the motion magnification technique is attractive for inspecting structures from a distant scene of interest, owing to its commonplace availability, operational convenience, and cost-efficiency. This paper presents a non-contact method to evaluate defects in FRP-bonded concrete structural elements based on surface motion analysis of high-speed video. In this study, an instantaneous air pressure pulse is used to initiate the vibration of the FRP-bonded concrete and to cause distinct vibration at interfacial defects. The entire structural surface under the air pressure is recorded by a high-speed camera, and the surface motion in the video is amplified by the motion magnification processing technique. The experimental results demonstrate that motion in the interfacial defect region can be visualized in the high-speed video with motion magnification. This validates the effectiveness of the new NDT method for defect detection in the whole composite structural member. The use of a high-speed camera and the motion magnification technique has the advantages of remote detection, efficient inspection, and sensitive measurement, which would be beneficial to structural health monitoring.
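
    Motion magnification along these lines follows the Eulerian approach: band-pass each pixel's intensity over time around the expected vibration band, amplify the filtered component, and add it back. A minimal single-scale sketch in Python (NumPy/SciPy); the cutoff frequencies and gain are illustrative assumptions, and the published technique additionally uses a spatial pyramid decomposition, omitted here.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def magnify_motion(frames, fps, f_lo=20.0, f_hi=80.0, gain=15.0):
        """Single-scale Eulerian motion magnification on a (T, H, W) stack.

        Band-pass each pixel's time series around the expected vibration
        band (which must lie below fps/2), amplify it, and add it back.
        """
        b, a = butter(2, [f_lo, f_hi], btype="bandpass", fs=fps)
        band = filtfilt(b, a, frames.astype(np.float64), axis=0)  # zero-phase
        out = frames.astype(np.float64) + gain * band
        return np.clip(out, 0, 255).astype(np.uint8)
    ```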

  1. Multiple-camera tracking: UK government requirements

    Science.gov (United States)

    Hosmer, Paul

    2007-10-01

    The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's new standard for Video Based Detection Systems (VBDS). The standard was launched in November 2006 and evaluations against it began in July 2007. With the first four i-LIDS scenarios completed, the Home Office Scientific Development Branch (HOSDB) are looking toward the future of intelligent vision in the security surveillance market by adding a fifth scenario to the standard. The fifth i-LIDS scenario will concentrate on the development, testing and evaluation of systems for the tracking of people across multiple cameras. HOSDB and the Centre for the Protection of National Infrastructure (CPNI) identified a requirement to track targets across a network of CCTV cameras using both live and post-event imagery. The Detection and Vision Systems group at HOSDB were asked to determine the current state of the market and develop an in-depth Operational Requirement (OR) based on government end-user requirements. Using this OR the i-LIDS team will develop a full i-LIDS scenario to aid the machine vision community in its development of multi-camera tracking systems. By defining a requirement for multi-camera tracking and building it into the i-LIDS standard, the UK government will provide a widely available tool that developers can use to help them turn theory and conceptual demonstrators into front-line applications. This paper will briefly describe the i-LIDS project and then detail the work conducted in building the new tracking aspect of the standard.

  2. Physics Girl: Where Education meets Cat Videos

    Science.gov (United States)

    Cowern, Dianna

    YouTube is usually considered an entertainment medium for watching cat, gaming, and music videos. But educational channels have been gaining momentum on the platform, some garnering millions of subscribers and billions of views. The Physics Girl YouTube channel is an educational series with PBS Digital Studios created by Dianna Cowern. Using Physics Girl as an example, this talk will examine what it takes to start a short-form educational video series, including logistics and resources. One benefit of video is that every failure is documented on camera and can, and will, be used in this talk as a learning tool. We will look at the channel's demographic reach, discuss best practices for effective physics outreach, and survey how online media and technology can facilitate good and bad learning. The aim of this talk is to show how videos are a unique way to share science and enrich the learning experience, in and out of a classroom.

  3. Implementation of multistandard video signals integrator

    Science.gov (United States)

    Zabołotny, Wojciech M.; Pastuszak, Grzegorz; Sokół, Grzegorz; Borowik, Grzegorz; Gąska, Michał; Kasprowicz, Grzegorz H.; Poźniak, Krzysztof T.; Abramowski, Andrzej; Buchowicz, Andrzej; Trochimiuk, Maciej; Frasunek, Przemysław; Jurkiewicz, Rafał; Nalbach-Moszynska, Małgorzata; Wawrzusiak, Radosław; Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Paweł; Jewartowski, Błażej; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata

    2017-08-01

    The paper describes the prototype implementation of the Video Signals Integrator (VSI). The function of the system is to integrate video signals from many sources. The VSI is a complex hybrid system consisting of hardware, firmware and software components; its creation requires the joint effort of experts from different areas. The VSI capture device is a portable hardware device responsible for capturing video signals from different sources and in various formats, and for transmitting them to the server. The NVR server aggregates video and control streams coming from different sources and multiplexes them into logical channels, with each channel representing a single source. From there each channel can be distributed further to the end clients (consoles) for live display via a number of RTSP servers. The end client can, at the same time, inject control messages into a given channel to control the movement of a CCTV camera.

  4. Temporal compressive imaging for video

    Science.gov (United States)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, for example in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
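
    The TCI measurement model is a per-pixel coded temporal sum: one compressive frame y = Σ_t M_t ∘ x_t for T high-speed frames x_t and binary masks M_t. The sketch below implements that forward model plus a plain ISTA-style recovery with an l1 prior, a simpler stand-in for the TwIST and GMM reconstructions used in the paper; all sizes and parameters are illustrative.

    ```python
    import numpy as np

    T = 8                                     # temporal compression ratio

    def forward(x, masks):
        """Compress T frames into one coded measurement: y = sum_t M_t * x_t."""
        return np.sum(masks * x, axis=0)      # x, masks: (T, H, W)

    def ista_reconstruct(y, masks, n_iter=100, step=0.1, lam=0.01):
        """Recover T frames from one coded frame by iterative shrinkage.

        Minimizes ||y - sum_t M_t x_t||^2 + lam * ||x||_1 via gradient
        steps and soft thresholding; TwIST/GMM use stronger priors.
        """
        x = np.zeros_like(masks, dtype=np.float64)
        for _ in range(n_iter):
            residual = y - forward(x, masks)
            x = x + step * masks * residual   # gradient step (mask adjoint)
            x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # shrinkage
        return x

    # Example with random binary masks and synthetic 8x8 frames.
    rng = np.random.default_rng(0)
    masks = rng.integers(0, 2, size=(T, 8, 8)).astype(np.float64)
    frames = rng.random((T, 8, 8))
    y = forward(frames, masks)
    recovered = ista_reconstruct(y, masks)
    ```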

  5. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frame independently, performing relatively simple operations. Therefore, with DVC the complexity is shifted from encoder to decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application of the DVC principles to camera networks. Thanks to its reversed coding paradigm, M-DVC enables the exploitation of inter-camera redundancy without inter-camera communication, because the frames are encoded independently. One of the key elements in DVC is the Side Information (SI) which is an estimation

  6. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    Science.gov (United States)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed, a thermal image was at first presented to the observer in the eyepiece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, with digital video streaming nowadays being state of the art. The reasons why the output technique in the thermal world stayed unchanged for such a long time are: the very conservative view of the military community, the long planning and turn-around times of programs, and the slower growth in pixel count of TIs in comparison to consumer cameras. With megapixel detectors, the CCIR output format is no longer sufficient. The paper discusses state-of-the-art compression and streaming solutions for TIs.

  7. Mounting support for a photovoltaic module

    Science.gov (United States)

    Brandt, Gregory Michael; Barsun, Stephan K.; Coleman, Nathaniel T.; Zhou, Yin

    2013-03-26

    A mounting support for a photovoltaic module is described. The mounting support includes a foundation having an integrated wire-way ledge portion. A photovoltaic module support mechanism is coupled with the foundation.

  8. Distributed embedded smart cameras architectures, design and applications

    CERN Document Server

    Velipasalar, Senem

    2014-01-01

    This publication addresses distributed embedded smart cameras: cameras that perform onboard analysis and collaborate with other cameras. This book provides the material required to better understand the architectural design challenges of embedded smart camera systems, the hardware/software ecosystem, the design approach for, and applications of, distributed smart cameras, together with state-of-the-art algorithms. The authors concentrate on the architecture, hardware/software design, and realization of smart camera networks from applications to architectures, in particular in the embedded and mobile domains.
    • Examines energy issues related to wireless communication, such as decreasing energy consumption to increase battery life
    • Discusses processing large volumes of video data in an embedded environment in real time
    • Covers design of realistic applications of distributed and embedded smart...

  9. Improving the Quality of Color Colonoscopy Videos

    Directory of Open Access Journals (Sweden)

    Dahyot Rozenn

    2008-01-01

    Full Text Available Colonoscopy is currently one of the best methods to detect colorectal cancer. Nowadays, one of the widely used colonoscopes has a monochrome chipset recording the color components successively at 60 Hz, which are merged into one color video stream. Misalignments of the channels occur each time the camera moves, and this artefact impedes both online visual inspection by doctors and offline computer analysis of the image data. We propose to correct this artefact by first equalizing the color channels and then performing robust camera motion estimation and compensation.
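
    The channel misalignment can be reduced by registering the red and blue channels onto the green one. A minimal sketch using OpenCV's ECC alignment (OpenCV 4.1 or later) as a generic stand-in for the paper's robust motion estimation and compensation:

    ```python
    import cv2
    import numpy as np

    def realign_channels(frame):
        """Register the R and B channels of a BGR frame onto the G channel.

        Uses translation-only ECC alignment; with sequential RGB capture,
        small inter-channel shifts are the dominant artefact.
        """
        b, g, r = cv2.split(frame.astype(np.float32) / 255.0)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
        aligned = []
        for ch in (b, r):
            warp = np.eye(2, 3, dtype=np.float32)
            cv2.findTransformECC(g, ch, warp, cv2.MOTION_TRANSLATION,
                                 criteria, None, 5)
            aligned.append(cv2.warpAffine(
                ch, warp, (g.shape[1], g.shape[0]),
                flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP))
        out = cv2.merge([aligned[0], g, aligned[1]])
        return (np.clip(out, 0, 1) * 255).astype(np.uint8)
    ```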

  10. Three-dimensional camera

    Science.gov (United States)

    Bothe, Thorsten; Gesierich, Achim; Legarda-Saenz, Ricardo; Jueptner, Werner P. O.

    2003-05-01

    Industrial and multimedia applications need cost-effective, compact and flexible 3D profiling instruments. In the talk we will show the principle of, applications for, and results from a new miniaturized 3-D profiling system for macroscopic scenes. The system uses a compact housing and is usable like a camera with minimal stabilization such as a tripod. The system is based on the common fringe projection technique. Camera and projector are assembled with parallel optical axes having coplanar projection and imaging planes. The distance between their axes is comparable to the distance between the human eyes, giving a complete system of 21x20x11 cm size and allowing high-gradient objects, like the interior of tubes, to be measured. The fringe projector uses an LCD, which enables fast and flexible pattern projection. Camera and projector have a short focal length and a high system aperture as well as a large depth of focus. Thus, objects can be measured from a shorter distance compared to common systems (e.g. 1 m sized objects at 80 cm distance). In fact, objects with diameters up to 4 m can be profiled, because the set-up allows working with a completely opened aperture combined with bright lamps, giving a large amount of available light and a high signal-to-noise ratio. Normally a small basis has the disadvantage of reduced sensitivity. We investigated methods to compensate for the reduced sensitivity via the setup and enhanced evaluation methods. For measurement we use synthetic wavelengths. The developed algorithms are completely adaptable to the user's needs for speed and accuracy. The 3D camera is built from low-cost components, is robust and nearly handheld, and delivers insights also into difficult technical objects like tubes and interior volumes. Besides the realized high-resolution phase measurement, system calibration is an important task for usability. While calibrating with common photogrammetric models (which are typically used for current fringe projection systems) problems were found that

  11. Digital camera in ophthalmology

    Directory of Open Access Journals (Sweden)

    Ashish Mitra

    2015-01-01

    Full Text Available Ophthalmology is an expensive field, and imaging is an indispensable modality in it; in developing countries, including India, it is not possible for every ophthalmologist to afford a slit-lamp photography unit. We here present our experience of slit-lamp photography using a digital camera. Good-quality pictures of anterior and posterior segment disorders were captured using readily available devices. It can be used as a good teaching tool for residents learning ophthalmology, and can also be a method to document lesions, which is often necessary for medicolegal purposes. It is a technique which is simple, inexpensive, and has a short learning curve.

  12. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. Applying a ToF camera to an AGV is a suitable approach to autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost; after calibration and ground testing, it is used to extract information about obstacles, and it is mounted on and integrated with the Pioneer mobile robot. The workspace is a two-dimensional (2D) world map which has been divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data are used to populate traversable areas and obstacles on a grid of cells of suitable size. These camera data are converted into Cartesian coordinates for entry into a workspace grid map. A more optimal camera mounting angle is needed; it is adopted by analysing the camera's performance discrepancies, such as pixel detection, the detection rate, the maximum perceived distance, and infrared (IR) scattering with respect to the ground surface. This mounting angle is recommended to be half the vertical field of view (FoV) of the PMD camera. A series of static and moving tests are conducted on the AGV to verify correct sensor operation; these show that the postulated application of the ToF camera in the AGV is not straightforward. To stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are then implemented in a real-time experiment.
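
    Populating the 2D grid map from PMD depth data amounts to back-projecting each depth pixel with the intrinsics, undoing the mounting tilt, and marking the corresponding cell. A minimal sketch; the intrinsics, tilt (half of an assumed 40-degree vertical FoV), mounting height and thresholds are all illustrative.

    ```python
    import numpy as np

    # Illustrative values: PMD intrinsics, 40-degree vertical FoV, 0.5 m mount.
    FX, FY, CX, CY = 90.0, 90.0, 100.0, 60.0
    TILT = np.deg2rad(20.0)          # downward tilt = half the vertical FoV
    CAM_HEIGHT = 0.5                 # camera height above the floor [m]
    CELL = 0.1                       # grid resolution [m]

    def depth_to_grid(depth, grid_size=200):
        """Project a (H, W) ToF depth image into a 2D obstacle grid."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Back-project pixels to camera-frame points (x right, y down, z forward).
        x = (u - CX) / FX * depth
        y = (v - CY) / FY * depth
        z = depth
        # Rotate about the x-axis to undo the downward mounting tilt.
        c, s = np.cos(TILT), np.sin(TILT)
        y_lvl = c * y - s * z
        z_lvl = s * y + c * z
        # Anything clearly above the floor plane is treated as an obstacle.
        obstacle = (y_lvl < CAM_HEIGHT - 0.05) & (depth > 0)
        grid = np.zeros((grid_size, grid_size), dtype=bool)
        gx = np.clip((x[obstacle] / CELL).astype(int) + grid_size // 2,
                     0, grid_size - 1)
        gz = np.clip((z_lvl[obstacle] / CELL).astype(int), 0, grid_size - 1)
        grid[gz, gx] = True
        return grid
    ```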

  13. REAL TIME SPEED ESTIMATION FROM MONOCULAR VIDEO

    Directory of Open Access Journals (Sweden)

    M. S. Temiz

    2012-07-01

    Full Text Available In this paper, detailed studies performed while developing a real-time system for surveillance of traffic flow, using monocular video cameras to find vehicle speeds for safer travelling, are presented. We assume that the studied road segment is planar and straight, the camera is tilted downward from a bridge, and the length of one line segment in the image is known. In order to estimate the speed of a moving vehicle from a video camera, rectification of the video images is performed to eliminate perspective effects, and then the region of interest (ROI) is determined for tracking the vehicles. Velocity vectors of a sufficient number of reference points are identified on the image of the vehicle in each video frame. For this purpose a sufficient number of points on the vehicle is selected, and these points must be accurately tracked over at least two successive video frames. In the second step, by using the displacement vectors of the tracked points and the elapsed time, the velocity vectors of those points are computed. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. Then the magnitudes of the computed vectors in image space are transformed to object space to find their absolute values. The accuracy of the estimated speed is approximately ±1-2 km/h. In order to solve the real-time speed estimation problem, the authors have written a software system in the C++ programming language. This software system has been used for all of the computations and test applications.
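
    The core computation reduces to rectifying tracked image points onto the road plane and dividing displacement by elapsed time. A minimal sketch with OpenCV, assuming a pre-calibrated image-to-road homography H (obtainable from the known line-segment length mentioned above):

    ```python
    import cv2
    import numpy as np

    def speed_kmh(p_prev, p_curr, H, fps):
        """Estimate speeds from points tracked over two successive frames.

        p_prev, p_curr: (N, 2) pixel coordinates of the same points.
        H: 3x3 image-to-road-plane homography (metres), pre-calibrated.
        Returns one speed estimate per tracked point, in km/h.
        """
        prev_m = cv2.perspectiveTransform(
            p_prev.reshape(-1, 1, 2).astype(np.float32), H)
        curr_m = cv2.perspectiveTransform(
            p_curr.reshape(-1, 1, 2).astype(np.float32), H)
        dist = np.linalg.norm((curr_m - prev_m).reshape(-1, 2), axis=1)
        return dist * fps * 3.6          # metres/frame -> m/s -> km/h
    ```

    The per-point speeds for one vehicle would then be averaged or median-filtered, consistent with the abstract's requirement that points be tracked over at least two successive frames.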

  14. Mars Cameras Make Panoramic Photography a Snap

    Science.gov (United States)

    2008-01-01

    If you wish to explore a Martian landscape without leaving your armchair, a few simple clicks around the NASA Web site will lead you to panoramic photographs taken from the Mars Exploration Rovers, Spirit and Opportunity. Many of the technologies that enable this spectacular Mars photography have also inspired advancements in photography here on Earth, including the panoramic camera (Pancam) and its housing assembly, designed by the Jet Propulsion Laboratory and Cornell University for the Mars missions. Mounted atop each rover, the Pancam mast assembly (PMA) can tilt a full 180 degrees and swivel 360 degrees, allowing for a complete, highly detailed view of the Martian landscape. The rover Pancams take small, 1-megapixel (1 million pixel) digital photographs, which are stitched together into large panoramas that sometimes measure 4 by 24 megapixels. The Pancam software performs some image correction and stitching after the photographs are transmitted back to Earth. Different lens filters and a spectrometer also assist scientists in their analyses of infrared radiation from the objects in the photographs. These photographs from Mars spurred developers to begin thinking in terms of larger and higher-quality images: super-sized digital pictures, or gigapixels, which are images composed of 1 billion or more pixels. Gigapixel images are more than 200 times the size captured by today's standard 4-megapixel digital camera. Although originally created for the Mars missions, the detail provided by these large photographs allows for many purposes, not all of which are limited to extraterrestrial photography.

  15. Human recognition at a distance in video

    CERN Document Server

    Bhanu, Bir

    2010-01-01

    Most biometric systems employed for human recognition require physical contact with, or close proximity to, a cooperative subject. Far more challenging is the ability to reliably recognize individuals at a distance, when viewed from an arbitrary angle under real-world environmental conditions. Gait and face data are the two biometrics that can be most easily captured from a distance using a video camera. This comprehensive and logically organized text/reference addresses the fundamental problems associated with gait and face-based human recognition, from color and infrared video data that are

  16. Compact 3D camera

    Science.gov (United States)

    Bothe, Thorsten; Osten, Wolfgang; Gesierich, Achim; Jueptner, Werner P. O.

    2002-06-01

    A new, miniaturized fringe projection system is presented which has a size and handling that approximate those of common 2D cameras. The system is based on the fringe projection technique. A miniaturized fringe projector and camera are assembled into a housing of 21x20x11 cm size with a triangulation basis of 10 cm. The advantage of the small triangulation basis is the possibility to measure difficult objects with high gradients. Normally a small basis has the disadvantage of reduced sensitivity. We investigated methods to compensate for the reduced sensitivity via the setup and enhanced evaluation methods. Special hardware issues are a high-quality, bright light source (and components to handle the high luminous flux), adapted optics to gain a large aperture angle, and a focus scan unit to increase the usable measurement volume. Adaptable synthetic wavelengths and integration times were used to increase the measurement quality and allow robust measurements that are adaptable to the desired speed and accuracy. Algorithms were developed to generate automatic focus positions to completely cover extended measurement volumes. Principles, setup, measurement examples and applications are shown.
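
    The synthetic-wavelength idea used here can be stated compactly: evaluating the phase of two fringe periods together behaves like measuring with a single, much longer beat period, which extends the unambiguous range (at the cost of amplified phase noise). The standard relation, with illustrative numbers:

    $$
    \Lambda_s = \frac{\Lambda_1 \Lambda_2}{\lvert \Lambda_1 - \Lambda_2 \rvert},
    \qquad \text{e.g. } \Lambda_1 = 10\,\mathrm{mm},\ \Lambda_2 = 11\,\mathrm{mm}
    \;\Rightarrow\; \Lambda_s = 110\,\mathrm{mm}.
    $$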

  17. Impact of New Camera Technologies on Discoveries in Cell Biology.

    Science.gov (United States)

    Stuurman, Nico; Vale, Ronald D

    2016-08-01

    New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, when he figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day by a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel Prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron-multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts on light microscopy. © 2016 Marine Biological Laboratory.

  18. Classifying smoke in laparoscopic videos using SVM

    Directory of Open Access Journals (Sweden)

    Alshirbaji Tamer Abdulbaki

    2017-09-01

    Full Text Available Smoke in laparoscopic videos usually appears due to the use of electrocautery when cutting or coagulating tissues. Therefore, detecting smoke can be used for event-based annotation in laparoscopic surgeries by retrieving the events associated with electrocauterization. Furthermore, smoke detection can also be used for automatic smoke removal. However, detecting smoke in laparoscopic video is a challenge because of the changeability of smoke patterns, the moving camera and the varying lighting conditions. In this paper, we present a video-based smoke detection algorithm to detect smoke of different densities, such as fog-like, low-density and high-density smoke, in laparoscopic videos. The proposed method depends on extracting various visual features from the laparoscopic images and providing them to a support vector machine (SVM) classifier. Features are based on motion, colour and texture patterns of the smoke. We validated our algorithm by experimental evaluation on four laparoscopic cholecystectomy videos. These four videos were manually annotated by defining every frame as a smoke or non-smoke frame. The algorithm was applied to the videos using different feature combinations for classification. Experimental results show that the combination of all proposed features gives the best classification performance. The overall accuracy (i.e. correctly classified frames) is around 84%, while the sensitivity (i.e. correctly detected smoke frames) and the specificity (i.e. correctly detected non-smoke frames) are 89% and 80%, respectively.
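
    Frame-level smoke classification of this kind maps naturally onto a feature-extraction plus SVM pipeline. A minimal sketch with OpenCV and scikit-learn; the three features and the placeholder files are illustrative assumptions, not the paper's exact motion, colour and texture descriptors.

    ```python
    import cv2
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    def frame_features(frame, prev_gray):
        """Illustrative smoke cues: desaturation, low contrast, motion energy."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        feats = np.array([
            hsv[..., 1].mean(),                  # smoke desaturates the image
            gray.std(),                          # smoke lowers local contrast
            np.linalg.norm(flow, axis=2).mean()  # drifting smoke -> motion
        ])
        return feats, gray

    # X: (n_frames, 3) feature matrix, y: manual smoke/non-smoke labels.
    # Building X and y from the annotated videos is assumed done already;
    # the .npy file names are placeholders.
    X, y = np.load("features.npy"), np.load("labels.npy")
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))
    ```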

  19. 13 point video tape quality guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Gaunt, R.

    1997-05-01

    Until high-definition television (ATV) arrives, in the U.S. we must still contend with the National Television Systems Committee (NTSC) video standard (or PAL or SECAM, depending on your country). NTSC, a 40-year-old standard designed for transmission of color video camera images over a small bandwidth, is not well suited to the sharp, full-color images that today's computers are capable of producing. PAL and SECAM also suffer from many of NTSC's problems, but to varying degrees. Video professionals, when working with computer graphic (CG) images, use two monitors: a computer monitor for producing CGs and an NTSC monitor to view how a CG will look on video. More often than not, the NTSC image will differ significantly from the CG image, and outputting it to NTSC as an artist works enables him or her to see the image as others will see it. Below are thirteen guidelines designed to increase the quality of computer graphics recorded onto video tape. Viewing your work in NTSC and attempting to follow the tips below will enable you to create higher-quality videos. No video is perfect, so don't expect to abide by every guideline every time.

  20. MOUNT MORIAH ROADLESS AREA, NEVADA.

    Science.gov (United States)

    Carlson, Robert R.; Wood, Robert H.

    1984-01-01

    A mineral survey identified the northeastern part of the Mount Moriah Roadless Area in extreme east-central Nevada as an area of probable potential for the occurrence of small, isolated deposits containing lead and zinc. Many active quarries in a unique high-quality decorative building stone occur in the area and have substantiated mineral-resource potential. Further studies in the roadless area might include detailed mapping of exposed Prospect Mountain Quartzite building stone units and notation of their suitability for quarrying. More detailed geochemical studies in the area of probable base-metal resource potential might include additional stream-sediment sampling and sampling along fault zones.

  1. Making Sure What You See Is What You Get: Digital Video Technology and the Preparation of Teachers of Elementary Science

    Science.gov (United States)

    Bueno de Mesquita, Paul; Dean, Ross F.; Young, Betty J.

    2010-01-01

    Advances in digital video technology create opportunities for more detailed qualitative analyses of actual teaching practice in science and other subject areas. User-friendly digital cameras and highly developed, flexible video-analysis software programs have made the tasks of video capture, editing, transcription, and subsequent data analysis…

  2. Choreographing the Frame: A Critical Investigation into How Dance for the Camera Extends the Conceptual and Artistic Boundaries of Dance

    Science.gov (United States)

    Preston, Hilary

    2006-01-01

    This essay investigates the collaboration between dance and choreographic practice and film/video medium in a contemporary context. By looking specifically at dance made for the camera and the proliferation of dance-film/video, critical issues will be explored that have surfaced in response to this burgeoning form. Presenting a view of avant-garde…

  3. Feasibility Study of Utilization of Action Camera, GoPro Hero 4, Google Glass, and Panasonic HX-A100 in Spine Surgery.

    Science.gov (United States)

    Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong

    2017-02-15

    A study of the feasibility of commercially available action cameras for recording video of spine surgery. Recent innovations in wearable action cameras with high-definition video recording enable surgeons to use a camera during an operation with ease and without high costs. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and GoPro. This study is the first report for spine surgery. Three commercially available cameras were tested: GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery was selected for video recording: posterior lumbar laminectomy and fusion. The three cameras were used by one surgeon and video was recorded throughout the operation. The comparison was made from the perspectives of human factors, specifications, and video quality. The most convenient and lightweight device for wearing and holding throughout the long operation time was Google Glass. Regarding image quality, all devices except Google Glass supported HD format, and GoPro has a unique 2.7K or 4K resolution; video resolution quality was best with GoPro. Regarding field of view, GoPro can adjust the point of interest and field of view according to the surgery, and its narrow-FOV option was the best for recording video clips to share. Google Glass has potential through the use of application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming for an audience, but only Google Glass has a two-way communication feature in the device. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in the field of spinal surgery, and to broadcast a surgery, with further development of the devices and applied programs in the future. N/A.

  4. MR360: Mixed Reality Rendering for 360° Panoramic Videos.

    Science.gov (United States)

    Rhee, Taehyun; Petikam, Lohit; Allen, Benjamin; Chalmers, Andrew

    2017-04-01

    This paper presents a novel immersive system called MR360 that provides interactive mixed reality (MR) experiences using a conventional low dynamic range (LDR) 360° panoramic video (360-video) shown in head mounted displays (HMDs). MR360 seamlessly composites 3D virtual objects into a live 360-video using the input panoramic video as the lighting source to illuminate the virtual objects. Image based lighting (IBL) is perceptually optimized to provide fast and believable results using the LDR 360-video as the lighting source. Regions of most salient lights in the input panoramic video are detected to optimize the number of lights used to cast perceptible shadows. Then, the areas of the detected lights adjust the penumbra of the shadow to provide realistic soft shadows. Finally, our real-time differential rendering synthesizes illumination of the virtual 3D objects into the 360-video. MR360 provides the illusion of interacting with objects in a video, which are actually 3D virtual objects seamlessly composited into the background of the 360-video. MR360 was implemented in a commercial game engine and tested using various 360-videos. Since our MR360 pipeline does not require any pre-computation, it can synthesize an interactive MR scene using a live 360-video stream while providing realistic high performance rendering suitable for HMDs.
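
    Detecting the most salient lights in an LDR equirectangular frame can be sketched as thresholding the brightest pixels and clustering them into area lights, whose centroids give light directions and whose extents scale the shadow penumbra. An illustrative sketch only; MR360's perceptual optimization is more involved.

    ```python
    import cv2
    import numpy as np

    def detect_lights(pano_bgr, max_lights=4):
        """Cluster the brightest regions of an equirectangular frame.

        Returns (direction, strength) pairs, with direction given as
        (azimuth, elevation) in radians.
        """
        h, w = pano_bgr.shape[:2]
        luma = cv2.cvtColor(pano_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
        thresh = np.percentile(luma, 99.0)         # top 1% brightest pixels
        mask = (luma >= thresh).astype(np.uint8)
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
        # Rank blobs by total energy; label 0 is the background.
        energy = [(luma[labels == i].sum(), i) for i in range(1, n)]
        lights = []
        for e, i in sorted(energy, reverse=True)[:max_lights]:
            cx, cy = centroids[i]
            azimuth = (cx / w) * 2 * np.pi - np.pi   # column -> [-pi, pi]
            elevation = np.pi / 2 - (cy / h) * np.pi # row -> [-pi/2, pi/2]
            lights.append(((azimuth, elevation), float(e)))
        return lights
    ```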

  5. ACCURACY EVALUATION OF STEREO CAMERA SYSTEMS WITH GENERIC CAMERA MODELS

    Directory of Open Access Journals (Sweden)

    D. Rueß

    2012-07-01

    Full Text Available In the last decades the consumer and industrial market for non-projective cameras has been growing notably. This has led to the development of camera description models other than the pinhole model and to their employment in mostly homogeneous camera systems. Heterogeneous camera systems (for instance, combining Fisheye and Catadioptric cameras) can also easily be conceived for real applications. However, it has not been quite clear how accurate stereo vision with these cameras and models can be. In this paper, different accuracy aspects are addressed by analytical inspection, numerical simulation and real image data evaluation. This analysis is generic, for any camera projection model, although only polynomial and rational projection models are used for distortion-free, Catadioptric and Fisheye lenses. Note that this is different from the polynomial and rational radial distortion models which have been addressed extensively in the literature. For single-camera analysis it turns out that point features towards the image sensor borders are significantly more accurate than in the center regions of the sensor. For heterogeneous two-camera systems it turns out that reconstruction accuracy decreases significantly towards the image borders, as different projective distortions occur.

  6. Gyroscope and visual fusion solution for digital video stabilization

    Science.gov (United States)

    Wei, Shanshan; He, Zhiqiang; Xie, Wei

    2016-09-01

    A gyroscope and visual fusion solution for digital video stabilization (DVS) is presented. The solution classifies DVS-related motions into three types: the object motion (OM) in the world space, the camera motion in the camera space (CS), and the pixel motion in the image space (IS). The camera rotation is estimated by the gyroscope and smoothed in the CS, while the camera translation is compounded with the OM and smoothed together with it in the IS. The main contributions of this paper lie in two aspects: (1) it proposes an inertial and visual fusion method to stabilize both rotational and translational jitter, and (2) the fusion method is simple and fast in computation and is suitable for smart terminals. Experimental results show that the proposed solution performs well in video stabilization.
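
    The rotational half of such a pipeline can be sketched as: integrate gyroscope rates into per-frame orientations, low-pass the orientation path, and warp each frame by the correction between real and smoothed pose. A minimal sketch with SciPy rotations, assuming time-synchronized gyro samples; the smoothing factor is illustrative.

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation

    def stabilizing_rotations(gyro_rates, dt, alpha=0.9):
        """Per-frame correction rotations from gyro rates (N, 3) in rad/s.

        Integrates the rates into camera orientations, low-passes the
        orientation path on the rotation manifold, and returns the
        rotation mapping each real pose onto its smoothed counterpart.
        """
        orientations = [Rotation.identity()]
        for w in gyro_rates:
            orientations.append(orientations[-1] * Rotation.from_rotvec(w * dt))
        smoothed = [orientations[0]]
        for r in orientations[1:]:
            # Move a (1 - alpha) fraction along the geodesic toward the
            # new pose: a first-order low-pass filter on SO(3).
            delta = (smoothed[-1].inv() * r).as_rotvec()
            smoothed.append(smoothed[-1] * Rotation.from_rotvec((1 - alpha) * delta))
        return [s * o.inv() for s, o in zip(smoothed, orientations)]
    ```

    Each returned correction would be applied to its frame as the homography K R K^-1 with camera intrinsics K; the translational component, as the abstract notes, is handled separately in the image space.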

  7. Use and validation of mirrorless digital single-lens reflex camera for recording of vitreoretinal surgeries in high definition

    Directory of Open Access Journals (Sweden)

    Sumeet Khanduja

    2018-01-01

    Full Text Available Purpose: The purpose of this study is to describe the use of a commercial digital single-lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7S II camera and a Sony high-definition 3-chip camera attached to either side of the microscope. The videos recorded by both camera systems were edited, and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under the direct viewing system, and (c) surgery under the indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing the mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos, except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching.

  8. Airborne imaging for heritage documentation using the Fotokite tethered flying camera

    Science.gov (United States)

    Verhoeven, Geert; Lupashin, Sergei; Briese, Christian; Doneus, Michael

    2014-05-01

    Since the beginning of aerial photography, researchers have used all kinds of devices (from pigeons, kites, poles, and balloons to rockets) to take still cameras aloft and remotely gather aerial imagery. To date, many of these unmanned devices are still used for what has been referred to as Low-Altitude Aerial Photography or LAAP. In addition to these more traditional camera platforms, radio-controlled (multi-)copter platforms have recently added a new aspect to LAAP. Although model airplanes have been around for several decades, the decreasing cost and the increasing functionality and stability of ready-to-fly multi-copter systems have proliferated their use among non-hobbyists. As such, they became a very popular tool for aerial imaging. The overwhelming number of currently available brands and types (heli-, dual-, tri-, quad-, hexa-, octo-, dodeca-, deca-hexa and deca-octocopters), together with the wide variety of navigation options (e.g. altitude and position hold, waypoint flight) and camera mounts, indicates that these platforms are here to stay for some time. Given the multitude of still camera types and the image quality they are currently capable of, endless combinations of low- and high-cost LAAP solutions are available. In addition, LAAP allows for the exploitation of new imaging techniques, as it is often only a matter of lifting the appropriate device (e.g. video cameras, thermal frame imagers, hyperspectral line sensors). Archaeologists were among the first to adopt this technology, as it provided them with a means to easily acquire essential data from a unique point of view, whether for simple illustration purposes of standing historic structures or to compute three-dimensional (3D) models and orthophotographs from excavation areas. However, even very cheap multi-copter models require certain skills to pilot them safely. Additionally, malfunction or overconfidence might lift these devices to altitudes where they can interfere with manned aircraft. As such, the

  9. Establishing the reliability of rhesus macaque social network assessment from video observations.

    Science.gov (United States)

    Feczko, Eric; Mitchell, Thomas A J; Walum, Hasse; Brooks, Jenna M; Heitz, Thomas R; Young, Larry J; Parr, Lisa A

    2015-09-01

    Understanding the properties of a social environment is important for understanding the dynamics of social relationships. Understanding such dynamics is relevant for multiple fields, ranging from animal behaviour to social and cognitive neuroscience. To quantify social environment properties, recent studies have incorporated social network analysis. Social network analysis quantifies both the global and local properties of a social environment, such as social network efficiency and the roles played by specific individuals, respectively. Despite the plethora of studies incorporating social network analysis, methods to determine the amount of data necessary to derive reliable social networks are still being developed. Determining the amount of data necessary for a reliable network is critical for measuring changes in the social environment, for example following an experimental manipulation, and therefore may be critical for using social network analysis to statistically assess social behaviour. In this paper, we extend methods for measuring error in acquired data and for determining the amount of data necessary to generate reliable social networks. We derived social networks from a group of 10 male rhesus macaques, Macaca mulatta, for three behaviours: spatial proximity, grooming and mounting. Behaviours were coded using a video observation technique, where video cameras recorded the compound where the 10 macaques resided. We collected, coded and used 10 h of video data to construct these networks. Using the methods described here, we found in our data that 1 h of spatial proximity observations produced reliable social networks. However, this may not be true for other studies due to differences in data acquisition. Our results have broad implications for measuring and predicting the amount of error in any social network, regardless of species.
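
    The amount-of-data question can be probed with a split-half procedure: build two networks from disjoint halves of the observations collected within a given duration and measure their agreement. A minimal sketch in Python; this is an illustrative estimator, not the authors' exact method.

    ```python
    import numpy as np

    def weight_matrix(obs, n):
        """Symmetric count matrix of pairwise observations (time, i, j)."""
        m = np.zeros((n, n))
        for _, i, j in obs:
            m[int(i), int(j)] += 1
            m[int(j), int(i)] += 1
        return m

    def split_half_reliability(events, n_animals, duration, n_rep=100, seed=0):
        """Mean edge-weight correlation between networks built from two
        disjoint halves of the observations within `duration` time units.

        events: array-like of (time, i, j) proximity observations. A
        plateau of this statistic as `duration` grows suggests the
        network has become reliable.
        """
        rng = np.random.default_rng(seed)
        ev = np.asarray(events, dtype=float)
        ev = ev[ev[:, 0] < duration]
        iu = np.triu_indices(n_animals, k=1)
        rs = []
        for _ in range(n_rep):
            idx = rng.permutation(len(ev))
            a, b = ev[idx[: len(idx) // 2]], ev[idx[len(idx) // 2:]]
            m1, m2 = weight_matrix(a, n_animals), weight_matrix(b, n_animals)
            rs.append(np.corrcoef(m1[iu], m2[iu])[0, 1])
        return float(np.mean(rs))
    ```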

  10. Establishing the reliability of rhesus macaque social network assessment from video observations

    Science.gov (United States)

    Feczko, Eric; Mitchell, Thomas A. J.; Walum, Hasse; Brooks, Jenna M.; Heitz, Thomas R.; Young, Larry J.; Parr, Lisa A.

    2015-01-01

    Understanding the properties of a social environment is important for understanding the dynamics of social relationships. Understanding such dynamics is relevant for multiple fields, ranging from animal behaviour to social and cognitive neuroscience. To quantify social environment properties, recent studies have incorporated social network analysis. Social network analysis quantifies both the global and local properties of a social environment, such as social network efficiency and the roles played by specific individuals, respectively. Despite the plethora of studies incorporating social network analysis, methods to determine the amount of data necessary to derive reliable social networks are still being developed. Determining the amount of data necessary for a reliable network is critical for measuring changes in the social environment, for example following an experimental manipulation, and therefore may be critical for using social network analysis to statistically assess social behaviour. In this paper, we extend methods for measuring error in acquired data and for determining the amount of data necessary to generate reliable social networks. We derived social networks from a group of 10 male rhesus macaques, Macaca mulatta, for three behaviours: spatial proximity, grooming and mounting. Behaviours were coded using a video observation technique, where video cameras recorded the compound where the 10 macaques resided. We collected, coded and used 10 h of video data to construct these networks. Using the methods described here, we found in our data that 1 h of spatial proximity observations produced reliable social networks. However, this may not be true for other studies due to differences in data acquisition. Our results have broad implications for measuring and predicting the amount of error in any social network, regardless of species. PMID:26392632

  11. Disembodied perspective: third-person images in GoPro videos

    National Research Council Canada - National Science Library

    Bédard, Philippe

    2015-01-01

    A technical analysis of GoPro videos, focusing on the production of a third-person perspective created when the camera is turned back on the user, and the sense of disorientation that results for the spectator...

  12. An Automatic Video Meteor Observation Using UFO Capture at the Showa Station

    Science.gov (United States)

    Fujiwara, Y.; Nakamura, T.; Ejiri, M.; Suzuki, H.

    2012-05-01

    The goal of our study is to clarify meteor activities in the southern hemisphere by continuous optical observations with video cameras with automatic meteor detection and recording at Syowa Station, Antarctica.

  13. GPM GROUND VALIDATION TWO-DIMENSIONAL VIDEO DISDROMETER (2DVD) NSSTC V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The Two-dimensional Video Disdrometer (2DVD) uses two high speed line scan cameras which provide continuous measurements of size distribution, shape and fall...

  14. Photorealistic image synthesis and camera validation from 2D images

    Science.gov (United States)

    Santos Ferrer, Juan C.; González Chévere, David; Manian, Vidya

    2014-06-01

    This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method presented here allows us to reconstruct the shapes of simple objects and more complex ones from multiple 2D images, including infrared and digital images for indoor scenes and digital images only for outdoor scenes, and then to add the reconstructed object to the simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method used different camera settings and explores different properties in the reconstruction of the scenes, including light, color, texture, shapes and different views. To achieve the highest possible resolution, extraction of partial textures from visible surfaces was necessary. To recover the 3D shapes and the depth of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depth of more complex objects the triangulation method was used; for this, the intrinsic and extrinsic parameters were calculated using geometric camera calibration. The Matlab tool was used to implement the methods mentioned above. The technique presented here also lets us simulate short, simple videos by reconstructing a sequence of multiple scenes of the video separated by small margins of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used. Low-bandwidth perception-based features include edges and motion.

  15. Virtual displays for 360-degree video

    Science.gov (United States)

    Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.

    2012-03-01

    In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360- degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.

  16. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    Science.gov (United States)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments which are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between
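
    The per-frame pose estimation step is the classic PnP problem: given 3D ground control points from the point cloud and their pixel observations in a vehicle-camera frame, recover that camera's pose. A minimal OpenCV sketch; the paper additionally refines all frame poses jointly in a bundle adjustment.

    ```python
    import cv2
    import numpy as np

    def camera_pose(object_pts, image_pts, K, dist=None):
        """Estimate a camera pose from 3D ground control points.

        object_pts: (N, 3) points in the vehicle coordinate system.
        image_pts: (N, 2) corresponding pixels in one video frame.
        K: 3x3 camera intrinsics. Returns rotation matrix and camera
        center expressed in the vehicle coordinate system.
        """
        ok, rvec, tvec = cv2.solvePnP(
            object_pts.astype(np.float64), image_pts.astype(np.float64),
            K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
        if not ok:
            raise RuntimeError("PnP failed")
        R, _ = cv2.Rodrigues(rvec)
        center = (-R.T @ tvec).ravel()
        return R, center
    ```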

  17. STRUCTURE-FROM-MOTION FOR CALIBRATION OF A VEHICLE CAMERA SYSTEM WITH NON-OVERLAPPING FIELDS-OF-VIEW IN AN URBAN ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    A. Hanel

    2017-05-01

    Full Text Available Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments which are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle

  18. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  19. Playing with the Camera - Creating with Each Other

    DEFF Research Database (Denmark)

    Vestergaard, Vitus

    2015-01-01

    Many contemporary museums try to involve users as active participants in a range of new ways. One way to engage young people with visual culture is through exhibits where users produce their own videos. Since museum experiences are largely social in nature and based on the group as a social unit, it is imperative to investigate how museum users in a group create videos and engage with each other and the exhibits. Based on research on young users creating videos in the Media Mixer, this article explores what happens during the creative process in front of a camera. Drawing upon theories of museology, media, learning and creativity, the article discusses how to operationalize and make sense of seemingly chaotic or banal production processes in a museum.

  20. Three-dimensional video presentation of microsurgery by the cross-eyed viewing method using a high-definition video system.

    Science.gov (United States)

    Terakawa, Yuzo; Ishibashi, Kenichi; Goto, Takeo; Ohata, Kenji

    2011-01-01

    Three-dimensional (3-D) video recording of microsurgery is a more promising tool for presentation and education of microsurgery than conventional two-dimensional video systems, but has not been widely adopted partly because 3-D image processing of previous 3-D video systems is complicated and observers without optical devices cannot visualize the 3-D image. A new technical development for 3-D video presentation of microsurgery is described. Microsurgery is recorded with a microscope equipped with a single high-definition (HD) video camera. This 3-D video system records the right- and left-eye views of the microscope simultaneously as single HD data with the use of a 3-D camera adapter: the right- and left-eye views of the microscope are displayed separately on the right and left sides, respectively. The operation video is then edited with video editing software so that the right-eye view is displayed on the left side and left-eye view is displayed on the right side. Consequently, a 3-D video of microsurgery can be created by viewing the edited video by the cross-eyed stereogram viewing method without optical devices. The 3-D microsurgical video provides a more accurate view, especially with regard to depth, and a better understanding of microsurgical anatomy. Although several issues are yet to be addressed, this 3-D video system is a useful method of recording and presenting microsurgery for 3-D viewing with currently available equipment, without optical devices.