WorldWideScience

Sample records for underwater video camera

  1. Underwater video enhancement using multi-camera super-resolution

    Science.gov (United States)

    Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.

    2017-12-01

    Image spatial resolution is critical in several fields, such as medicine, communications, satellite imaging, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that enhances the quality of underwater video sequences without significantly increasing computation. To compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results show that the proposed method enhances the objective quality of several underwater sequences with respect to basic fusion Super-Resolution algorithms, while avoiding the appearance of undesirable artifacts.
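    The two metrics named in the abstract are standard and easy to reproduce. As an illustration only (not the authors' code), a minimal NumPy sketch of PSNR and of a simplified single-window SSIM for 8-bit grayscale images:

    ```python
    import numpy as np

    def psnr(ref, test, peak=255.0):
        """Peak Signal-to-Noise Ratio in dB between two same-sized 8-bit images."""
        mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

    def ssim_global(x, y, peak=255.0, k1=0.01, k2=0.03):
        """Simplified global SSIM. The standard index averages this formula
        over local windows; the single-window form here is for illustration."""
        x, y = x.astype(float), y.astype(float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2
        return ((2 * mx * my + c1) * (2 * cov + c2)) / \
               ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
    ```

    For example, a uniform error of 10 grey levels gives PSNR = 10·log10(255²/100) ≈ 28.13 dB.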

  2. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  3. Student-Built Underwater Video and Data Capturing Device

    Science.gov (United States)

    Whitt, F.

    2016-12-01

    The Stockbridge High School Robotics Team's invention is a low-cost underwater video and data capturing device. The system can shoot time-lapse photography and/or video for up to 3 days at a time, and can be used in remote locations without changing batteries or adding external hard drives for data storage. The video capturing device has a unique base and mounting system which houses a Pi drive and a programmable Raspberry Pi with a camera module. The system is powered by two 12-volt batteries, which makes it easy for users to recharge after use. The data capturing device has the same base and mounting system as the underwater camera and consists of an Arduino with an SD card shield capable of collecting continuous temperature and pH readings underwater. The data are logged onto the SD card for easy access and recording. The device can reach depths of up to 100 meters while recording 36 hours of video on 1 terabyte of storage, and it features night-vision infrared light capabilities. The cost to build the invention is $500. The goal of the project was to provide a device that marine biologists, teachers, researchers and citizen scientists can easily use to capture photographic and water-quality data in marine environments over extended periods of time.
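    The data logger the team describes (a microcontroller polling temperature and pH probes and appending rows to an SD card) follows a simple poll-and-append loop. A hedged Python sketch of that pattern, with the sensor reads stubbed out as hypothetical callables rather than the team's Arduino code:

    ```python
    import csv
    import io
    import time

    def log_readings(read_temp, read_ph, writer, n, interval_s=0):
        """Poll the two (hypothetical) sensor callables n times and append
        one timestamped CSV row per poll, as an SD-card logger would."""
        for _ in range(n):
            writer.writerow([time.time(), read_temp(), read_ph()])
            if interval_s:
                time.sleep(interval_s)

    # Demo with stubbed sensors, writing to an in-memory buffer
    buf = io.StringIO()
    log_readings(lambda: 20.4, lambda: 8.1, csv.writer(buf), n=3)
    ```

    On real hardware the writer would wrap a file on the SD card and `interval_s` would set the sampling period.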

  4. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Science.gov (United States)

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.
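    The dominant refractive effect the review refers to, for a flat-port housing, follows Snell's law at each interface: a ray at angle θ in air bends toward the port normal in water, so an uncorrected in-air calibration misestimates the field of view. A small illustrative sketch (not taken from the paper):

    ```python
    import math

    def refracted_angle_deg(theta_deg, n1=1.0, n2=1.33):
        """Angle from the port normal of a ray after crossing from a medium
        of refractive index n1 into n2, by Snell's law: n1 sin θ1 = n2 sin θ2."""
        s = n1 * math.sin(math.radians(theta_deg)) / n2
        if abs(s) > 1:
            raise ValueError("total internal reflection")
        return math.degrees(math.asin(s))
    ```

    For example, a 45° in-air ray travels at only about 32.1° in water, which is why a flat port narrows the effective field of view by roughly the ratio of the refractive indices.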

  5. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Directory of Open Access Journals (Sweden)

    Mark Shortis

    2015-12-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  6. OceanVideoLab: A Tool for Exploring Underwater Video

    Science.gov (United States)

    Ferrini, V. L.; Morton, J. J.; Wiener, C.

    2016-02-01

    Video imagery acquired with underwater vehicles is an essential tool for characterizing seafloor ecosystems and seafloor geology. It is a fundamental component of ocean exploration that facilitates real-time operations, augments multidisciplinary scientific research, and holds tremendous potential for public outreach and engagement. Acquiring, documenting, managing, preserving and providing access to large volumes of video acquired with underwater vehicles presents a variety of data stewardship challenges to the oceanographic community. As a result, only a fraction of underwater video content collected with research submersibles is documented, discoverable and/or viewable online. With more than 1 billion users, YouTube offers infrastructure that can be leveraged to help address some of the challenges associated with sharing underwater video with a broad global audience. Anyone can post content to YouTube, and some oceanographic organizations, such as the Schmidt Ocean Institute, have begun live-streaming video directly from underwater vehicles. OceanVideoLab (oceanvideolab.org) was developed to help improve access to underwater video through simple annotation, browse functionality, and integration with related environmental data. Any underwater video that is publicly accessible on YouTube can be registered with OceanVideoLab by simply providing a URL. It is strongly recommended that a navigational file also be supplied to enable geo-referencing of observations. Once a video is registered, it can be viewed and annotated using a simple user interface that integrates observations with vehicle navigation data if provided. This interface includes an interactive map and a list of previous annotations that allows users to jump to times of specific observations in the video. Future enhancements to OceanVideoLab will include the deployment of a search interface, the development of an application program interface (API) that will drive the search and enable querying of

  7. Underwater television camera for monitoring inner side of pressure vessel

    International Nuclear Information System (INIS)

    Takayama, Kazuhiko.

    1997-01-01

    An underwater television support device, equipped with a rotatable and vertically movable underwater television camera, and a controlling device, which monitors the images of the reactor core photographed by the camera and controls the positions of the camera and the underwater light, are disposed on the upper lattice plate of a reactor pressure vessel. The two are electrically connected by a cable so that the inside of the reactor core can be observed rapidly with the underwater television camera. Reproducibility is highly satisfactory because the camera position and image information are efficiently concentrated during inspection and observation. As a result, the number of steps for periodical inspection can be reduced, shortening the days required for it. Since there is no need to withdraw fuel assemblies over a wide reactor core region, and the device can be used with the fuel assemblies left as they are in the reactor, it is suitable for the inspection of detectors for nuclear instrumentation. (N.H.)

  8. A trajectory observer for camera-based underwater motion measurements

    DEFF Research Database (Denmark)

    Berg, Tor; Jouffroy, Jerome; Johansen, Vegar

    This work deals with the issue of estimating the trajectory of a vehicle or object moving underwater based on camera measurements. The proposed approach consists of a diffusion-based trajectory observer (Jouffroy and Opderbecke, 2004) processing whole segments of a trajectory at a time. Additionally, the observer contains a Tikhonov regularizer for smoothing the estimates. Then, a method for including the camera measurements in an appropriate manner is proposed.

  9. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry provides new potential.

  10. Initial laboratory evaluation of color video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P L

    1991-01-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than identify an intruder. Monochrome cameras are adequate for that application and were selected over color cameras because of their greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Color information is useful for identification purposes, and color camera technology is rapidly changing. Thus, Sandia National Laboratories established an ongoing program to evaluate color solid-state cameras. Phase one resulted in the publication of a report titled "Initial Laboratory Evaluation of Color Video Cameras" (SAND-91-2579). It gave a brief discussion of imager chips, color cameras and monitors, described the camera selection, detailed traditional test parameters and procedures, and gave the results of the evaluation of twelve cameras. In phase two, six additional cameras were tested by the traditional methods, and all eighteen cameras were tested by newly developed methods. This report details both the traditional and newly developed test parameters and procedures, and gives the results of both evaluations.

  11. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry provides new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device is discussed. The number of panoramas is much higher than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to survey outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record an interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft PhotoScan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  12. Face identification in videos from mobile cameras

    NARCIS (Netherlands)

    Mu, Meiru; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2014-01-01

    It is still challenging to recognize faces reliably in videos from mobile cameras, although mature automatic face recognition technology for still images has been available for quite some time. Suppose we want to be alerted when suspects appear in the recording of a police body-cam; even a good face

  13. Baited remote underwater video system (BRUVs) survey of ...

    African Journals Online (AJOL)

    This is the first baited remote underwater video system (BRUVs) survey of the relative abundance, diversity and seasonal distribution of chondrichthyans in False Bay. Nineteen species from 11 families were recorded across 185 sites at between 4 and 49 m depth. Diversity was greatest in summer, on reefs and in shallow ...

  14. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used).
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to
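    In the unsaturated regime, this kind of end-to-end photometric calibration reduces to fitting instrumental magnitudes (−2.5·log10 of integrated counts) against catalog magnitudes of reference sources. A hedged sketch of that fit; the names and synthetic data here are illustrative, not the procedure described in the note:

    ```python
    import numpy as np

    def fit_zero_point(counts, catalog_mags):
        """Least-squares photometric zero point ZP such that
        catalog_mag ≈ -2.5*log10(counts) + ZP."""
        instrumental = -2.5 * np.log10(np.asarray(counts, float))
        return float(np.mean(np.asarray(catalog_mags, float) - instrumental))

    # Synthetic reference stars generated with a known zero point of 25.0
    mags = np.array([8.0, 9.5, 11.0, 12.5])
    counts = 10 ** (-0.4 * (mags - 25.0))
    ```

    With real data, the residuals of this fit also reveal where the camera's response departs from linearity, which is exactly where the end-to-end method takes over.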

  15. Jellyfish Identification Software for Underwater Laser Cameras (JTRACK)

    Directory of Open Access Journals (Sweden)

    Patrizio Mariani

    2018-02-01

    Jellyfish can form erratic blooms in response to seasonal and irregular changes in environmental conditions, often with large, transient effects on local ecosystem structure as well as on several sectors of the marine and maritime economy. Early warning systems able to detect conditions for jellyfish proliferation can enable management responses that mitigate such effects, benefiting local ecosystems and economies. We propose here the creation of a research team in response to the EU call for proposals under the European Maritime and Fisheries Fund, “Blue Labs: innovative solutions for maritime challenges”. The project will establish a BLUELAB team with a strong cross-sectorial component that will benefit from the expertise of researchers in IT and marine biology, computer vision and embedded systems, working in collaboration with industry and policy makers to develop an early warning system using a new underwater imaging system based on time-of-flight laser cameras. The camera will be combined with machine-learning algorithms allowing autonomous early detection of jellyfish life stages (e.g., polyp, ephyra and planula). The team will develop the system and the companion software and will demonstrate its applications under real-case conditions.

  16. Using underwater cameras to assess the effects of snorkeler and SCUBA diver presence on coral reef fish abundance, family richness, and species composition.

    Science.gov (United States)

    Dearden, P; Theberge, M; Yasué, M

    2010-04-01

    The results of underwater visual fish censuses (UVC) could be affected by fish changing their behavior in response to the snorkeler or diver conducting the survey. We used an underwater video camera to assess how fish abundance, family richness, and community composition were affected by the presence of snorkelers (n = 12) and self-contained underwater breathing apparatus (SCUBA) divers (n = 6) on a coral reef in Thailand. The total number of families, abundance of some fish families, and overall species composition showed significant differences before and during snorkeling disturbances. We did not detect significant and consistent changes to these parameters in the presence of a SCUBA diver; however, this could be a result of lower statistical power. We suggest that the use of a stationary video camera may help cross-check data that is collected through UVC to assess the true family composition and document the presence of rare and easily disturbed species.

  17. Deep-Sea Video Cameras Without Pressure Housings

    Science.gov (United States)

    Cunningham, Thomas

    2004-01-01

    Underwater video cameras of a proposed type (and, optionally, their light sources) would not be housed in pressure vessels. Conventional underwater cameras and their light sources are housed in pods that keep the contents dry and maintain interior pressures of about 1 atmosphere (≈0.1 MPa). Pods strong enough to withstand the pressures at great ocean depths are bulky, heavy, and expensive. Elimination of the pods would make it possible to build camera/light-source units that would be significantly smaller, lighter, and less expensive. The depth ratings of the proposed camera/light-source units would be essentially unlimited because the strengths of their housings would no longer be an issue. A camera according to the proposal would contain an active-pixel image sensor and readout circuits, all in the form of a single silicon-based complementary metal oxide/semiconductor (CMOS) integrated-circuit chip. As long as none of the circuitry and none of the electrical leads were exposed to seawater, which is electrically conductive, silicon integrated-circuit chips could withstand the hydrostatic pressure of even the deepest ocean. The pressure would change the semiconductor band gap by only a slight amount, not enough to degrade imaging performance significantly. Electrical contact with seawater would be prevented by potting the integrated-circuit chip in a transparent plastic case. The electrical leads for supplying power to the chip and extracting the video signal would also be potted, though not necessarily in the same transparent plastic. The hydrostatic pressure would tend to compress the plastic case and the chip equally on all sides; there would be no need for great strength because there would be no need to hold back high pressure on one side against low pressure on the other side. A light source suitable for use with the camera could consist of light-emitting diodes (LEDs). 
Like integrated-circuit chips, LEDs can withstand very large hydrostatic pressures. If

  18. Studying fish near ocean energy devices using underwater video

    Energy Technology Data Exchange (ETDEWEB)

    Matzner, Shari; Hull, Ryan E.; Harker-Klimes, Genevra EL; Cullinan, Valerie I.

    2017-09-18

    The effects of energy devices on fish populations are not well-understood, and studying the interactions of fish with tidal and instream turbines is challenging. To address this problem, we have evaluated algorithms to automatically detect fish in underwater video and propose a semi-automated method for ocean and river energy device ecological monitoring. The key contributions of this work are the demonstration of a background subtraction algorithm (ViBE) that detected 87% of human-identified fish events and is suitable for use in a real-time system to reduce data volume, and the demonstration of a statistical model to classify detections as fish or not fish that achieved a correct classification rate of 85% overall and 92% for detections larger than 5 pixels. Specific recommendations for underwater video acquisition to better facilitate automated processing are given. The recommendations will help energy developers put effective monitoring systems in place, and could lead to a standard approach that simplifies the monitoring effort and advances the scientific understanding of the ecological impacts of ocean and river energy devices.
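    The background-subtraction stage evaluated here can be summarized as: keep a bank of past pixel samples per location and flag a pixel as foreground when too few stored samples lie near its current value. A simplified NumPy sketch of that idea (not the evaluated implementation; the random sample-update step of full ViBE is omitted):

    ```python
    import numpy as np

    def init_samples(frame, n_samples=20, rng=None):
        """Seed a per-pixel bank of n background samples from the first
        grayscale frame, jittered slightly as ViBE-style initialisation does."""
        rng = rng or np.random.default_rng(0)
        noise = rng.integers(-10, 11, size=(n_samples,) + frame.shape)
        return np.clip(frame[None].astype(int) + noise, 0, 255)

    def segment(samples, frame, radius=20, min_matches=2):
        """Boolean mask, True where a pixel is foreground: fewer than
        min_matches stored samples are within `radius` grey levels."""
        dist = np.abs(samples - frame[None].astype(int))
        return (dist < radius).sum(axis=0) < min_matches
    ```

    A bright fish against a stable background then appears as a connected region of True pixels, which is the kind of event the detection-size statistics in the abstract refer to.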

  19. Intraoperative Video Production With a Head-Mounted Consumer Video Camera.

    Science.gov (United States)

    Avery, Matthew C

    2017-08-01

    The use of high-definition video in surgical education is becoming increasingly popular. Because of the availability of relatively inexpensive, consumer-grade video cameras, surgeons with minimal video production experience can produce high-quality surgical videos. A number of video capture methods are available, with varying degrees of production quality, economic constraint, and level of attention required from the operating surgeon. The accompanying video provides an overview of the advantages and disadvantages of several options and describes a technique for capturing intraoperative video with the use of a head-mounted, consumer video camera.

  20. Initial laboratory evaluation of color video cameras: Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  1. Detection of Visual Events in Underwater Video Using a Neuromorphic Saliency-based Attention System

    Science.gov (United States)

    Edgington, D. R.; Walther, D.; Cline, D. E.; Sherlock, R.; Salamy, K. A.; Wilson, A.; Koch, C.

    2003-12-01

    The Monterey Bay Aquarium Research Institute (MBARI) uses high-resolution video equipment on remotely operated vehicles (ROVs) to obtain quantitative data on the distribution and abundance of oceanic animals. High-quality video data supplant the traditional approach of assessing the kinds and numbers of animals in the oceanic water column by towing collection nets behind ships: tow nets are limited in spatial resolution and often destroy abundant gelatinous animals, resulting in species undersampling. Camera-based quantitative video transects (QVT) are taken through the ocean midwater, from 50 m to 4000 m, and provide high-resolution data at the scale of individual animals and their natural aggregation patterns. However, the current manual method of analyzing QVT video by trained scientists is labor intensive and poses a serious limitation on the amount of information that can be analyzed from ROV dives. Presented here is an automated system for detecting marine animals (events) visible in the videos. Automated detection is difficult due to the low contrast of many translucent animals and due to debris ("marine snow") cluttering the scene. Video frames are processed with an artificial intelligence attention selection algorithm that has proven a robust means of target detection in a variety of natural terrestrial scenes. The candidate locations identified by the attention selection module are tracked across video frames using linear Kalman filters. Typically, the occurrence of visible animals in the video footage is sparse in space and time, so a notion of "boring" video frames is developed: frames that do not contain any interesting candidate object for an animal. If objects can be tracked successfully over several frames, they are stored as potentially "interesting" events. Based on low-level properties, interesting events are
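    Tracking candidate locations across frames with linear Kalman filters, as described, amounts to alternating a constant-velocity prediction with a position-measurement update. A self-contained sketch under that standard model (an assumption for illustration, not MBARI's code):

    ```python
    import numpy as np

    def kalman_step(x, P, z, dt=1.0, q=1e-3, r=1.0):
        """One predict+update cycle of a constant-velocity 2D Kalman filter.
        State x = [px, py, vx, vy]; measurement z = [px, py]."""
        F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                      [0, 0, 1, 0], [0, 0, 0, 1]], float)   # motion model
        H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # observe position only
        Q, R = q * np.eye(4), r * np.eye(2)
        x, P = F @ x, F @ P @ F.T + Q                       # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                      # Kalman gain
        x = x + K @ (np.asarray(z, float) - H @ x)          # update
        P = (np.eye(4) - K @ H) @ P
        return x, P
    ```

    Fed the detections of one candidate animal frame by frame, the filter smooths the track and predicts where to look when the detection momentarily drops out.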

  2. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    Science.gov (United States)

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up for quantitative three-dimensional (3D) motion analysis in sport gesture studies and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. In contrast to traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding, since land and underwater cameras are mandatory. In particular, underwater camera calibration can be an issue affecting the reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.
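    The accuracy figure reported (mean error on a bar of known length) is straightforward to reproduce from reconstructed marker coordinates. A small sketch, with synthetic numbers standing in for real reconstructions:

    ```python
    import numpy as np

    def mean_length_error(p1, p2, true_len_mm):
        """Mean absolute deviation between reconstructed inter-marker
        distances and the bar's known length. p1, p2: (N, 3) arrays of the
        two markers' reconstructed positions over N bar placements."""
        d = np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float), axis=1)
        return float(np.abs(d - true_len_mm).mean())
    ```

    Evaluating this over many bar placements spread through the working volume is what turns a single calibration into a quoted accuracy such as "less than 2.5 mm".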

  3. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    Directory of Open Access Journals (Sweden)

    Gustavo R D Bernardina

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up for quantitative three-dimensional (3D) motion analysis in sport gesture studies and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. In contrast to traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding, since land and underwater cameras are mandatory. In particular, underwater camera calibration can be an issue affecting the reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.

  4. Remote high-definition rotating video enables fast spatial survey of marine underwater macrofauna and habitats.

    Science.gov (United States)

    Pelletier, Dominique; Leleu, Kévin; Mallet, Delphine; Mou-Tham, Gérard; Hervé, Gilles; Boureau, Matthieu; Guilpart, Nicolas

    2012-01-01

    Observing spatial and temporal variations of marine biodiversity using non-destructive techniques is central to understanding ecosystem resilience and to monitoring and assessing conservation strategies, e.g. Marine Protected Areas. Observations are generally obtained through Underwater Visual Censuses (UVC) conducted by divers. The problems inherent to the presence of divers have been discussed in several papers. Video techniques are increasingly used for observing underwater macrofauna and habitat. Most video techniques that do not need the presence of a diver use baited remote systems. In this paper, we present an original video technique which relies on an unbaited, remotely deployed rotating system including a high-definition camera. The system is set on the sea floor to record images, which are later analysed in the office to quantify biotic and abiotic sea-bottom cover and to identify and count fish species and other species such as marine turtles. The technique was extensively tested in a highly diversified coral reef ecosystem in the South Lagoon of New Caledonia, based on a protocol covering both protected and unprotected areas in the major lagoon habitats. The technique made it possible to detect and identify a large number of species, in particular fished species, which were not disturbed by the system. Habitat could easily be investigated through the images, and a large number of observations could be carried out per day at sea. This study showed the strong potential of this unobtrusive technique for observing both macrofauna and habitat. It offers a unique spatial coverage and can be implemented at sea at a reasonable cost by non-expert staff. As such, this technique is particularly interesting for investigating and monitoring coastal biodiversity in the light of current conservation challenges and increasing monitoring needs.

  5. Remote high-definition rotating video enables fast spatial survey of marine underwater macrofauna and habitats.

    Directory of Open Access Journals (Sweden)

    Dominique Pelletier

Full Text Available Observing spatial and temporal variations of marine biodiversity using non-destructive techniques is central for understanding ecosystem resilience, and for monitoring and assessing conservation strategies, e.g. Marine Protected Areas. Observations are generally obtained through Underwater Visual Censuses (UVC) conducted by divers. The problems inherent to the presence of divers have been discussed in several papers. Video techniques are increasingly used for observing underwater macrofauna and habitat. Most video techniques that do not need the presence of a diver use baited remote systems. In this paper, we present an original video technique which relies on a remote, unbaited rotating system including a high-definition camera. The system is set on the sea floor to record images. These are then analysed at the office to quantify biotic and abiotic sea bottom cover, and to identify and count fish species and other species such as marine turtles. The technique was extensively tested in a highly diversified coral reef ecosystem in the South Lagoon of New Caledonia, based on a protocol covering both protected and unprotected areas in major lagoon habitats. The technique enabled the detection and identification of a large number of species, in particular fished species, which were not disturbed by the system. Habitat could easily be investigated through the images. A large number of observations could be carried out per day at sea. This study showed the strong potential of this non-obtrusive technique for observing both macrofauna and habitat. It offers a unique spatial coverage and can be implemented at sea at a reasonable cost by non-expert staff. As such, this technique is particularly interesting for investigating and monitoring coastal biodiversity in the light of current conservation challenges and increasing monitoring needs.

  6. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be attributed to two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras, and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculation of the meteor's photometric flux within the camera bandpass without requiring any assumptions about its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05-0.10 mag in both filtered and unfiltered camera observations, with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
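
    The zero-point bookkeeping described above reduces to a short calculation: for each reference star, m_ref = zp - 2.5*log10(counts), so averaging m_ref + 2.5*log10(counts) over the stars gives the zero-point. A minimal sketch in Python; the catalog magnitudes and instrumental counts are invented for illustration, not values from the paper:

    ```python
    import math

    def zero_point(ref_mags, counts):
        """Estimate the photometric zero-point from m = zp - 2.5*log10(counts)."""
        zps = [m + 2.5 * math.log10(c) for m, c in zip(ref_mags, counts)]
        mean = sum(zps) / len(zps)
        spread = max(zps) - min(zps)  # crude check for star-to-star systematics
        return mean, spread

    # Hypothetical reference stars: catalog magnitude and measured counts
    mags = [5.2, 6.0, 6.8]
    counts = [12000.0, 5800.0, 2750.0]
    zp, spread = zero_point(mags, counts)

    # Apply the zero-point to a meteor detection with 900 counts
    meteor_mag = zp - 2.5 * math.log10(900.0)
    ```

    A small `spread` across reference stars is the sanity check that the per-star systematics stay within the quoted ~0.20 mag budget.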

  7. Whose Line Sound is it Anyway? Identifying the Vocalizer on Underwater Video by Localizing with a Hydrophone Array

    Directory of Open Access Journals (Sweden)

    Matthias Hoffmann-Kuhnt

    2016-11-01

    Full Text Available A new device that combined high-resolution (1080p wide-angle video and three channels of high-frequency acoustic recordings (at 500 kHz per channel in a portable underwater housing was designed and tested with wild bottlenose and spotted dolphins in the Bahamas. It consisted of three hydrophones, a GoPro camera, a small Fit PC, a set of custom preamplifiers and a high-frequency data acquisition board. Recordings were obtained to identify individual vocalizing animals through time-delay-of-arrival localizing in post-processing. The calculated source positions were then overlaid onto the video – providing the ability to identify the vocalizing animal on the recorded video. The new tool allowed for much clearer analysis of the acoustic behavior of cetaceans than was possible before.
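
    The time-delay-of-arrival localization described here can be illustrated for a single hydrophone pair: cross-correlate the two channels to find the lag, convert it to a delay at the 500 kHz sample rate, and map the delay to a bearing. The sketch below uses a synthetic click; the hydrophone spacing and sound speed are illustrative assumptions, not values from the paper:

    ```python
    import math

    def xcorr_lag(a, b, max_lag):
        """Return the lag (in samples) of b relative to a maximizing cross-correlation."""
        best_lag, best_val = 0, float("-inf")
        for lag in range(-max_lag, max_lag + 1):
            s = 0.0
            for i, x in enumerate(a):
                j = i + lag
                if 0 <= j < len(b):
                    s += x * b[j]
            if s > best_val:
                best_val, best_lag = s, lag
        return best_lag

    FS = 500_000.0   # sample rate (Hz), matching the 500 kHz channels
    C = 1500.0       # speed of sound in seawater (m/s), assumed
    D = 0.2          # hydrophone spacing (m), hypothetical

    # Synthetic click arriving 10 samples later on the second hydrophone
    click = [0, 0, 1, 3, 1, 0, -1, 0]
    a = click + [0] * 40
    b = [0] * 10 + click + [0] * 30

    lag = xcorr_lag(a, b, 20)
    tau = lag / FS                                          # delay in seconds
    bearing = math.degrees(math.asin(max(-1.0, min(1.0, C * tau / D))))
    ```

    With three hydrophones, two such delays constrain the source position in 2-D, which is what lets the calculated position be overlaid on the video frame.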

  8. Laser line scan underwater imaging by complementary metal-oxide-semiconductor camera

    Science.gov (United States)

    He, Zhiyi; Luo, Meixing; Song, Xiyu; Wang, Dundong; He, Ning

    2017-12-01

This work employs a complementary metal-oxide-semiconductor (CMOS) camera to acquire images in a scanning manner for laser line scan (LLS) underwater imaging, to alleviate the backscatter impact of seawater. Two operating features of the CMOS camera, namely the region of interest (ROI) and the rolling shutter, can be utilized to perform the image scan without the difficulty of translating the receiver above the target, as traditional LLS imaging systems do. Using the dynamically reconfigurable ROI of an industrial CMOS camera, we evenly divided the image into five subareas along the pixel rows and then scanned them by changing the ROI region automatically under synchronous illumination by the fan beams of the lasers. Another scanning method was explored using the rolling shutter operation of the CMOS camera. The fan-beam lasers were turned on/off to illuminate the narrow zones on the target in good correspondence to the exposure lines during the rolling procedure of the camera's electronic shutter. The frame synchronization between the image scan and the laser beam sweep may be achieved by either the strobe lighting output pulse or the external triggering pulse of the industrial camera. Comparison between the scanning and nonscanning images shows that the contrast of the underwater image can be improved by our LLS imaging techniques, with higher stability and feasibility than the mechanically controlled scanning method.
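
    The ROI-based scan can be sketched as follows: the pixel rows are split into five bands, one capture is taken per band while only that band's laser stripe is lit, and the scanned image is assembled from the in-band rows of each capture. This toy version uses plain nested lists in place of real camera frames:

    ```python
    def roi_bands(height, n_bands=5):
        """Split pixel rows into n contiguous ROI bands (start, stop), as the
        camera ROI would be reconfigured between captures."""
        base, extra = divmod(height, n_bands)
        bands, start = [], 0
        for k in range(n_bands):
            stop = start + base + (1 if k < extra else 0)
            bands.append((start, stop))
            start = stop
        return bands

    def stitch(captures, bands, height, width):
        """Assemble the scanned image: from each capture keep only the rows
        inside its ROI band (the zone lit by the synchronized laser fan)."""
        out = [[0] * width for _ in range(height)]
        for cap, (r0, r1) in zip(captures, bands):
            for r in range(r0, r1):
                out[r] = cap[r][:]
        return out

    H, W = 10, 4
    bands = roi_bands(H)
    # Hypothetical captures: frame k is filled with the value k+1
    captures = [[[k + 1] * W for _ in range(H)] for k in range(5)]
    img = stitch(captures, bands, H, W)
    ```

    Because each stitched row was exposed only while its own narrow stripe was lit, backscatter from the rest of the water column never reaches that part of the image, which is the point of the LLS geometry.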

  9. Voice Controlled Stereographic Video Camera System

    Science.gov (United States)

    Goode, Georgianna D.; Philips, Michael L.

    1989-09-01

    For several years various companies have been developing voice recognition software. Yet, there are few applications of voice control in the robotics field and virtually no examples of voice controlled three dimensional (3-D) systems. In late 1987 ARD developed a highly specialized, voice controlled 3-D vision system for use in remotely controlled, non-tethered robotic applications. The system was designed as an operator's aid and incorporates features thought to be necessary or helpful in remotely maneuvering a vehicle. Foremost is the three dimensionality of the operator's console display. An image that provides normal depth perception cues over a range of depths greatly increases the ease with which an operator can drive a vehicle and investigate its environment. The availability of both vocal and manual control of all system functions allows the operator to guide the system according to his personal preferences. The camera platform can be panned +/-178 degrees and tilted +/-30 degrees for a full range of view of the vehicle's environment. The cameras can be zoomed and focused for close inspection of distant objects, while retaining substantial stereo effect by increasing the separation between the cameras. There is a ranging and measurement function, implemented through a graphical cursor, which allows the operator to mark objects in a scene to determine their relative positions. This feature will be helpful in plotting a driving path. The image seen on the screen is overlaid with icons and digital readouts which provide information about the position of the camera platform, the range to the graphical cursor and the measurement results. The cursor's "range" is actually the distance from the cameras to the object on which the cursor is resting. Other such features are included in the system and described in subsequent sections of this paper.
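
    The ranging function described for the graphical cursor follows from the standard pinhole stereo model, Z = f*B/d, where f is the focal length in pixels, B the camera separation, and d the disparity at the cursor. A minimal sketch with hypothetical rig parameters (not from the paper):

    ```python
    def stereo_range(focal_px, baseline_m, disparity_px):
        """Range from the cameras to the cursor point: Z = f * B / d."""
        if disparity_px <= 0:
            raise ValueError("cursor point must have positive disparity")
        return focal_px * baseline_m / disparity_px

    # Hypothetical rig: 800 px focal length, cameras 0.12 m apart
    z_near = stereo_range(800, 0.12, 48)  # large disparity -> close object
    z_far = stereo_range(800, 0.12, 8)    # small disparity -> distant object
    ```

    Since Z is inversely proportional to d, widening the baseline B while zooming keeps the disparity, and hence the stereo effect, usable for distant objects, which is the behavior the paper describes.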

  10. Ball lightning observation: an objective video-camera analysis report

    OpenAIRE

    Sello, Stefano; Viviani, Paolo; Paganini, Enrico

    2011-01-01

    In this paper we describe a video-camera recording of a (probable) ball lightning event and both the related image and signal analyses for its photometric and dynamical characterization. The results strongly support the BL nature of the recorded luminous ball object and allow the researchers to have an objective and unique video document of a possible BL event for further analyses. Some general evaluations of the obtained results considering the proposed ball lightning models conclude the paper.

  11. CameraCast: flexible access to remote video sensors

    Science.gov (United States)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.

  12. Advances in pediatric gastroenterology: introducing video camera capsule endoscopy.

    Science.gov (United States)

    Siaw, Emmanuel O

    2006-04-01

    The video camera capsule endoscope is a gastrointestinal endoscope approved by the U.S. Food and Drug Administration in 2001 for use in diagnosing gastrointestinal disorders in adults. In 2003, the agency approved the device for use in children ages 10 and older, and the endoscope is currently in use at Arkansas Children's Hospital. A capsule camera, lens, battery, transmitter and antenna together record images of the small intestine as the endoscope makes its way through the bowel. The instrument is used with minimal risk to the patient while offering a high degree of accuracy in diagnosing small intestine disorders.

  13. Automated benthic counting of living and non-living components in Ngedarrak Reef, Palau via subsurface underwater video.

    Science.gov (United States)

    Marcos, Ma Shiela Angeli; David, Laura; Peñaflor, Eileen; Ticzon, Victor; Soriano, Maricor

    2008-10-01

We introduce an automated benthic counting system for rapid reef assessment that applies computer vision to subsurface underwater reef video. Video acquisition was executed by lowering a submersible bullet-type camera from a motor boat while moving across the reef area. A GPS and echo sounder were linked to the video recorder to record bathymetry and location points. Analysis of living and non-living components was implemented through image color and texture feature extraction from the reef video frames and classification via Linear Discriminant Analysis. Compared to common rapid reef assessment protocols, our system can perform fine-scale data acquisition and processing in one day. Reef video was acquired in Ngedarrak Reef, Koror, Republic of Palau. Overall classification success ranges from 60% to 77% for depths of 1 to 3 m. The development of an automated rapid reef classification system is most promising for reef studies that need fast and frequent data acquisition of percent cover of living and non-living components.
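
    The classification stage can be illustrated with a deliberately simplified stand-in: mean-color features per video patch and a nearest-class-mean classifier in place of the paper's color/texture features and Linear Discriminant Analysis. All training values below are invented:

    ```python
    def color_features(patch):
        """Mean R, G, B of an image patch (a list of (r, g, b) pixels); a
        minimal stand-in for the paper's color/texture feature vector."""
        n = len(patch)
        return [sum(p[c] for p in patch) / n for c in range(3)]

    def train_class_means(labeled_patches):
        """Per-class mean feature vector from labeled training patches."""
        means = {}
        for label, patches in labeled_patches.items():
            feats = [color_features(p) for p in patches]
            means[label] = [sum(f[i] for f in feats) / len(feats) for i in range(3)]
        return means

    def classify(patch, means):
        """Nearest-class-mean classifier (simplified substitute for LDA)."""
        f = color_features(patch)
        return min(means, key=lambda lb: sum((a - b) ** 2 for a, b in zip(f, means[lb])))

    # Toy training data: live coral skews warm, sand/rubble skews bright-neutral
    training = {
        "living": [[(120, 80, 60)] * 16, [(140, 90, 70)] * 16],
        "non-living": [[(200, 200, 190)] * 16, [(180, 185, 180)] * 16],
    }
    means = train_class_means(training)
    label = classify([(130, 85, 65)] * 16, means)
    ```

    A real LDA additionally projects the features onto the directions that best separate the classes; the nearest-mean rule above is the degenerate case that ignores within-class covariance.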

  14. Teacher training for using digital video camera in primary education

    Directory of Open Access Journals (Sweden)

    Pablo García Sempere

    2011-12-01

Full Text Available This paper shows the partial results of a research project carried out in primary schools, which evaluates the ability of teachers in the use of the digital video camera. The study took place in the province of Granada, Spain. Our purpose was to assess the level of knowledge, interest, difficulties and training needs, so as to improve teaching practice. The work has been done from a descriptive and eclectic approach. Quantitative (questionnaire) and qualitative (focus group) techniques have been used in this research. The information obtained shows that most of the teachers lack knowledge in the use of the video camera and digital editing. On the other hand, the majority agree on including initial and continuing training on this subject. Finally, the most important conclusions are presented.

  15. Teacher training for using digital video camera in primary education

    OpenAIRE

    Pablo García Sempere

    2011-01-01

This paper shows the partial results of a research project carried out in primary schools, which evaluates the ability of teachers in the use of the digital video camera. The study took place in the province of Granada, Spain. Our purpose was to assess the level of knowledge, interest, difficulties and training needs, so as to improve teaching practice. The work has been done from a descriptive and eclectic approach. Quantitative (questionnaire) and qualitative techniques (focus group) have been used in ...

  16. Optimal BRUVs (baited remote underwater video system) survey ...

    African Journals Online (AJOL)

    Marine protected areas (MPAs) play an important role in coastal conservation, but there is presently no uniformly applied methodology for monitoring the efficacy of coastal fish protection. Whereas underwater visual census and controlled angling surveys have been used, their skilled-labour requirements and environmental ...

  17. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    Science.gov (United States)

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department. This is meant to be an attempt to record the real point of view of the magnified vision of the surgeon, so as to make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with the GoPro ® 4 Session action cam (commercially available) and ten with our new prototype of head-mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro ® and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set the functionality is about 5 min for the GoPro ® and 7-8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro ® to highlight the surgical details. The present study showed that our prototype of video camera, compared with the GoPro ® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  18. Simultaneous Camera Path Optimization and Distraction Removal for Improving Amateur Video.

    Science.gov (United States)

    Zhang, Fang-Lue; Wang, Jue; Zhao, Han; Martin, Ralph R; Hu, Shi-Min

    2015-12-01

A major difference between amateur and professional video lies in the quality of camera paths. Previous work on video stabilization has considered how to improve amateur video by smoothing the camera path. In this paper, we show that additional changes to the camera path can further improve video aesthetics. Our new optimization method achieves multiple simultaneous goals: 1) stabilizing video content over short time scales; 2) ensuring simple and consistent camera paths over longer time scales; and 3) improving scene composition by automatically removing distractions, a common occurrence in amateur video. Our approach uses an L1 camera path optimization framework, extended to handle multiple constraints. Two passes of optimization are used to address both low-level and high-level constraints on the camera path. The experimental and user study results show that our approach outputs video that is perceptually better than the input, or the results of using stabilization only.

  19. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    Science.gov (United States)

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  20. Ground Validation Drop Camera Transect Points - St. Thomas/ St. John USVI - 2011

    Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Videos were collected between...

  1. Video astronomy on the go using video cameras with small telescopes

    CERN Document Server

    Ashley, Joseph

    2017-01-01

    Author Joseph Ashley explains video astronomy's many benefits in this comprehensive reference guide for amateurs. Video astronomy offers a wonderful way to see objects in far greater detail than is possible through an eyepiece, and the ability to use the modern, entry-level video camera to image deep space objects is a wonderful development for urban astronomers in particular, as it helps sidestep the issue of light pollution. The author addresses both the positive attributes of these cameras for deep space imaging as well as the limitations, such as amp glow. The equipment needed for imaging as well as how it is configured is identified with hook-up diagrams and photographs. Imaging techniques are discussed together with image processing (stacking and image enhancement). Video astronomy has evolved to offer great results and great ease of use, and both novices and more experienced amateurs can use this book to find the set-up that works best for them. Flexible and portable, they open up a whole new way...

  2. Underwater Communications for Video Surveillance Systems at 2.4 GHz

    Directory of Open Access Journals (Sweden)

    Sandra Sendra

    2016-10-01

Full Text Available Video surveillance is needed to control many activities performed in underwater environments. The use of wired media can be a problem, since material specially designed for underwater environments is very expensive. In order to transmit images and videos wirelessly under water, three main technologies can be used: acoustic waves, which do not provide high bandwidth; optical signals, which offer high transfer rates, although the effect of light dispersion in water severely penalizes the transmitted signals and therefore the maximum distance is very small; and electromagnetic (EM) waves, which can provide enough bandwidth for video delivery. In cases where the distance between transmitter and receiver is short, the use of EM waves is an interesting option, since they provide data transfer rates high enough to transmit high-resolution video. This paper presents a practical study of the behavior of EM waves at 2.4 GHz in freshwater underwater environments. First, we discuss the minimum requirements of a network to allow video delivery. From these results, we measure the maximum distance between nodes and the round trip time (RTT) value depending on several parameters such as data transfer rate, signal modulation, working frequency, and water temperature. The results are statistically analyzed to determine their relation. Finally, the EM waves' behavior is modeled by a set of equations. The results show that some combinations of working frequency, modulation, transfer rate and temperature offer better results than others. Our work shows that communication over short distances at high data transfer rates is feasible.

  3. Underwater Communications for Video Surveillance Systems at 2.4 GHz.

    Science.gov (United States)

    Sendra, Sandra; Lloret, Jaime; Jimenez, Jose Miguel; Rodrigues, Joel J P C

    2016-10-23

Video surveillance is needed to control many activities performed in underwater environments. The use of wired media can be a problem, since material specially designed for underwater environments is very expensive. In order to transmit images and videos wirelessly under water, three main technologies can be used: acoustic waves, which do not provide high bandwidth; optical signals, which offer high transfer rates, although the effect of light dispersion in water severely penalizes the transmitted signals and therefore the maximum distance is very small; and electromagnetic (EM) waves, which can provide enough bandwidth for video delivery. In cases where the distance between transmitter and receiver is short, the use of EM waves is an interesting option, since they provide data transfer rates high enough to transmit high-resolution video. This paper presents a practical study of the behavior of EM waves at 2.4 GHz in freshwater underwater environments. First, we discuss the minimum requirements of a network to allow video delivery. From these results, we measure the maximum distance between nodes and the round trip time (RTT) value depending on several parameters such as data transfer rate, signal modulation, working frequency, and water temperature. The results are statistically analyzed to determine their relation. Finally, the EM waves' behavior is modeled by a set of equations. The results show that some combinations of working frequency, modulation, transfer rate and temperature offer better results than others. Our work shows that communication over short distances at high data transfer rates is feasible.
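
    Whether a measured link can carry video can be checked with a rough bitrate estimate; the bits-per-pixel figure below is a generic rule of thumb for compressed video, assumed here for illustration rather than taken from the paper:

    ```python
    def required_bitrate_mbps(width, height, fps, bits_per_pixel=0.1):
        """Rough compressed-video bitrate estimate in Mbit/s; ~0.1 bit/pixel
        is a common ballpark for H.264-class codecs (an assumption)."""
        return width * height * fps * bits_per_pixel / 1e6

    def link_supports(measured_throughput_mbps, width, height, fps):
        """Does the measured link throughput cover the estimated video bitrate?"""
        return measured_throughput_mbps >= required_bitrate_mbps(width, height, fps)

    # 720p at 25 fps needs roughly 2.3 Mbit/s; a short-range 2.4 GHz link
    # measured at 5 Mbit/s would carry it, one measured at 1 Mbit/s would not.
    need = required_bitrate_mbps(1280, 720, 25)
    ```

    This is the kind of minimum-requirements check the paper performs before relating distance, modulation, and temperature to the achievable throughput.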

  4. Frequency Identification of Vibration Signals Using Video Camera Image Data

    Directory of Open Access Journals (Sweden)

    Chia-Hung Wu

    2012-10-01

Full Text Available This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of a vibration signal, but may introduce non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera, from, for instance, 0 to 256 levels, was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency of 60 Hz for inducing false modes, whereas that of the webcam is 7.8 Hz. Several factors were proven to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes are below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
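
    The non-physical modes mentioned above are aliases: a vibration above half the frame rate folds back to an apparent lower frequency, which a simple model can predict and exclude. A sketch of that folding rule:

    ```python
    def aliased_frequency(f_signal, f_sample):
        """Apparent frequency of a tone sampled at f_sample (folding about the
        Nyquist frequency f_sample/2); non-physical modes appear whenever
        f_signal exceeds f_sample/2."""
        f = f_signal % f_sample
        return min(f, f_sample - f)

    # A 50 Hz vibration filmed by a 30 fps webcam shows up as a 10 Hz false mode,
    # while an 8 Hz vibration (below Nyquist) is captured at its true frequency.
    false_mode = aliased_frequency(50, 30)
    true_mode = aliased_frequency(8, 30)
    ```

    Knowing the frame rate, candidate peaks in the measured spectrum can be screened against these predicted fold-back frequencies before being accepted as physical modes.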

  5. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    International Nuclear Information System (INIS)

    Werry, S.M.

    1995-01-01

This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure

  6. A comparison of camera trap and permanent recording video camera efficiency in wildlife underpasses.

    Science.gov (United States)

    Jumeau, Jonathan; Petrod, Lana; Handrich, Yves

    2017-09-01

In the current context of biodiversity loss through habitat fragmentation, the effectiveness of wildlife crossings, installed at great expense as compensatory measures, is of vital importance for ecological and socio-economic actors. The evaluation of these structures is directly impacted by the efficiency of monitoring tools (camera traps…), which are used to assess the effectiveness of these crossings by observing the animals that use them. The aim of this study was to quantify the efficiency of camera traps in a wildlife crossing evaluation. Six permanent recording video systems sharing the same field of view as six Reconyx HC600 camera traps installed in three wildlife underpasses were used to assess the exact proportion of missed events (an event being the presence of an animal within the field of view), and the error rate concerning underpass crossing behavior (defined as either Entry or Refusal). A sequence of photographs was triggered by either animals (true trigger) or artefacts (false trigger). We quantified the number of false triggers that had actually been caused by animals that were not visible on the images ("false" false triggers). Camera traps failed to record 43.6% of small mammal events (voles, mice, shrews, etc.) and 17% of medium-sized mammal events. The type of crossing behavior (Entry or Refusal) was incorrectly assessed in 40.1% of events, with a higher error rate for entries than for refusals. Among the 3.8% of false triggers, 85% were "false" false triggers. This study indicates a global underestimation of the effectiveness of wildlife crossings for small mammals. Means to improve the efficiency are discussed.
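
    Given the measured miss rates, raw camera-trap counts can be corrected upward: with 43.6% of events missed, the true count is the observed count divided by (1 - 0.436). A sketch of that adjustment:

    ```python
    def corrected_count(observed, miss_rate):
        """Estimate the true number of events from an observed count and a
        known proportion of missed events."""
        if not 0 <= miss_rate < 1:
            raise ValueError("miss_rate must be in [0, 1)")
        return observed / (1 - miss_rate)

    # With the paper's 43.6% miss rate for small mammals, 100 recorded events
    # suggest roughly 177 actual events; medium-sized mammals (17% missed)
    # need a much smaller correction.
    small_mammals = corrected_count(100, 0.436)
    medium_mammals = corrected_count(100, 0.17)
    ```

    This kind of correction factor is what turns the study's efficiency figures into usable estimates of actual underpass use.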

  7. Contact freezing observed with a high speed video camera

    Science.gov (United States)

    Hoffmann, Nadine; Koch, Michael; Kiselev, Alexei; Leisner, Thomas

    2017-04-01

Freezing of supercooled cloud droplets on collision with an ice nucleating particle (INP) has been considered one of the most effective heterogeneous freezing mechanisms. Potentially, it could play an important role in the rapid glaciation of a mixed-phase cloud, especially if coupled with an ice multiplication mechanism active at moderate subzero temperatures. The necessary condition for such coupling would be, among others, the presence of very efficient INPs capable of inducing ice nucleation of the supercooled drizzle droplets in the temperature range of -5°C to -20°C. Some mineral dust particles (K-feldspar) and biogenic INPs (Pseudomonas bacteria, birch pollen) have been recently identified as such very efficient INPs. However, when observed with a high speed video (HSV) camera, the contact nucleation induced by these two classes of INPs exhibits very different behavior. Whereas bacterial INPs can induce freezing within a millisecond after initial contact with supercooled water, birch pollen need much more time to initiate freezing. The mineral dust particles seem to induce ice nucleation faster than birch pollen but slower than bacterial INPs. In this contribution we show the HSV records of individual supercooled droplets suspended in an electrodynamic balance and colliding with airborne INPs of various types. The HSV camera is coupled with a long-working-distance microscope, allowing us to observe the contact nucleation of ice at very high spatial and temporal resolution. The average time needed to initiate freezing has been measured as a function of the INP species. This time does not necessarily correlate with the contact freezing efficiency of the ice nucleating particles. We discuss possible mechanisms explaining this behavior and potential implications for future ice nucleation research.

  8. Automatic Level Control for Video Cameras towards HDR Techniques

    Directory of Open Access Journals (Sweden)

de With, Peter H. N.

    2010-01-01

Full Text Available We give a comprehensive overview of the complete exposure processing chain for video cameras. For each step of the automatic exposure algorithm we discuss some classical solutions and propose improvements or new alternatives. We start by explaining exposure metering methods, describing the types of signals that are used as scene content descriptors as well as means to utilize these descriptors. We also discuss different exposure control types used for the control of the lens, the integration time of the sensor, and the gain, such as PID control and precalculated control based on the camera response function, and propose a new recursive control type that matches the underlying image formation model. Then, a description of the commonly used serial control strategy for lens, sensor exposure time, and gain is presented, followed by a proposal of a new parallel control solution that integrates well with the tone mapping and enhancement part of the image pipeline. The parallel control strategy enables faster and smoother control and facilitates optimally filling the dynamic range of the sensor to improve the SNR and image contrast, while avoiding signal clipping. This is achieved by the proposed special control modes used for better display and correct exposure of both low-dynamic-range and high-dynamic-range images. To overcome the inherent problems of the limited dynamic range of capturing devices we discuss a paradigm of multiple exposure techniques. Using these techniques we can enable correct rendering of a difficult class of high-dynamic-range input scenes. However, multiple exposure techniques bring several challenges, especially in the presence of motion and artificial light sources such as fluorescent lights. In particular, false colors and light-flickering problems are described. After briefly discussing some known possible solutions for the motion problem, we focus on solving the fluorescent-light problem. Thereby, we propose an algorithm for
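
    The PID-style exposure control mentioned above can be sketched as a loop that drives the sensor integration time toward a target mean luma. All gains, limits, and the linear scene response below are illustrative assumptions, not the paper's recursive controller:

    ```python
    class PIDExposure:
        """Minimal PID loop driving integration time toward a target mean luma."""

        def __init__(self, kp=0.02, ki=0.005, kd=0.0, t_min=0.1, t_max=33.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.t_min, self.t_max = t_min, t_max   # exposure limits (ms)
            self.integral = 0.0
            self.prev_err = 0.0

        def step(self, exposure_ms, mean_luma, target_luma=110.0):
            err = target_luma - mean_luma
            self.integral += err
            deriv = err - self.prev_err
            self.prev_err = err
            exposure_ms += self.kp * err + self.ki * self.integral + self.kd * deriv
            # Clamp to the sensor's usable exposure range to avoid clipping
            return max(self.t_min, min(self.t_max, exposure_ms))

    # Toy scene model: mean luma proportional to exposure time (hypothetical)
    ctrl = PIDExposure()
    t = 2.0
    for _ in range(200):
        luma = 20.0 * t
        t = ctrl.step(t, luma)
    ```

    The loop settles where the error vanishes (here at 5.5 ms for a target luma of 110); the paper's recursive controller instead exploits the image formation model directly to converge in fewer steps.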

  9. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    Directory of Open Access Journals (Sweden)

    Semi Jeon

    2017-02-01

    Full Text Available Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality for various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems.

  10. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    Science.gov (United States)

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality for various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622
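The idea of smoothing a camera path to reduce its temporal total variation can be illustrated with a toy example. The sketch below uses a simple box filter rather than the paper's l1-optimized path, purely to show how TV is measured and how smoothing yields per-frame correction offsets:

```python
import numpy as np

def total_variation(path):
    """Temporal total variation: sum of absolute frame-to-frame changes."""
    return float(np.sum(np.abs(np.diff(path))))

def smooth_path(path, radius=5):
    """Box-filter smoothing of a 1D camera-path signal (e.g. x-translation)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(path, radius, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

rng = np.random.default_rng(0)
intended = np.linspace(0.0, 100.0, 200)       # smooth intentional pan
shaky = intended + rng.normal(0.0, 2.0, 200)  # hand-shake jitter per frame
stable = smooth_path(shaky)
corrections = stable - shaky                  # per-frame warp offsets to apply
```

In a full stabilizer each `corrections[i]` would parameterize the warp (here a pure translation) applied when rendering frame i.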

  11. Acceptance/operational test procedure 241-AN-107 Video Camera System

    International Nuclear Information System (INIS)

    Pedersen, L.T.

    1994-01-01

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights

  12. UHD Video Transmission over Bi-Directional Underwater Wireless Optical Communication

    KAUST Repository

    Al-Halafi, Abdullah

    2018-04-02

    In this paper, we experimentally demonstrate for the first time a bi-directional underwater wireless optical communication system that is capable of transmitting an ultra high definition real-time video using a downlink channel while simultaneously receiving the feedback messages on the uplink channel. The links extend up to 4.5 m using QPSK, 16-QAM and 64-QAM modulations. The system is built using software defined platforms connected to TO-9 packaged pigtailed 520 nm directly modulated green laser diode (LD) with 1.2 GHz bandwidth as the optical transmitter for video streaming on the downlink, and an avalanche photodiode (APD) module as the downlink receiver. The uplink channel is connected to another pigtailed 450 nm directly modulated blue LD with 1.2 GHz bandwidth as the optical uplink transmitter for the feedback channel, and to a second APD as the uplink receiver. We perform laboratory experiments on different water types. The measured throughput is 15 Mbps for QPSK, and 30 Mbps for both 16-QAM and 64-QAM. We evaluate the quality of the received live video streams using Peak Signal-to-Noise Ratio and achieve values up to 16 dB for 64-QAM when streaming UHD video in harbor II water and 22 dB in clear ocean.
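The PSNR figure used above to score the received video stream is straightforward to compute per frame. A minimal numpy version, assuming 8-bit pixels (the function name is ours):

```python
import numpy as np

def psnr(reference, received, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two frames of equal shape."""
    ref = reference.astype(np.float64)
    rec = received.astype(np.float64)
    mse = np.mean((ref - rec) ** 2)
    if mse == 0.0:
        return float("inf")   # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Averaging this over the frames of the decoded stream against the transmitted source gives the kind of quality numbers (16-22 dB) reported in the abstract.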

  13. Feasibility of Using Video Camera for Automated Enforcement on Red-Light Running and Managed Lanes.

    Science.gov (United States)

    2009-12-25

    The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and HOV occupancy requirement using video cameras in Nevada. This objective was a...

  14. Feasibility of Using Video Cameras for Automated Enforcement on Red-Light Running and Managed Lanes.

    Science.gov (United States)

    2009-12-01

    The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and high occupancy vehicle (HOV) occupancy requirement using video cameras in Nev...

  15. Development of a 3D Flash LADAR Video Camera for Entry, Descent and Landing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced Scientific Concepts, Inc. (ASC) has developed a 128 x 128 frame, 3D Flash LADAR video camera capable of a 30 Hz frame rate. Because Flash LADAR captures an...

  16. Development of a 3D Flash LADAR Video Camera for Entry, Descent, and Landing, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced Scientific Concepts, Inc. (ASC) has developed a 128 x 128 frame, 3D Flash LADAR video camera which produces 3-D point clouds at 30 Hz. Flash LADAR captures...

  17. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1995-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of applied tanks. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser.

  18. NOAA Polyline Shapefile - Drop Camera transects, US Caribbean Virgin Passage and St. John Shelf - Project NF-03-10-USVI-HAB - (2010), UTM 20N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  19. NOAA Shapefile - Drop Camera Transects Lines, USVI 2011, Seafloor Characterization of the US Caribbean - Nancy Foster - NF-11-1 (2011), UTM 20N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  20. NOAA Point Shapefile - Drop Camera transects, US Caribbean - Western Puerto Rico - Project NF-07-06-USVI-HAB - (2007), UTM 19N NAD83 (NCEI Accession 0131853)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a drop camera system. Photos and/or video were...

  1. NOAA Point Shapefile - Drop Camera transects, US Caribbean - Western Puerto Rico - Project NF-07-06-USVI-HAB - (2007), UTM 19N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a drop camera system. Photos and/or video were...

  2. NOAA Polyline Shapefile - Drop Camera transects, US Caribbean - Western Puerto Rico - Project NF-07-06-USVI-HAB - (2007), UTM 19N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a drop camera system. Photos and/or video were...

  3. NOAA Polyline Shapefile - Drop Camera transects, US Caribbean - Western Puerto Rico - Project NF-07-06-USVI-HAB - (2007), UTM 19N NAD83 (NCEI Accession 0131853)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a drop camera system. Photos and/or video were...

  4. Digital video technology and production 101: lights, camera, action.

    Science.gov (United States)

    Elliot, Diane L; Goldberg, Linn; Goldberg, Michael J

    2014-01-01

    Videos are powerful tools for enhancing the reach and effectiveness of health promotion programs. They can be used for program promotion and recruitment, for training program implementation staff/volunteers, and as elements of an intervention. Although certain brief videos may be produced without technical assistance, others often require collaboration and contracting with professional videographers. To get practitioners started and to facilitate interactions with professional videographers, this Tool includes a guide to the jargon of video production and suggestions for how to integrate videos into health education and promotion work. For each type of video, production principles and issues to consider when working with a professional videographer are provided. The Tool also includes links to examples in each category of video applications to health promotion.

  5. video115_0403 -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  6. video114_0402b -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  7. video114_0402c -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  8. NOAA Shapefile - Drop Camera Transects Lines, USVI 2011, Seafloor Characterization of the US Caribbean - Nancy Foster - NF-11-1 (2011), UTM 20N NAD83 (NCEI Accession 0131858)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  9. NOAA Line Shapefile- Locations of Phantom S2 ROV Underwater Video Transects, US Virgin Islands, Project NF-05-05, 2005, UTM 20N WGS84

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a line shapefile showing the trackline of various Remotely Operated Vehicle (ROV) underwater video transects in the US Virgin Islands. NOAA's...

  10. NOAA Line Shapefile- Locations of Phantom S2 ROV Underwater Video Transects, US Virgin Islands, Project NF-06-03, 2006, UTM 20N WGS84

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a line shapefile showing the trackline of various Remotely Operated Vehicle (ROV) underwater video transects in the US Virgin Islands and...

  11. Using underwater video to evaluate the performance of the Fukui trap as a mitigation tool for the invasive European green crab (Carcinus maenas) in Newfoundland, Canada

    Science.gov (United States)

    McKenzie, Cynthia H.; Best, Kiley; Zargarpour, Nicola; Favaro, Brett

    2018-01-01

    The European green crab (Carcinus maenas) is a destructive marine invader that was first discovered in Newfoundland waters in 2007 and has since become established in nearshore ecosystems on the south and west coast of the island. Targeted fishing programs aimed at removing green crabs from invaded Newfoundland ecosystems use Fukui traps, but the capture efficiency of these traps has not been previously assessed. We assessed Fukui traps using in situ observation with underwater video cameras as they actively fished for green crabs. From these videos, we recorded the number of green crabs that approached the trap, the outcome of each entry attempt (success or failure), and the number of exits from the trap. Across eight videos, we observed 1,226 green crab entry attempts, with only a 16% rate of success from these attempts. Based on these observations we believe there is scope to improve the performance of the Fukui trap through modifications in order to achieve a higher catch per unit effort (CPUE), maximizing trap usage for mitigation. Ultimately, a more efficient Fukui trap will help to control green crab populations in order to preserve the function and integrity of ecosystems invaded by the green crab. PMID:29340237

  12. Real-time pedestrian detection with the videos of car camera

    Directory of Open Access Journals (Sweden)

    Yunling Zhang

    2015-12-01

    Full Text Available Pedestrians in the vehicle path are in danger of being hit, thus causing severe injury to pedestrians and vehicle occupants. Therefore, real-time pedestrian detection with the video of a vehicle-mounted camera is of great significance to vehicle–pedestrian collision warning and the traffic safety of self-driving cars. In this article, a real-time scheme was proposed based on integral channel features and a graphics processing unit. The proposed method does not need to resize the input image. Moreover, the computationally expensive convolution of the detectors and the input image was converted into the dot product of two larger matrices, which can be computed effectively using a graphics processing unit. The experiments showed that the proposed method could be employed to detect pedestrians in the video of a car camera at 20+ frames per second with acceptable error rates. Thus, it can be applied in real-time detection tasks with the videos of car cameras.
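The trick of recasting detector convolution as one large matrix product (commonly called im2col) can be sketched as follows; the function names are ours, not from the paper:

```python
import numpy as np

def im2col(image, kh, kw):
    """Unroll every kh x kw patch of a 2D image into one row of a matrix."""
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    cols = np.empty((out_h * out_w, kh * kw))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = image[i:i + kh, j:j + kw].ravel()
    return cols

def conv_as_matmul(image, kernel):
    """'Valid' cross-correlation of image with kernel as one matrix product."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    return (im2col(image, kh, kw) @ kernel.ravel()).reshape(out_h, out_w)
```

The single large matrix product maps naturally onto GPU BLAS kernels, which is what makes the approach fast in practice.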

  13. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    Science.gov (United States)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  14. Camera Networks The Acquisition and Analysis of Videos over Wide Areas

    CERN Document Server

    Roy-Chowdhury, Amit K

    2012-01-01

    As networks of video cameras are installed in many applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process. Some of them are listed below: - Traditional computer vision challenges in tracking and recognition, robustness to pose, illumination, occlusion, clutter, recognition of objects, and activities; - Aggregating local information for wide

  15. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Full Text Available Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structure play a substantial role in the present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and cameras themselves. This algorithm will be subsequently implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that take account of their mutual positioning and compatibility of tasks. The project objective is to develop the principal elements of the algorithm of recognition of a moving object to be detected by several cameras. The image obtained by different cameras will be processed. Parameters of motion are to be identified to develop a table of possible options of routes. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists in the assessment of the degree of complexity of an algorithm of camera placement designated for identification of cases of inaccurate algorithm implementation, as well as in the formulation of supplementary requirements and input data by means of intercrossing sectors covered by neighbouring cameras. The project also contemplates identification of potential problems in the course of development of a physical security and monitoring system at the stage of the project design development and testing. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises that have irregular dimensions. The
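A core primitive of any such camera-placement algorithm is testing whether a point is covered by a camera's field of view, and by how many cameras at once. A minimal 2D sketch (our own simplification, not the authors' implementation):

```python
import math

def sees(camera, point):
    """camera = (x, y, heading_rad, half_fov_rad, max_range).
    True if `point` lies within range and inside the angular sector."""
    cx, cy, heading, half_fov, max_range = camera
    dx, dy = point[0] - cx, point[1] - cy
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.atan2(dy, dx)
    # wrap the angular difference into [-pi, pi)
    diff = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_fov

def coverage(cameras, point):
    """Number of cameras observing `point`; >= 2 enables cross-camera tracking."""
    return sum(sees(c, point) for c in cameras)
```

A placement optimizer would evaluate `coverage` over a grid of points in the building plan, adding occlusion tests for walls, to verify that the sectors of neighbouring cameras intercross as required.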

  16. Using hand-held point and shoot video cameras in clinical education.

    Science.gov (United States)

    Stoten, Sharon

    2011-02-01

    Clinical educators are challenged to design and implement creative instructional strategies to provide employees with optimal clinical practice learning opportunities. Using hand-held video cameras to capture patient encounters or skills demonstrations involves employees in active learning and can increase dialogue between employees and clinical educators. The video that is created also can be used for evaluation and feedback. Hands-on experiences may energize employees with different talents and styles of learning. Copyright 2011, SLACK Incorporated.

  17. GoPro Hero Cameras for Creation of a Three-Dimensional, Educational, Neurointerventional Video.

    Science.gov (United States)

    Park, Min S; Brock, Andrea; Mortimer, Vance; Taussky, Philipp; Couldwell, William T; Quigley, Edward

    2017-10-01

    Neurointerventional education relies on an apprenticeship model, with the trainee observing and participating in procedures with the guidance of a mentor. While educational videos are becoming prevalent in surgical cases, there is a dearth of comparable educational material for trainees in neurointerventional programs. We sought to create a high-quality, three-dimensional video of a routine diagnostic cerebral angiogram for use as an educational tool. A diagnostic cerebral angiogram was recorded using two GoPro HERO 3+ cameras with the Dual HERO System to capture the proceduralist's hands during the case. This video was edited with recordings from the video monitors to create a real-time three-dimensional video of both the actions of the neurointerventionalist and the resulting wire/catheter movements. The final edited video, in either two or three dimensions, can serve as another instructional tool for the training of residents and/or fellows. Additional videos can be created in a similar fashion of more complicated neurointerventional cases. The GoPro HERO 3+ camera and Dual HERO System can be used to create educational videos of neurointerventional procedures.

  18. Acceptance/operational test procedure 101-AW tank camera purge system and 101-AW video camera system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1994-01-01

    This procedure will document the satisfactory operation of the 101-AW Tank Camera Purge System (CPS) and the 101-AW Video Camera System. The safety interlock which shuts down all the electronics inside the 101-AW vapor space, during loss of purge pressure, will be in place and tested to ensure reliable performance. This procedure is separated into four sections. Section 6.1 is performed in the 306 building prior to delivery to the 200 East Tank Farms and involves leak checking all fittings on the 101-AW Purge Panel for leakage using a Snoop solution and resolving the leakage. Section 7.1 verifies that PR-1, the regulator which maintains a positive pressure within the volume (cameras and pneumatic lines), is properly set. In addition the green light (PRESSURIZED) (located on the Purge Control Panel) is verified to turn on above 10 in. w.g. and after the time delay (TDR) has timed out. Section 7.2 verifies that the purge cycle functions properly, the red light (PURGE ON) comes on, and that the correct flowrate is obtained to meet the requirements of the National Fire Protection Association. Section 7.3 verifies that the pan and tilt, camera, associated controls and components operate correctly. This section also verifies that the safety interlock system operates correctly during loss of purge pressure. During the loss of purge operation the illumination of the amber light (PURGE FAILED) will be verified

  19. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    Science.gov (United States)

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
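The spinning-wheel illusion follows directly from frequency folding: any rotation rate outside the Nyquist band [-fps/2, fps/2) aliases back into it. A one-line model (assuming a single marked point on the wheel, ignoring spoke symmetry):

```python
def perceived_rotation(true_hz, fps):
    """Apparent rotation rate (rev/s) of a marked wheel filmed at `fps`.
    Rates fold into the Nyquist band [-fps/2, fps/2); a negative result
    means the wheel appears to spin backwards."""
    return (true_hz + fps / 2) % fps - fps / 2
```

For example, a wheel turning at 28 rev/s filmed at 30 frames/s appears to rotate backwards at 2 rev/s, and at exactly 30 rev/s it appears stationary.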

  20. Optimization of radiation sensors for a passive terahertz video camera for security applications

    NARCIS (Netherlands)

    Zieger, G.J.M.

    2014-01-01

    A passive terahertz video camera allows for fast security screenings from distances of several meters. It avoids irradiation or the impressions of nakedness, which oftentimes cause embarrassment and trepidation of the concerned persons. This work describes the optimization of highly sensitive

  1. Localization of cask and plug remote handling system in ITER using multiple video cameras

    International Nuclear Information System (INIS)

    Ferreira, João; Vale, Alberto; Ribeiro, Isabel

    2013-01-01

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a 1:25 scale mock-up of the divertor level of the Tokamak building.

  2. Video content analysis on body-worn cameras for retrospective investigation

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; ter Haar, Frank B.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Burghouts, Gertjan J.; Wijn, Remco; van den Broek, Sebastiaan P.; van Rest, Jeroen H. C.

    2015-10-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals' memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.

  3. A review of techniques for the identification and measurement of fish in underwater stereo-video image sequences

    Science.gov (United States)

    Shortis, Mark R.; Ravanbakskh, Mehdi; Shaifat, Faisal; Harvey, Euan S.; Mian, Ajmal; Seager, James W.; Culverhouse, Philip F.; Cline, Danelle E.; Edgington, Duane R.

    2013-04-01

    Underwater stereo-video measurement systems are used widely for counting and measuring fish in aquaculture, fisheries and conservation management. To determine population counts, spatial or temporal frequencies, and age or weight distributions, snout to fork length measurements are captured from the video sequences, most commonly using a point and click process by a human operator. Current research aims to automate the measurement and counting task in order to improve the efficiency of the process and expand the use of stereo-video systems within marine science. A fully automated process will require the detection and identification of candidates for measurement, followed by the snout to fork length measurement, as well as the counting and tracking of fish. This paper presents a review of the techniques used for the detection, identification, measurement, counting and tracking of fish in underwater stereo-video image sequences, including consideration of the changing body shape. The review will analyse the most commonly used approaches, leading to an evaluation of the techniques most likely to be a general solution to the complete process of detection, identification, measurement, counting and tracking.
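Once the snout and fork points are matched in both views, the length measurement reduces to triangulating two 3D points and taking their distance. A sketch for an idealized rectified, parallel-axis stereo rig (a simplification of real systems, which also calibrate lens distortion and relative camera orientation):

```python
import math

def triangulate(x_left, x_right, y, focal_px, baseline_m):
    """Rectified, parallel-axis stereo: recover (X, Y, Z) in metres from the
    left/right x pixel coordinates of one matched point (same image row y)."""
    disparity = x_left - x_right            # must be > 0 for a valid match
    Z = focal_px * baseline_m / disparity
    return (x_left * Z / focal_px, y * Z / focal_px, Z)

def snout_fork_length(snout_l, snout_r, fork_l, fork_r, focal_px, baseline_m):
    """Fish length as the distance between the triangulated snout and fork.
    Each argument is an (x, y) pixel coordinate in the left or right image."""
    p1 = triangulate(snout_l[0], snout_r[0], snout_l[1], focal_px, baseline_m)
    p2 = triangulate(fork_l[0], fork_r[0], fork_l[1], focal_px, baseline_m)
    return math.dist(p1, p2)
```

Automating the pipeline reviewed above amounts to replacing the human point-and-click step with detected and tracked snout/fork landmarks feeding this measurement.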

  4. An Alignment Method for the Integration of Underwater 3D Data Captured by a Stereovision System and an Acoustic Camera.

    Science.gov (United States)

    Lagudi, Antonio; Bianco, Gianfranco; Muzzupappa, Maurizio; Bruno, Fabio

    2016-04-14

    The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration method for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimation of the unknown geometric transformation is obtained by a registration of the two 3D point clouds, but it ends up being strongly affected by noise and data dispersion. A robust and optimal estimation is obtained by a statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera.
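A standard closed form for the first estimate of a rigid transformation between two sets of corresponding 3D points is the Kabsch/Umeyama solution; the sketch below shows the general method, not necessarily the authors' exact implementation:

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rotation R and translation t with R @ A[i] + t ≈ B[i],
    for two N x 3 arrays of corresponding points (Kabsch/Umeyama)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])              # guard against reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```

Running this on each rig pose and then statistically aggregating the resulting (R, t) pairs, as the abstract describes, suppresses the noise that corrupts any single-pose estimate.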

  5. An Alignment Method for the Integration of Underwater 3D Data Captured by a Stereovision System and an Acoustic Camera

    Directory of Open Access Journals (Sweden)

    Antonio Lagudi

    2016-04-01

    Full Text Available The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimation of the unknown geometric transformation is obtained by a registration of the two 3D point clouds, but it ends up being strongly affected by noise and data dispersion. A robust and optimal estimation is obtained by a statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera.

  6. Comparing relative abundance, lengths, and habitat of temperate reef fishes using simultaneous underwater visual census, video, and trap sampling

    KAUST Repository

    Bacheler, NM

    2017-04-28

    Unbiased counts of individuals or species are often impossible given the prevalence of cryptic or mobile species. We used 77 simultaneous multi-gear deployments to make inferences about relative abundance, diversity, length composition, and habitat of the reef fish community along the southeastern US Atlantic coast. In total, 117 taxa were observed by underwater visual census (UVC), stationary video, and chevron fish traps, with more taxa being observed by UVC (100) than video (82) or traps (20). Frequency of occurrence of focal species was similar among all sampling approaches for tomtate Haemulon aurolineatum and black sea bass Centropristis striata, higher for UVC and video compared to traps for red snapper Lutjanus campechanus, vermilion snapper Rhomboplites aurorubens, and gray triggerfish Balistes capriscus, and higher for UVC compared to video or traps for gray snapper L. griseus and lionfish Pterois spp. For 6 of 7 focal species, correlations of relative abundance among gears were strongest between UVC and video, but there was substantial variability among species. The number of recorded species between UVC and video was correlated (ρ = 0.59), but relationships between traps and the other 2 methods were weaker. Lengths of fish visually estimated by UVC were similar to lengths of fish caught in traps, as were habitat characterizations from UVC and video. No gear provided a complete census for any species in our study, suggesting that analytical methods accounting for imperfect detection are necessary to make unbiased inferences about fish abundance.
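
    The between-gear correlations reported (e.g. ρ = 0.59 between UVC and video species counts) are rank correlations; Spearman's ρ can be sketched in pure Python on made-up count data (the counts below are illustrative, not from the study):

```python
def rank(xs):
    """Ranks of xs (1-based), with tied values given their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                          # extend the tie group
        avg = (i + j) / 2 + 1               # average rank of the group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical per-deployment counts of one species from two gears
uvc   = [12, 5, 30, 2, 18, 9]
video = [10, 6, 25, 1, 20, 7]
rho = spearman(uvc, video)
```

    Here the two gears rank the deployments identically, so ρ = 1; real survey data, as the abstract notes, correlate far less perfectly.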

  7. Algorithms for the automatic identification of MARFEs and UFOs in JET database of visible camera videos

    International Nuclear Information System (INIS)

    Murari, A.; Camplani, M.; Cannas, B.; Usai, P.; Mazon, D.; Delaunay, F.

    2010-01-01

    MARFE instabilities and UFOs leave clear signatures in JET fast visible camera videos. Given the potentially harmful consequences of these events, particularly as triggers of disruptions, it would be important to have the means of detecting them automatically. In this paper, the results of various algorithms to automatically identify MARFEs and UFOs in JET visible videos are reported. The objective is to retrieve the videos which have captured these events by exploring the whole JET database of images, as a preliminary step towards the development of real-time identifiers in the future. For the detection of MARFEs, a complete identifier has been finalized, using morphological operators and Hu moments. The final algorithm manages to identify the videos with MARFEs with a success rate exceeding 80%. Due to the lack of complete statistics of examples, the UFO identifier is less developed, but a preliminary code can detect UFOs quite reliably. (authors)
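
    Hu moments, used here for MARFE identification, are image-moment combinations invariant to translation (and, for the full set of seven, to scale and rotation). A sketch of the first invariant on a toy image, illustrating the translation invariance rather than the JET detection pipeline itself:

```python
def hu_first(img):
    """First Hu invariant, eta20 + eta02, of a 2D grayscale image
    given as a list of rows of pixel intensities."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    cx, cy = m10 / m00, m01 / m00           # intensity centroid
    mu20 = mu02 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            mu20 += (x - cx) ** 2 * v       # central moments of order 2
            mu02 += (y - cy) ** 2 * v
    # Normalized central moments: eta_pq = mu_pq / m00^2 for p + q = 2
    return mu20 / m00 ** 2 + mu02 / m00 ** 2

# The same blob in two positions gives the same invariant
img = [[0, 0, 0, 0, 0],
       [0, 1, 2, 0, 0],
       [0, 3, 4, 0, 0],
       [0, 0, 0, 0, 0]]
shifted = [[0, 0, 0, 0, 0],
           [0, 0, 0, 1, 2],
           [0, 0, 0, 3, 4],
           [0, 0, 0, 0, 0]]
h1, h2 = hu_first(img), hu_first(shifted)
```

    A classifier can thus compare the shape of a candidate bright region against reference MARFE shapes regardless of where in the frame the region appears.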

  8. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
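
    Block-match motion estimation of the kind used to speed up the recovery can be sketched as an exhaustive search minimizing the sum of absolute differences (SAD) over a small window; a toy pure-Python illustration, not the authors' implementation:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
                          for a, b in zip(ra, rb))

def block_match(ref, cur, bx, by, bs, search):
    """Motion vector of the bs x bs block at (bx, by) in frame `cur`,
    found by exhaustive search within +/-`search` pixels of frame `ref`."""
    block = [row[bx:bx + bs] for row in cur[by:by + bs]]
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + bs > len(ref) or x + bs > len(ref[0]):
                continue                    # candidate falls off the frame
            cand = [row[x:x + bs] for row in ref[y:y + bs]]
            cost = sad(block, cand)
            if best is None or cost < best:
                best, best_mv = cost, (dx, dy)
    return best_mv

# A 2x2 bright patch shifted right by 1 pixel between frames
ref = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
ref[2][2] = ref[2][3] = ref[3][2] = ref[3][3] = 255
cur[2][3] = cur[2][4] = cur[3][3] = cur[3][4] = 255
mv = block_match(ref, cur, 3, 2, 2, 2)      # -> (-1, 0): patch came from the left
```

    Restricting the search to a block neighbourhood, rather than the whole frame, is exactly what yields the reported reduction in motion estimation time.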

  9. Explaining the catch efficiency of different cod pots using underwater video to observe cod entry and exit behaviour

    DEFF Research Database (Denmark)

    Hedgärde, Maria; Berg, Casper Willestofte; Kindt-Larsen, Lotte

    2016-01-01

    to determine which of these factors most affected the pots’ catch per unit effort (CPUE). Two fishing trials were conducted off the coast of Bornholm, Denmark, using six pot types with different design features, equipped with underwater camera systems to record the behaviour of the cod in relation to the pots....... Four pot types were floating pots with one entrance and two were bottom standing with three entrances. Different pot types showed significantly different CPUEs and the pot type was an explanatory factor for entry and exit rates for both trials. In trial 1 artificial light was used for filming...

  10. Bathymetric and underwater video survey of Lower Granite Reservoir and vicinity, Washington and Idaho, 2009-10

    Science.gov (United States)

    Williams, Marshall L.; Fosness, Ryan L.; Weakland, Rhonda J.

    2012-01-01

    The U.S. Geological Survey conducted a bathymetric survey of the Lower Granite Reservoir, Washington, using a multibeam echosounder, and an underwater video mapping survey during autumn 2009 and winter 2010. The surveys were conducted as part of the U.S. Army Corps of Engineer's study on sediment deposition and control in the reservoir. The multibeam echosounder survey was performed in 1-mile increments between river mile (RM) 130 and 142 on the Snake River, and between RM 0 and 2 on the Clearwater River. The result of the survey is a digital elevation dataset in ASCII coordinate positioning data (easting, northing, and elevation) useful in rendering a 3×3-foot point grid showing bed elevation and reservoir geomorphology. The underwater video mapping survey was conducted from RM 107.73 to 141.78 on the Snake River and RM 0 to 1.66 on the Clearwater River, along 61 U.S. Army Corps of Engineers established cross sections, and dredge material deposit transects. More than 900 videos and 90 bank photographs were used to characterize the sediment facies and ground-truth the multibeam echosounder data. Combined, the surveys were used to create a surficial sediment facies map that displays type of substrate, level of embeddedness, and presence of silt.

  11. A novel method to reduce time investment when processing videos from camera trap studies.

    Science.gov (United States)

    Swinnen, Kristijn R R; Reijniers, Jonas; Breno, Matteo; Leirs, Herwig

    2014-01-01

    Camera traps have proven very useful in ecological, conservation and behavioral research. Camera traps non-invasively record the presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols did not evolve as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species, but instead were empty recordings or recordings of other species (together, non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch the recordings, in order to reduce workload. Discrimination between recordings of the target species and non-target recordings was based on detecting variation (changes in pixel values from frame to frame) in the recordings. Because of the size of the target species, we supposed that recordings with the target species contain on average much more movement than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values, and that environmental conditions and filter methods influence the amount of non-target recordings that can be identified and discarded. By allowing a loss of 5% to 20% of recordings containing the target species, in ideal circumstances, 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step to the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, who can use it in different contexts across the globe, on both videos and photographs.
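
    The frame-to-frame pixel-variation idea can be sketched as a simple activity score; the thresholds and frame sizes below are illustrative choices, not those of the study:

```python
def activity_score(frames, thresh=25):
    """Fraction of pixels whose value changes by more than `thresh`
    between consecutive frames, averaged over the clip.
    `frames`: list of 2D lists of grayscale values."""
    changed, total = 0, 0
    for prev, cur in zip(frames, frames[1:]):
        for rp, rc in zip(prev, cur):
            for p, c in zip(rp, rc):
                total += 1
                if abs(p - c) > thresh:
                    changed += 1
    return changed / total if total else 0.0

def keep_recording(frames, min_score=0.02):
    """Flag a clip for human review only when it shows enough movement."""
    return activity_score(frames) >= min_score

static = [[[10] * 8 for _ in range(8)]] * 3       # nothing moves
moving = [[[10] * 8 for _ in range(8)],
          [[10] * 4 + [200] * 4 for _ in range(8)],  # half the frame changes
          [[10] * 8 for _ in range(8)]]
```

    Clips scoring below the threshold are discarded unwatched, which is where the reported 53% to 76% workload reduction comes from.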

  12. A passive terahertz video camera based on lumped element kinetic inductance detectors

    Energy Technology Data Exchange (ETDEWEB)

    Rowe, Sam, E-mail: sam.rowe@astro.cf.ac.uk; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian [Astronomy Instrumentation Group, School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Wood, Ken [QMC Instruments Ltd., School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Grainger, William [Rutherford Appleton Laboratory, STFC, Swindon SN2 1SZ (United Kingdom); Mauskopf, Philip [Astronomy Instrumentation Group, School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); School of Earth Science and Space Exploration, Arizona State University, Tempe, Arizona 85281 (United States); Spencer, Locke [Department of Physics and Astronomy, University of Lethbridge, Lethbridge, Alberta T1K 3M4 (Canada)

    2016-03-15

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  13. NOAA Point Shapefile - Drop Camera transects, US Caribbean - Virgin Passage and St. John Shelf - Project NF-03-10-USVI-HAB - (2010), UTM 20N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. For more information about this...

  14. Underwater sympathetic detonation of pellet explosive

    Science.gov (United States)

    Kubota, Shiro; Saburi, Tei; Nagayama, Kunihito

    2017-06-01

    The underwater sympathetic detonation of pellet explosives was recorded by high-speed photography. The diameter and thickness of the pellets were 20 and 10 mm, respectively. The experimental system consisted of a precise electric detonator, two grams of composition C4 booster and three pellets, set in a water tank. A high-speed video camera (Shimadzu HPV-X) was used at 10 Mfps. The underwater explosions of the precise electric detonator, the C4 booster and a single pellet were also recorded by high-speed photography to estimate the propagation processes of the underwater shock waves. A numerical simulation of the underwater sympathetic detonation of the pellet explosives was also carried out and compared with the experiments.

  15. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    Energy Technology Data Exchange (ETDEWEB)

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20-inch port of the Multiport Flange riser, which is to be installed on riser 5B of tank 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  16. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
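
    Of the DSP stages listed, white balance is the easiest to illustrate; a minimal gray-world sketch (one common approach, not necessarily the algorithm used in the paper):

```python
def gray_world_balance(pixels):
    """Gray-world white balance: scale each channel so that all three
    channel means equal the overall mean. `pixels`: list of (r, g, b)."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    target = sum(means) / 3
    gains = [target / m if m else 1.0 for m in means]
    return [tuple(min(255, round(v * g)) for v, g in zip(p, gains))
            for p in pixels]

# A reddish cast: the red channel mean is twice the others
img = [(200, 100, 100), (200, 100, 100)]
balanced = gray_world_balance(img)
```

    The gray-world assumption (that the scene averages to neutral gray) makes the correction a pure per-channel gain, which is why it maps well onto low-complexity hardware like the Verilog core described.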

  17. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  18. Monitoring the body temperature of cows and calves using video recordings from an infrared thermography camera.

    Science.gov (United States)

    Hoffmann, Gundula; Schmidt, Mariana; Ammon, Christian; Rose-Meierhöfer, Sandra; Burfeind, Onno; Heuwieser, Wolfgang; Berg, Werner

    2013-06-01

    The aim of this study was to assess the variability of temperatures measured by a video-based infrared camera (IRC) in comparison to rectal and vaginal temperatures. The body surface temperatures of cows and calves were measured contactlessly at different body regions using videos from the IRC. Altogether, 22 cows and 9 calves were examined. The differences in the measured IRC temperatures among the body regions, i.e. eye (mean: 37.0 °C), back of the ear (35.6 °C), shoulder (34.9 °C) and vulva (37.2 °C), were significant. The use of infrared thermography videos has the advantage that more than one picture per animal can be analyzed in a short period of time, and the method shows potential as a monitoring system for body temperatures in cattle.

  19. Video production with a DSLR camera : A Guide to Video Production for Smaller Companies

    OpenAIRE

    Koistinen, Matias

    2014-01-01

    Videos and infographics have become a part of our everyday life in the form of YouTube and other social media. Because of their popularity, more and more companies use motion graphics as a marketing method. In advertising and web content, motion graphics is taking over from still images at a very fast pace, due to the bigger information capacity of motion graphics and the capability of using this richer content thanks to the fast internet connections of today. With social media, webpages, blogs and a...

  20. Utilization of a video camera in the study of the goshawk (Accipiter gentilis) diet

    Directory of Open Access Journals (Sweden)

    Martin Tomešek

    2011-01-01

    Full Text Available In 2009, research was carried out into the food spectrum of the goshawk (Accipiter gentilis) by means of automatic digital video cameras with a recording device in the area of the Chřiby Upland. The monitoring took place at two localities in the vicinity of the village of Buchlovice at the southeastern edge of the Chřiby Upland, in the period from the hatching of the chicks to their fledging from the nest. The unambiguous advantage of using camera systems in the study of the food spectrum is the possibility, in the majority of cases, of exactly determining the prey brought to the nest. The most economical and effective technology available, adapted to the given conditions, was used. The use of automatic digital video cameras with a recording device yielded a number of valuable data that clarify the food spectrum of the species. The main output of the whole project is the determination of the food spectrum of the goshawk (Accipiter gentilis) at the two localities, which showed the following composition: 89 % birds, 9.5 % mammals and 1.5 % other animals or unidentifiable components of food. Birds of the genus Turdus were the most frequent prey in both cases of monitoring. As for mammals, Sciurus vulgaris was the most frequent.

  1. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    Science.gov (United States)

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.

  2. NOAA Point Shapefile - Drop Camera transects, US Caribbean – Virgin Passage and St. John Shelf - Project NF-03-10-USVI-HAB - (2010), UTM 20N NAD83 (NCEI Accession 0131854)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. For more information about this...

  3. VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication

    Science.gov (United States)

    Denina, Giovanni; Bhanu, Bir; Nguyen, Hoang Thanh; Ding, Chong; Kamal, Ahmed; Ravishankar, Chinya; Roy-Chowdhury, Amit; Ivers, Allen; Varda, Brenda

    Human-activity recognition is one of the most challenging problems in computer vision. Researchers from around the world have tried to solve this problem and have come a long way in recognizing simple motions and atomic activities. As the computer vision community heads toward fully recognizing human activities, a challenging and labeled dataset is needed. To respond to that need, we collected a dataset of realistic scenarios in a multi-camera network environment (VideoWeb) involving multiple persons performing dozens of different repetitive and non-repetitive activities. This chapter describes the details of the dataset. We believe that this VideoWeb Activities dataset is unique and it is one of the most challenging datasets available today. The dataset is publicly available online at http://vwdata.ee.ucr.edu/ along with the data annotation.

  4. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    Science.gov (United States)

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  5. A Semantic Autonomous Video Surveillance System for Dense Camera Networks in Smart Cities

    Directory of Open Access Journals (Sweden)

    Antonio Sánchez-Esguevillas

    2012-08-01

    Full Text Available This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  6. System design description for the LDUA high resolution stereoscopic video camera system (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, was designed to be used as an end effector on the LDUA to perform surveillance and inspection activities within a waste tank. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feedthrough for all electrical and pneumatic utilities needed by the end effector to operate. Designed to perform up-close weld and corrosion inspection roles in UST operations, the HRSVS will support and supplement the Light Duty Utility Arm (LDUA) and provide the crucial inspection tasks needed to ascertain waste tank condition.

  7. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    Energy Technology Data Exchange (ETDEWEB)

    Pardini, A.F.

    1998-01-27

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, the control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  8. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, the control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  9. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort in stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on the specific objects when the cameras and background are static. Relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue predicting model is presented. The visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be indicated according to the proposed algorithm. Compared with conventional algorithms, which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
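
    Predicting a fatigue score by multiple linear regression amounts to a least-squares fit of subjective ratings against per-shot factor scores; a sketch with NumPy on invented data (the factor names and values are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical per-shot factor scores: spatial structure, motion scale,
# comfort-zone violation (columns), with subjective fatigue ratings y.
X = np.array([[0.2, 0.1, 0.0],
              [0.5, 0.4, 0.2],
              [0.8, 0.9, 0.7],
              [0.3, 0.2, 0.1],
              [0.9, 0.8, 0.9]])
y = np.array([1.0, 2.1, 4.2, 1.4, 4.8])

# Fit fatigue = w0 + w1*x1 + w2*x2 + w3*x3 by least squares
A = np.column_stack([np.ones(len(X)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_fatigue(shot):
    """Predicted fatigue for a new shot's factor scores."""
    return float(w[0] + w[1:] @ np.asarray(shot))

score = predict_fatigue([0.6, 0.5, 0.4])
```

    The fitted weights play the role of the factor coefficients the abstract mentions: each weight expresses how strongly one scene characteristic drives the overall fatigue score.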

  10. Complex effusive events at Kilauea as documented by the GOES satellite and remote video cameras

    Science.gov (United States)

    Harris, A.J.L.; Thornber, C.R.

    1999-01-01

    GOES provides thermal data for all of the Hawaiian volcanoes once every 15 min. We show how volcanic radiance time series produced from this data stream can be used as a simple measure of effusive activity. Two types of radiance trends in these time series can be used to monitor effusive activity: (a) Gradual variations in radiance reveal steady flow-field extension and tube development. (b) Discrete spikes correlate with short bursts of activity, such as lava fountaining or lava-lake overflows. We are confident that any effusive event covering more than 10,000 m2 of ground in less than 60 min will be unambiguously detectable using this approach. We demonstrate this capability using GOES, video camera and ground-based observational data for the current eruption of Kilauea volcano (Hawai'i). A GOES radiance time series was constructed from 3987 images between 19 June and 12 August 1997. This time series displayed 24 radiance spikes elevated more than two standard deviations above the mean; 19 of these are correlated with video-recorded short-burst effusive events. Less ambiguous events are interpreted, assessed and related to specific volcanic events by simultaneous use of permanently recording video camera data and ground-observer reports. The GOES radiance time series are automatically processed on data reception and made available in near-real-time, so such time series can contribute to three main monitoring functions: (a) automatically alerting major effusive events; (b) event confirmation and assessment; and (c) establishing effusive event chronology.
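
    The two-standard-deviation spike criterion applied to the radiance time series can be sketched directly; the radiance values below are synthetic:

```python
def radiance_spikes(series, k=2.0):
    """Indices where a radiance time series exceeds its mean by more
    than k standard deviations (the 2-sigma rule described above)."""
    n = len(series)
    mean = sum(series) / n
    std = (sum((v - mean) ** 2 for v in series) / n) ** 0.5
    return [i for i, v in enumerate(series) if v > mean + k * std]

# Flat background radiance with two short effusive bursts
series = [10.0] * 50 + [60.0] + [10.0] * 30 + [55.0] + [10.0] * 18
spikes = radiance_spikes(series)   # -> [50, 81]
```

    In an operational setting the statistics would be computed over a sliding window so that slow flow-field trends do not inflate the baseline, but the thresholding step is the same.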

  11. Autonomous Underwater Vehicle untuk Survei dan Pemantauan Laut

    Directory of Open Access Journals (Sweden)

    Henry M. Manik

    2017-04-01

    An AUV is an unmanned submersible platform for accomplishing a mission. Side-scan sonar, a Conductivity Temperature Depth (CTD) probe, and an underwater video camera are usually attached to an AUV. These sensors were used for identifying seawater and seabed conditions. Data acquired from an AUV survey in Kepulauan Riau were processed with Neptus software. Side-scan sonar (SSS) visualization was compared to the video imagery. The SSS signal visualization has a unique pattern that can be identified within the video image; different substrate structures cause different signal visualizations. The relation between the video imagery and SSS visualization can be used for identifying benthic habitat profiles.

  12. vid116_0501n -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  13. vid116_0501s -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  14. vid116_0501d -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  15. vid116_0501c -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  16. NOAA Line Shapefile- Locations of Phantom S2 ROV Underwater Video Transects, US Virgin Islands, Project NF-05-05, 2005, UTM 20N WGS84 (NCEI Accession 0131860)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a line shapefile showing the trackline of various Remotely Operated Vehicle (ROV) underwater video transects in the US Virgin Islands. NOAA's...

  17. NOAA Point Shapefile - ROV transects - Locations of underwater photos and/or video collected in the US Caribbean - south of Vieques and in and around the Grand Reserve northeast of Puerto Rico (2013)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This point shapefile denotes the location of underwater photos and/or video that were collected by NOAA scientists using a Spectrum Phantom S2 ROV (remotely operated...

  18. Embedded FIR filter design for real-time refocusing using a standard plenoptic video camera

    Science.gov (United States)

    Hahne, Christopher; Aggoun, Amar

    2014-03-01

    A novel and low-cost embedded hardware architecture for real-time refocusing based on a standard plenoptic camera is presented in this study. The proposed layout synthesizes refocusing slices directly from micro images, omitting the commonly used sub-aperture extraction step. To this end, intellectual property cores containing switch-controlled Finite Impulse Response (FIR) filters were developed and implemented on the Xilinx XC6SLX45 Field Programmable Gate Array (FPGA). To make the hardware design economical, the FIR filters combine stored products with upsampling and interpolation techniques, achieving a good balance between image resolution, delay time, power consumption and logic-gate usage. The video output is transmitted via High-Definition Multimedia Interface (HDMI) at a resolution of 720p and a frame rate of 60 fps, conforming to the HD ready standard. Examples of the synthesized refocusing slices are presented.
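As a 1-D illustration of the upsampling-and-interpolation idea (not the actual FPGA design), the sketch below zero-stuffs a scan line and applies a linear-phase FIR kernel; `fir_upsample` and all values are invented.

```python
import numpy as np

def fir_upsample(signal, factor):
    """Upsample by zero-stuffing, then interpolate with a linear-phase
    FIR (triangular) kernel; illustrative of the upsampling/interpolation
    scheme mentioned in the abstract, not the actual hardware filters."""
    stuffed = np.zeros(len(signal) * factor)
    stuffed[::factor] = signal
    # A triangular kernel performs linear interpolation between samples.
    kernel = np.concatenate([np.arange(1, factor + 1),
                             np.arange(factor - 1, 0, -1)]) / factor
    return np.convolve(stuffed, kernel, mode="same")

line = np.array([0.0, 2.0, 4.0, 6.0])   # one scan line of pixel values
upsampled = fir_upsample(line, 2)
```

In hardware, the kernel taps would be the stored products and the convolution a shift-register pipeline, which is what keeps the gate count and delay low.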

  19. Plant iodine-131 uptake in relation to root concentration as measured in minirhizotron by video camera:

    International Nuclear Information System (INIS)

    Moss, K.J.

    1990-09-01

    Glass viewing tubes (minirhizotrons) were placed in the soil beneath native perennial bunchgrass (Agropyron spicatum). The tubes provided access for observing and quantifying plant roots with a miniature video camera and for estimating soil moisture with a neutron hydroprobe. The radiotracer I-131 was delivered to the root zone at three depths with differing root concentrations. The plant was subsequently sampled and analyzed for I-131. Plant uptake was greater when I-131 was applied at soil depths with higher root concentrations and less when it was applied at depths with lower root concentrations. However, the relationship between root concentration and plant uptake was not a direct one: when I-131 was delivered to deeper soil depths with low root concentrations, the roots there appeared to be less effective in uptake than the same quantity of roots at shallow soil depths with high root concentration. 29 refs., 6 figs., 11 tabs

  20. Studying medical communication with video vignettes: a randomized study on how variations in video-vignette introduction format and camera focus influence analogue patients’ engagement

    Directory of Open Access Journals (Sweden)

    Leonie N. C. Visser

    2018-01-01

    Background: Video vignettes are used to test the effects of physicians' communication on patient outcomes. Methodological choices in video-vignette development may have far-reaching consequences for participants' engagement with the video, and thus for the ecological validity of this design. To supplement the scant evidence in this field, this study tested how variations in video-vignette introduction format and camera focus influence participants' engagement with a video vignette showing a bad-news consultation. Methods: Introduction format (A = audiovisual vs. B = written) and camera focus (1 = the physician only, 2 = the physician and the patient at neutral moments alternately, 3 = the physician and the patient at emotional moments alternately) were varied in a randomized 2 × 3 between-subjects design. One hundred eighty-one students were randomly assigned to watch one of the six resulting video-vignette conditions as so-called analogue patients, i.e., they were instructed to imagine themselves in the video patient's situation. Four dimensions of self-reported engagement were assessed retrospectively. Emotional engagement was additionally measured by recording participants' electrodermal and cardiovascular activity continuously while watching. Analyses of variance were used to test the effects of introduction format, camera focus and their interaction. Results: The audiovisual introduction induced a stronger blood pressure response while watching the introduction (p = 0.048, η²_partial = 0.05) and the consultation part of the vignette (p = 0.051, η²_partial = 0.05), compared to the written introduction. With respect to camera focus, results revealed that the variant focusing on the patient at emotional moments evoked a higher level of electrodermal activity (p = 0.003, η²_partial = 0.06), when compared

  1. Autonomous video camera system for monitoring impacts to benthic habitats from demersal fishing gear, including longlines

    Science.gov (United States)

    Kilpatrick, Robert; Ewing, Graeme; Lamb, Tim; Welsford, Dirk; Constable, Andrew

    2011-04-01

    Studies of the interactions of demersal fishing gear with the benthic environment are needed in order to manage conservation of benthic habitats. There has been limited direct assessment of these interactions through deployment of cameras on commercial fishing gear, especially on demersal longlines. A compact, autonomous deep-sea video system was designed and constructed by the Australian Antarctic Division (AAD) for deployment on commercial fishing gear to observe interactions with benthos in the Southern Ocean finfish fisheries (targeting toothfish, Dissostichus spp.). The Benthic Impacts Camera System (BICS) is capable of withstanding depths to 2500 m, has been successfully fitted to both longline and demersal trawl fishing gear, and is suitable for routine deployment by non-experts such as fisheries observers or crew. The system is entirely autonomous, robust, compact, easy to operate, and has minimal effect on the performance of the fishing gear it is attached to. To date, the system has successfully captured footage that demonstrates the interactions between demersal fishing gear and the benthos during routine commercial operations. It provides the first footage demonstrating the nature of the interaction between demersal longlines and benthic habitats in the Southern Ocean, as well as showing potential as a tool for rapidly assessing habitat types and the presence of mobile biota such as krill (Euphausia superba).

  2. Three-camera setup to record simultaneously standardized high-definition video for smile analysis.

    Science.gov (United States)

    Husain, Akhter; Makhija, Parmanand G; Ummer, Aseena Alungal; Kuijpers-Jagtman, Anne Marie; Kuijpers, Mette A R

    2017-11-01

    Our objective was to develop a photographic setup that would simultaneously capture subjects' smiles from 3 views, both statically and dynamically, and to develop software to crop the produced video clip and slice the frames to study the smile at different stages. Facial images were made of 96 subjects, aged 18 to 28 years, in natural head position using a standardized setup of 3 digital single-lens reflex cameras, with a reference sticker (10 × 10 mm) on the forehead of each subject. To test the reproducibility of the setup, 1 operator took 3 images of all subjects on the same day and on 3 different days in a subset of 26 subjects. For the same-day observations, correlation coefficients varied between 0.87 and 0.93. For the observations on 3 different days, correlation coefficients were also high. The duplicate measurement error and the mean difference between measurements were small and not significant, pointing to good reliability. This new technique for capturing standardized high-definition video and still images simultaneously from 3 positions is a reliable and practical tool. The technique is easy to learn and implement in the orthodontic office. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  3. Two-Stage Classification Approach for Human Detection in Camera Video in Bulk Ports

    Directory of Open Access Journals (Sweden)

    Mi Chao

    2015-09-01

    With the development of automation in ports, video surveillance systems with automated human detection have begun to be applied in open-air handling operation areas for safety and security. The accuracy of traditional camera-based human detection is not high enough to meet the requirements of operation surveillance. One key reason is that Histograms of Oriented Gradients (HOG) features differ greatly between front-and-back standing (F&B) and side standing (Side) human bodies. Consequently, when HOG features are extracted directly from samples of different postures, training yields only a few useful posture-specific features that contribute to classification, which is insufficient to support effective classification. This paper proposes a two-stage classification method to improve the accuracy of human detection. In the first, preprocessing stage, images are divided into possible F&B human bodies and others; the latter are then passed to the second-stage classifier, which distinguishes Side humans from non-humans. Experimental results in Tianjin port show that the two-stage classifier clearly improves the classification accuracy of human detection.
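A minimal sketch of the two-stage idea, assuming nearest-centroid classifiers in place of the trained detectors and 2-D synthetic features standing in for HOG descriptors; all names and values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_features(center, n=40):
    """Synthetic 2-D stand-ins for HOG descriptors of one posture class."""
    return rng.normal(center, 0.3, size=(n, 2))

fb_human   = make_features([2.0, 2.0])   # front & back standing (F&B)
side_human = make_features([0.0, 2.0])   # side standing
non_human  = make_features([0.0, 0.0])

def nearest_centroid(x, centroids):
    d = [np.linalg.norm(x - c) for c in centroids.values()]
    return list(centroids)[int(np.argmin(d))]

# Stage 1: possible F&B human vs. everything else.
stage1 = {"fb": fb_human.mean(axis=0),
          "other": np.vstack([side_human, non_human]).mean(axis=0)}
# Stage 2: side human vs. non-human, applied only to "other" samples.
stage2 = {"side": side_human.mean(axis=0),
          "nonhuman": non_human.mean(axis=0)}

def classify(x):
    if nearest_centroid(x, stage1) == "fb":
        return "fb"
    return nearest_centroid(x, stage2)
```

The point of the cascade is that each stage sees a more homogeneous feature distribution than a single classifier trained on all postures at once.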

  4. Light-reflection random-target method for measurement of the modulation transfer function of a digital video-camera

    Czech Academy of Sciences Publication Activity Database

    Pospíšil, Jaroslav; Jakubík, P.; Machala, L.

    2005-01-01

    Roč. 116, - (2005), s. 573-585 ISSN 0030-4026 Institutional research plan: CEZ:AV0Z10100522 Keywords: random-target measuring method * light-reflection white-noise target * digital video camera * modulation transfer function * power spectral density Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.395, year: 2005

  5. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.

    2017-10-01

    Video analytics is essential for managing the large quantities of raw data produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and to changes in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene, such as lighting conditions or measures of scene complexity (e.g. number of people). The second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. The third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. To support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.

  6. Diversity and composition of demersal fishes along a depth gradient assessed by baited remote underwater stereo-video.

    Directory of Open Access Journals (Sweden)

    Vincent Zintzen

    BACKGROUND: Continental slopes are among the steepest environmental gradients on earth. However, their faunal diversity patterns still lack finer quantification and characterisation for many parts of the world. METHODOLOGY/PRINCIPAL FINDINGS: Changes in fish community structure and diversity along a depth gradient from 50 to 1200 m were studied from replicated baited remote underwater stereo-video deployments within each of seven depth zones at three locations in north-eastern New Zealand. Strong but gradual turnover in the identities of species and community structure was observed with increasing depth. Species richness peaked at shallow depths, followed by a decrease beyond 100 m to a stable average value from 700 to 1200 m. Evenness increased to 700 m depth, followed by a decrease to 1200 m. The average taxonomic distinctness (Δ+) response was unimodal with a peak at 300 m. The variation in taxonomic distinctness (Λ+) first decreased sharply from 50 to 300 m, then increased beyond 500 m depth, indicating that species from deep samples belonged to more distant taxonomic groups than those from shallow samples. Fishes with northern distributions progressively decreased in their proportional representation with depth, whereas those with widespread distributions increased. CONCLUSIONS/SIGNIFICANCE: This study provides the first characterization of diversity patterns for bait-attracted fish species on continental slopes in New Zealand and is an essential first step towards the development of explanatory and predictive ecological models, as well as being fundamental to the implementation of efficient management and conservation strategies for fishery resources.

  7. An Alignment Method for the Integration of Underwater 3D Data Captured by a Stereovision System and an Acoustic Camera

    OpenAIRE

    Lagudi, Antonio; Bianco, Gianfranco; Muzzupappa, Maurizio; Bruno, Fabio

    2016-01-01

    The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration for the automatic integration...

  8. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    Science.gov (United States)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type in order to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between object-level and scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes that are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players), which can track up to four players simultaneously. The complete system contributes to summarization through various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it on a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and event detection about 90%.

  9. USING A DIGITAL VIDEO CAMERA AS THE SMART SENSOR OF THE SYSTEM FOR AUTOMATIC PROCESS CONTROL OF GRANULAR FODDER MOLDING

    Directory of Open Access Journals (Sweden)

    M. M. Blagoveshchenskaya

    2014-01-01

    The most important operation in granular mixed fodder production is the molding process, during which the properties of the granular fodder are defined; these determine the production process and final product quality. The article analyzes the possibility of using a digital video camera as an intelligent sensor in a production control system. A parametric model of the process of molding bundles from granular fodder mass is presented, and the dynamic characteristics of the molding process were determined. A mathematical model of the motion of a bundle of granular fodder mass after the matrix holes was developed, and a mathematical model of the automatic control system (ACS) using a reference (etalon) video frame as the set point was built in the MATLAB software environment. As the controlled parameter of the bundle-molding process, the authors propose the specific area obtained by mathematical treatment of the video frame. Algorithms were developed to determine changes in the structural and mechanical properties of the feed mass from video frame images. Digital video of various operating modes of the molding machine was recorded, and after mathematical processing of the video the transfer functions relating changes of the specific area to the adjustable parameters were determined. Structural and functional diagrams of the control system for the fodder bundle molding process using digital video cameras were built and analyzed. Based on the solution of the equations of fluid dynamics, a mathematical model of bundle motion after leaving the matrix hole was obtained; in addition to viscosity, the creep property characteristic of the feed mass was considered. The mathematical model of the ACS for the bundle molding process, which allows investigation of the transient processes occurring in a control system that uses a digital video camera as the smart sensor, was developed in Simulink.
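One plausible reading of the "specific area" control parameter is the fraction of frame pixels occupied by the fodder bundle after thresholding. The sketch below assumes that reading; the threshold value and the helper `specific_area` are invented, as the abstract does not give the exact definition.

```python
import numpy as np

def specific_area(frame, threshold=128):
    """Fraction of frame pixels brighter than the threshold, taken here
    as a proxy for the bundle's 'specific area' (an assumed definition)."""
    return float((np.asarray(frame) > threshold).mean())

# Synthetic video frame: a bright 40 x 40 'bundle' on a dark background.
frame = np.zeros((100, 100))
frame[20:60, 30:70] = 255.0
```

A controller comparing this value against the one computed from the reference (etalon) frame would then adjust the molding machine's operating parameters.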

  10. Real-Time Video Transmission Over Different Underwater Wireless Optical Channels Using a Directly Modulated 520 nm Laser Diode

    KAUST Repository

    Al-Halafi, Abdullah

    2017-09-13

    We experimentally demonstrate high-quality real-time video streaming over an underwater wireless optical communication (UWOC) link of up to 5 m using phase-shift keying (PSK) and quadrature amplitude modulation (QAM) schemes. The communication system uses software-defined platforms connected to a commercial TO-9 packaged pigtailed 520 nm directly modulated laser diode (LD) with 1.2 GHz bandwidth as the optical transmitter and an avalanche photodiode (APD) module as the receiver. To simulate various underwater channels, we perform laboratory experiments on clear, coastal, harbor I, and harbor II ocean water types. The measured bit error rates of the received video streams are 1.0 × 10⁻⁹ for QPSK, 4-QAM, and 8-QAM, and 9.9 × 10⁻⁹ for 8-PSK. We further evaluate the quality of the received live video images using structural similarity, achieving values of about 0.9 for the first three water types and about 0.7 for harbor II. To the best of our knowledge, these results represent the highest-quality video streaming yet achieved in UWOC systems that resemble communication channels in real ocean water environments.
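The structural similarity scores reported above can be reproduced in spirit with a single-window SSIM, a simplification of the usual windowed metric (which averages this quantity over local windows); `global_ssim` and the synthetic frames are illustrative only.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM combining luminance, contrast and structure
    terms; a full implementation averages this over sliding windows."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Synthetic transmitted frame and a channel-degraded received frame.
rng = np.random.default_rng(2)
frame = rng.integers(0, 256, (64, 64)).astype(float)
noisy = frame + rng.normal(0.0, 20.0, frame.shape)
```

Identical frames score 1.0, and channel noise pushes the score down, which is why the murkier harbor II channel yields the lowest value (about 0.7) in the paper.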

  11. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    An automatic recognition framework for human facial expressions in monocular video from an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template similar to a facial muscle distribution. After associated regularization, the time sequences of the trait changes in space-time over a complete expression production are arranged line by line in a matrix. Next, the matrix dimensionality is reduced by neighborhood-preserving embedding, a manifold learning method. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates a hidden conditional random field (HCRF) and a support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former captures more structural characteristics of the data to be classified in space-time.

  12. A lateral chromatic aberration correction system for ultrahigh-definition color video camera

    Science.gov (United States)

    Yamashita, Takayuki; Shimamoto, Hiroshi; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed a color camera for an 8k x 4k-pixel ultrahigh-definition video system, called Super Hi-Vision, with a 5x zoom lens and a signal-processing system incorporating real-time lateral chromatic aberration correction. The chromatic aberration of the lens degrades color image resolution, so in order to develop a compact zoom lens consistent with ultrahigh-resolution characteristics, we incorporated a real-time correction function in the signal-processing system. The signal-processing system has eight memory tables that store the correction data at eight focal-length points on the blue and red channels. When the focal-length data is input from the lens control unit, the relevant correction data are interpolated from two of the eight correction data tables. The system then performs geometrical conversion on both channels using this correction data. This paper shows that the correction function reduces the lateral chromatic aberration in real time, to an amount small enough to ensure the desired image resolution over the entire range of the lens.
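The table-lookup-and-interpolation step can be sketched as follows, under stated assumptions: the eight focal-length points and red-channel shift values are invented, and linear interpolation stands in for whatever interpolation the signal-processing system actually uses.

```python
import numpy as np

# Hypothetical correction tables: lateral shift (in pixels) of the red
# channel, stored at eight focal-length points across the 5x zoom range.
focal_points = np.array([9.0, 12.0, 16.0, 21.0, 27.0, 34.0, 42.0, 45.0])
red_shift    = np.array([1.8, 1.4, 1.0, 0.6, 0.3, 0.1, -0.2, -0.4])

def correction_at(focal_length):
    """Interpolate between the two bracketing correction tables, as the
    signal-processing system does when the lens reports a focal length."""
    return float(np.interp(focal_length, focal_points, red_shift))
```

The interpolated shift then drives the geometrical conversion of the red (and, with its own table, blue) channel for each frame.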

  13. New system for linear accelerator radiosurgery with a gantry-mounted video camera

    International Nuclear Information System (INIS)

    Kunieda, Etsuo; Kitamura, Masayuki; Kawaguchi, Osamu; Ohira, Takayuki; Ogawa, Kouichi; Ando, Yutaka; Nakamura, Kayoko; Kubo, Atsushi

    1998-01-01

    Purpose: We developed a positioning method that does not depend on the positioning mechanism originally annexed to the linac and investigated the positioning errors of the system. Methods and Materials: A small video camera was placed at a location optically identical to the linac x-ray source. A target pointer comprising a convex lens and bull's eye was attached to the arc of the Leksell stereotactic system so that the lens would form a virtual image of the bull's eye (virtual target) at the position of the center of the arc. The linac gantry and target pointer were placed at the side and top to adjust the arc center to the isocenter by referring the virtual target. Coincidence of the target and the isocenter could be confirmed in any combination of the couch and gantry rotation. In order to evaluate the accuracy of the positioning, a tungsten ball was attached to the stereotactic frame as a simulated target, which was repeatedly localized and repositioned to estimate the magnitude of the error. The center of the circular field defined by the collimator was marked on the film. Results: The differences between the marked centers of the circular field and the centers of the shadow of the simulated target were less than 0.3 mm

  14. Enumeration of Salmonids in the Okanogan Basin Using Underwater Video, Performance Period: October 2005 (Project Inception) - 31 December 2006.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Peter N.; Rayton, Michael D.; Nass, Bryan L.; Arterburn, John E.

    2007-06-01

    The Confederated Tribes of the Colville Reservation (Colville Tribes) identified the need for collecting baseline census data on the timing and abundance of adult salmonids in the Okanogan River Basin in order to determine basin and tributary-specific spawner distributions, evaluate the status and trends of natural salmonid production in the basin, document local fish populations, and augment existing fishery data. This report documents the design, installation, operation and evaluation of mainstem and tributary video systems in the Okanogan River Basin. The species-specific data collected by these fish enumeration systems are presented along with an evaluation of the operation of a facility that provides a count of fish using an automated method. Information collected by the Colville Tribes Fish & Wildlife Department, specifically the Okanogan Basin Monitoring and Evaluation Program (OBMEP), is intended to provide a relative abundance indicator for anadromous fish runs migrating past Zosel Dam and is not intended as an absolute census count. Okanogan Basin Monitoring and Evaluation Program collected fish passage data between October 2005 and December 2006. Video counting stations were deployed and data were collected at two locations in the basin: on the mainstem Okanogan River at Zosel Dam near Oroville, Washington, and on Bonaparte Creek, a tributary to the Okanogan River, in the town of Tonasket, Washington. Counts at Zosel Dam between 10 October 2005 and 28 February 2006 are considered partial, pilot year data as they were obtained from the operation of a single video array on the west bank fishway, and covered only a portion of the steelhead migration. A complete description of the apparatus and methodology can be found in 'Fish Enumeration Using Underwater Video Imagery - Operational Protocol' (Nass 2007). At Zosel Dam, totals of 57 and 481 adult Chinook salmon were observed with the video monitoring system in 2005 and 2006, respectively. Run

  15. A multiframe soft x-ray camera with fast video capture for the LSX field reversed configuration (FRC) experiment

    International Nuclear Information System (INIS)

    Crawford, E.A.

    1992-01-01

    Soft x-ray pinhole imaging has proven to be an exceptionally useful diagnostic for qualitative observation of impurity radiation from field reversed configuration plasmas. We used a four-frame device, similar in design to those discussed in an earlier paper [E. A. Crawford, D. P. Taggart, and A. D. Bailey III, Rev. Sci. Instrum. 61, 2795 (1990)], as a routine diagnostic during the last six months of the Large s Experiment (LSX) program. Our camera improves on earlier implementations in several significant aspects. It was designed and used from the onset of the LSX experiments with a video frame capture system, so that an instant visual record of the shot was available to the machine operator while also facilitating quantitative interpretation of the intensity information recorded in the images. The camera was installed in the end region of the LSX, on axis, approximately 5.5 m from the plasma midplane. Experience with bolometers on LSX showed serious problems with "particle dumps" at this axial location at various times during the plasma discharge. Therefore, the initial implementation of the camera included an effective magnetic sweeper assembly. Overall performance of the camera, video capture system, and sweeper is discussed.

  16. Video content analysis on body-worn cameras for retrospective investigation

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Haar, F.B. ter; Eendebak, P.T.; Hollander, R.J.M. den; Burghouts, G.J.; Wijn, R.; Broek, S.P. van den; Rest, J.H.C. van

    2015-01-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications

  17. HDR 192Ir source speed measurements using a high speed video camera

    International Nuclear Information System (INIS)

    Fonseca, Gabriel P.; Viana, Rodrigo S. S.; Yoriyaz, Hélio; Podesta, Mark; Rubo, Rodrigo A.; Sales, Camila P. de; Reniers, Brigitte; Verhaegen, Frank

    2015-01-01

    Purpose: The dose delivered with an HDR 192Ir afterloader can be separated into a dwell component and a transit component resulting from the source movement. The transit component depends directly on the source speed profile, and the goal of this study is to measure accurate source speed profiles. Methods: A high speed video camera was used to record the movement of a 192Ir source (Nucletron, an Elekta company, Stockholm, Sweden) for interdwell distances of 0.25–5 cm with dwell times of 0.1, 1, and 2 s. Transit dose distributions were calculated using a Monte Carlo code simulating the source movement. Results: The source stops at each dwell position, oscillating around the desired position for a duration of up to (0.026 ± 0.005) s. The source speed profile shows variations between 0 and 81 cm/s, with an average speed of ∼33 cm/s for most of the interdwell distances. The source stops for up to (0.005 ± 0.001) s at nonprogrammed positions between two programmed dwell positions. The dwell time correction applied by the manufacturer compensates for the transit dose between the dwell positions, leading to a maximum overdose of 41 mGy for the considered cases, assuming an air-kerma strength of 48 000 U. The transit dose component is not uniformly distributed, leading to over- and underdoses, which are within 1.4% for commonly prescribed doses (3–10 Gy). Conclusions: The source maintains its speed even for the short interdwell distances. Dose variations due to the transit dose component are much lower than prescribed brachytherapy treatment doses, although the transit dose component should be evaluated individually for clinical cases.
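The timing arithmetic behind the transit component can be sketched directly from the reported figures: transit time between two dwell positions is distance over speed. The helper names and the constant-dose-rate assumption below are illustrative only; they are not the paper's Monte Carlo model.

```python
def transit_time(distance_cm: float, avg_speed_cm_s: float = 33.0) -> float:
    """Time the source spends travelling between two dwell positions,
    using the ~33 cm/s average speed reported in the abstract."""
    return distance_cm / avg_speed_cm_s

def transit_dose(dose_rate_mgy_s: float, distance_cm: float,
                 avg_speed_cm_s: float = 33.0) -> float:
    """First-order transit dose at a point: local dose rate times traversal
    time (assumes a constant dose rate, which real profiles are not)."""
    return dose_rate_mgy_s * transit_time(distance_cm, avg_speed_cm_s)
```

At the longest 5 cm interdwell distance this gives roughly 0.15 s of travel, which is why the transit component matters relatively more at short dwell times such as 0.1 s.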

  18. Robotic versus human camera holding in video-assisted thoracic sympathectomy: a single blind randomized trial of efficacy and safety.

    Science.gov (United States)

    Martins Rua, Joaquim Fernando; Jatene, Fabio Biscegli; de Campos, José Ribas Milanez; Monteiro, Rosangela; Tedde, Miguel Lia; Samano, Marcos Naoyuki; Bernardo, Wanderley M; Das-Neves-Pereira, João Carlos

    2009-02-01

    Our objective was to compare surgical safety and efficacy between robotic and human camera control in video-assisted thoracic sympathectomy. A randomized controlled trial was performed. The surgical operation was VATS sympathectomy for hyperhidrosis. The trial compared a voice-controlled robot holding the endoscopic camera (robotic group, Ro) with a human camera assistant (human group, Hu). Each group included 19 patients. Sympathectomy was achieved by electrodessication of the third ganglion. Operations were filmed and the images stored. Two observers quantified the number of involuntary and inappropriate movements and how many times the camera was cleaned. Safety criteria were surgical accidents, pain, and aesthetic results; efficacy criteria were surgical and camera-use duration, anhydrosis, length of hospitalization, compensatory hyperhidrosis, and patient satisfaction. There was no difference between groups regarding surgical accidents, number of involuntary movements, pain, aesthetic results, general satisfaction, number of lens cleanings, anhydrosis, length of hospitalization, and compensatory hyperhidrosis. The number of contacts of the laparoscopic lens with mediastinal structures was lower in the Ro group. The robotic arm in VATS sympathectomy for hyperhidrosis is as safe as, but less efficient than, a human camera-holding assistant.

  19. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    International Nuclear Information System (INIS)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-01-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination
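The triangulation step mentioned in the abstract can be illustrated with standard linear (DLT) two-view triangulation. This is the generic textbook construction, not the authors' implementation; the projection matrices are assumed to be known from the tracked camera path.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices for the two views.
    x1, x2 : (u, v) image coordinates of the same point in each view.
    Returns the 3D point in inhomogeneous coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A, i.e. the last right-singular vector.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

Given the frame-to-frame rotations and translations recovered from epipolar geometry, each tracked surface point can be triangulated this way to rebuild the surrounding structure.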

  20. A small-scale comparison of Iceland scallop size distributions obtained from a camera based autonomous underwater vehicle and dredge survey.

    Directory of Open Access Journals (Sweden)

    Warsha Singh

    Full Text Available An approach is developed to estimate the size of Iceland scallop shells from AUV photos. A small-scale camera-based AUV survey of Iceland scallops was conducted at a defined site off West Iceland. Prior to height estimation of the identified shells, the distortions introduced by the vehicle orientation and the camera lens were corrected. The average AUV pitch and roll were 1.3 and 2.3 deg, respectively, which resulted in <2% error in ground distance, rendering these effects negligible. A quadratic polynomial model was identified for lens distortion correction. This model successfully predicted a theoretical grid from a frame photographed underwater, representing the inherent lens distortion. The predicted shell heights were scaled for the distance from the bottom at which the photos were taken. This approach was validated by height estimation of scallops of known sizes. An underestimation of approximately 0.5 cm was seen, which could be attributed to pixel error, where each pixel represented 0.24 x 0.27 cm. After correcting for this difference, the estimated heights ranged from 3.8-9.3 cm. A comparison with the height distribution from a small-scale dredge survey carried out in the vicinity showed non-overlapping peaks in size distribution, with scallops of a broader size range visible in the AUV survey. Further investigations are necessary to evaluate any underlying bias and to validate how representative these surveys are of the true population. The low-resolution images made identification of smaller scallops difficult. Overall, the observation of very few small scallops in both surveys could be attributed to low recruitment levels in recent years due to the known scallop parasite outbreak in the region.
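The scaling described above reduces to multiplying a pixel extent by the per-pixel ground distance and adding back the ~0.5 cm underestimation observed against scallops of known size. The helper below is a hypothetical sketch using the reported 0.27 cm vertical pixel footprint:

```python
def shell_height_cm(height_px: float, cm_per_px: float = 0.27,
                    bias_cm: float = 0.5) -> float:
    """Pixel extent -> shell height, with the validated bias correction."""
    return height_px * cm_per_px + bias_cm
```

A 20-pixel shell would come out at about 5.9 cm, inside the 3.8-9.3 cm range reported.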

  1. A small-scale comparison of Iceland scallop size distributions obtained from a camera based autonomous underwater vehicle and dredge survey.

    Science.gov (United States)

    Singh, Warsha; Örnólfsdóttir, Erla B; Stefansson, Gunnar

    2014-01-01

    An approach is developed to estimate the size of Iceland scallop shells from AUV photos. A small-scale camera-based AUV survey of Iceland scallops was conducted at a defined site off West Iceland. Prior to height estimation of the identified shells, the distortions introduced by the vehicle orientation and the camera lens were corrected. The average AUV pitch and roll were 1.3 and 2.3 deg, respectively, which resulted in <2% error in ground distance, rendering these effects negligible. A quadratic polynomial model was identified for lens distortion correction. This model successfully predicted a theoretical grid from a frame photographed underwater, representing the inherent lens distortion. The predicted shell heights were scaled for the distance from the bottom at which the photos were taken. This approach was validated by height estimation of scallops of known sizes. An underestimation of approximately 0.5 cm was seen, which could be attributed to pixel error, where each pixel represented 0.24 x 0.27 cm. After correcting for this difference, the estimated heights ranged from 3.8-9.3 cm. A comparison with the height distribution from a small-scale dredge survey carried out in the vicinity showed non-overlapping peaks in size distribution, with scallops of a broader size range visible in the AUV survey. Further investigations are necessary to evaluate any underlying bias and to validate how representative these surveys are of the true population. The low-resolution images made identification of smaller scallops difficult. Overall, the observation of very few small scallops in both surveys could be attributed to low recruitment levels in recent years due to the known scallop parasite outbreak in the region.

  2. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    Computer-generated and video images are superimposed. The man-machine interface functions deal mainly with on-line building of graphic aids to improve perception, updating the geometric database of the robotic site, and video control of the robot. The superimposition of the real and virtual worlds is carried out through ...

  3. Lights, Camera, Action: Advancing Learning, Research, and Program Evaluation through Video Production in Educational Leadership Preparation

    Science.gov (United States)

    Friend, Jennifer; Militello, Matthew

    2015-01-01

    This article analyzes specific uses of digital video production in the field of educational leadership preparation, advancing a three-part framework that includes the use of video in (a) teaching and learning, (b) research methods, and (c) program evaluation and service to the profession. The first category within the framework examines videos…

  4. Measuring the Angular Velocity of a Propeller with Video Camera Using Electronic Rolling Shutter

    Directory of Open Access Journals (Sweden)

    Yipeng Zhao

    2018-01-01

    Full Text Available Noncontact measurement of rotational motion has advantages over the traditional method, which measures rotation by installing a device, such as a rotary encoder, on the object. Cameras can be employed as remote monitoring or inspection sensors to measure the angular velocity of a propeller because of their commonplace availability, simplicity, and potentially low cost. A drawback of camera-based measurement is the massive amount of data the cameras generate. To reduce the data collected from the camera, a camera using an ERS (electronic rolling shutter) is applied to measure angular velocities that are higher than the frame rate of the camera. The rolling shutter induces geometric distortion in the image when the propeller rotates while an image is being captured. To reveal the relationship between the angular velocity and the image distortion, a rotation model was established. The proposed method was applied to measure the angular velocities of a two-blade propeller and a multiblade propeller. The experimental results showed that this method could detect angular velocities higher than the camera's frame rate with acceptable accuracy.
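The rolling-shutter effect exploited here comes from sensor rows being exposed sequentially: the blade sweeps an extra angle while readout scans down the frame, so the measured skew encodes the rotation rate. A minimal sketch of that relationship (the per-row readout time is a sensor-specific assumption, not a value from the paper):

```python
def angular_velocity_from_skew(skew_rad: float, rows_spanned: int,
                               row_readout_s: float) -> float:
    """Rotation rate implied by rolling-shutter distortion.

    skew_rad      : extra angle the blade appears to sweep between the first
                    and last sensor row it crosses (measured in the image).
    rows_spanned  : number of sensor rows between those crossings.
    row_readout_s : time to read one sensor row (ERS line time).
    """
    return skew_rad / (rows_spanned * row_readout_s)
```

With a hypothetical 30 µs line time, a 0.6 rad skew across 400 rows implies 50 rad/s, resolvable even when the frame rate is far below the rotation rate.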

  5. Real-Time Range Sensing Video Camera for Human/Robot Interfacing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In comparison to stereovision, it is well known that structured-light illumination has distinct advantages including the use of only one camera, being significantly...

  6. Real-Time Range Sensing Video Camera for Human/Robot Interfacing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In comparison to stereovision, it is well known that structured-light illumination has distinct advantages including the use of only one camera, being significantly...

  7. Lights, camera, action…critique? Submit videos to AGU communications workshop

    Science.gov (United States)

    Viñas, Maria-José

    2011-08-01

    What does it take to create a science video that engages the audience and draws thousands of views on YouTube? Those interested in finding out should submit their research-related videos to AGU's Fall Meeting science film analysis workshop, led by oceanographer turned documentary director Randy Olson. Olson, writer-director of two films (Flock of Dodos: The Evolution-Intelligent Design Circus and Sizzle: A Global Warming Comedy) and author of the book Don't Be Such a Scientist: Talking Substance in an Age of Style, will provide constructive criticism on 10 selected video submissions, followed by moderated discussion with the audience. To submit your science video (5 minutes or shorter), post it on YouTube and send the link to the workshop coordinator, Maria-José Viñas (mjvinas@agu.org), with the following subject line: Video submission for Olson workshop. AGU will be accepting submissions from researchers and media officers of scientific institutions until 6:00 P.M. eastern time on Friday, 4 November. Those whose videos are selected to be screened will be notified by Friday, 18 November. All are welcome to attend the workshop at the Fall Meeting.

  8. Ground and aerial use of an infrared video camera with a mid-infrared filter (1.45 to 2.0 microns)

    Science.gov (United States)

    Everitt, J. H.; Escobar, D. E.; Nixon, P. R.; Hussey, M. A.; Blazquez, C. H.

    1986-01-01

    A black-and-white infrared (0.9 to 2.2 micron) video camera, filtered to record radiation within the 1.45 to 2.0 micron mid-infrared water absorption region, was evaluated in ground and aerial studies. Imagery of single leaves of seven plant species (four succulent; three nonsucculent) showed that succulent leaves were easily distinguishable from nonsucculent leaves. Spectrophotometric leaf reflectance measurements made over the 1.45 to 2.0 micron range confirmed the imagery results. Ground-based video recordings also showed that severely drought-stressed buffelgrass (Cenchrus ciliaris L.) plants were distinguishable from nonstressed and moderately stressed plants. Moreover, the camera provided airborne imagery that clearly differentiated between irrigated and nonirrigated grass plots. Due to the lower radiation intensity in the mid-infrared spectral region and the low sensitivity response of the camera's tube, these video images were not as sharp as those obtained by visible or visible/near-infrared sensitive video cameras. Nevertheless, these results showed that a video camera with mid-infrared sensitivity has potential for use in remote sensing research and applications.

  9. Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics

    Science.gov (United States)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-01-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allows one to measure, in a simple yet rigorous way, the speed of pulses…

  10. Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Science.gov (United States)

    Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...

  11. Evaluation of a 0.9- to 2.2-microns sensitive video camera with a mid-infrared filter (1.45- to 2.0-microns)

    Science.gov (United States)

    Everitt, J. H.; Escobar, D. E.; Nixon, P. R.; Blazquez, C. H.; Hussey, M. A.

    The application of 0.9- to 2.2-micron sensitive black-and-white IR video cameras to remote sensing is examined. Field and laboratory recordings of the upper and lower surfaces of peperomia leaves, succulent prickly pear, and buffelgrass are evaluated; the reflectance, phytomass, green weight, and water content of the samples were measured. The data reveal that 0.9- to 2.2-micron video cameras are effective tools for laboratory and field research; however, the resolution and image quality of the data are poor compared to visible and near-IR images.

  12. Real-time multi-camera video acquisition and processing platform for ADAS

    Science.gov (United States)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
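The radial correction stage can be sketched with a one-parameter division model. The real-time FPGA pre-processor presumably uses a fixed-point pipeline with calibrated coefficients, so this NumPy version (with an assumed coefficient k1 and image center) is only illustrative:

```python
import numpy as np

def undistort_points(pts, k1, center):
    """Correct radial distortion with a one-parameter division model:
    r_u = r_d / (1 + k1 * r_d**2). k1 and center come from calibration."""
    pts = np.asarray(pts, dtype=float) - center   # work relative to the center
    r2 = np.sum(pts**2, axis=1, keepdims=True)    # squared distorted radius
    return pts / (1.0 + k1 * r2) + center
```

A point at the image center maps to itself; the displacement of off-center points grows with r², which is the dominant term of fish-eye distortion. The sign and magnitude of k1 come from the camera calibration.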

  13. Forward rectification: spatial image normalization for a video from a forward facing vehicle camera

    Science.gov (United States)

    Prun, Viktor; Polevoy, Dmitri; Postnikov, Vassiliy

    2017-03-01

    The work in this paper is focused on visual ADAS (Advanced Driver Assistance Systems). We introduce forward rectification, a technique for making computer vision algorithms more robust against the camera mount point and mount angles. Using the technique can increase recognition quality as well as lower the dimensionality required for algorithm invariance, making it possible to apply simpler affine-invariant algorithms in applications that require projective invariance. To provide useful results, this rectification requires thorough calibration of the camera, which can be done automatically or semi-automatically. The technique is of a general nature and can be applied to different algorithms, such as pattern-matching detectors and convolutional neural networks. Its applicability is demonstrated by the detection rate of a HOG-based car detector.

  14. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices.

    Science.gov (United States)

    Zoletnik, S; Biedermann, C; Cseh, G; Kocsis, G; König, R; Szabolics, T; Szepesi, T

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and capabilities were explored in the real environment. Data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. These latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.

  15. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    Science.gov (United States)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and capabilities were explored in the real environment. Data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. These latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.
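The on-camera evaluation EDICAM performs (minimum/maximum, mean comparison to levels over a Region of Interest) can be mimicked host-side in a few lines. This is a sketch of the idea only, not the camera's firmware interface:

```python
import numpy as np

def evaluate_roi(frame, roi, low=None, high=None):
    """Min/max/mean of a region of interest plus a threshold flag.

    roi = (row0, row1, col0, col1), half-open ranges into the frame.
    low/high are optional mean thresholds that set the trigger flag.
    """
    r0, r1, c0, c1 = roi
    patch = np.asarray(frame)[r0:r1, c0:c1]
    stats = {"min": patch.min(), "max": patch.max(), "mean": patch.mean()}
    stats["triggered"] = ((low is not None and stats["mean"] < low) or
                          (high is not None and stats["mean"] > high))
    return stats
```

In the real device such a result can dynamically change the readout process or raise an output signal; here it simply returns a flag.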

  16. A Refrigerated Web Camera for Photogrammetric Video Measurement inside Biomass Boilers and Combustion Analysis

    Directory of Open Access Journals (Sweden)

    Enrique Granada

    2011-01-01

    Full Text Available This paper describes a prototype instrumentation system for photogrammetric measuring of bed and ash layers, as well as for flying particle detection and pursuit, using a single device (CCD) web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes.

  17. A refrigerated web camera for photogrammetric video measurement inside biomass boilers and combustion analysis.

    Science.gov (United States)

    Porteiro, Jacobo; Riveiro, Belén; Granada, Enrique; Armesto, Julia; Eguía, Pablo; Collazo, Joaquín

    2011-01-01

    This paper describes a prototype instrumentation system for photogrammetric measuring of bed and ash layers, as well as for flying particle detection and pursuit using a single device (CCD) web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes.

  18. Using high speed smartphone cameras and video analysis techniques to teach mechanical wave physics

    Science.gov (United States)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-07-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allows one to measure, in a simple yet rigorous way, the speed of pulses along a spring and the period of transverse standing waves generated in the same spring. These experiments can be helpful in addressing several relevant concepts about the physics of mechanical waves and in overcoming some of the typical student misconceptions in this same field.
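The pulse-speed measurement described above reduces to dividing the distance travelled along the spring by the elapsed time recovered from frame indices and the slow-motion frame rate (hypothetical helper name):

```python
def pulse_speed(distance_m: float, start_frame: int, end_frame: int,
                fps: float) -> float:
    """Speed of a pulse from two slow-motion frame indices."""
    return distance_m / ((end_frame - start_frame) / fps)
```

For example, a pulse covering 2 m in 60 frames of 240 fps footage travels at 8 m/s; the same frame counting gives the period of a standing wave from successive extrema.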

  19. Social interactions of juvenile brown boobies at sea as observed with animal-borne video cameras.

    Directory of Open Access Journals (Sweden)

    Ken Yoda

    Full Text Available While social interactions play a crucial role in the development of young individuals, those of highly mobile juvenile birds in inaccessible environments are difficult to observe. In this study, we deployed miniaturised video recorders on juvenile brown boobies Sula leucogaster, which had been hand-fed beginning a few days after hatching, to examine how social interactions between tagged juveniles and other birds affected their flight and foraging behaviour. Juveniles flew longer with congeners, especially with adult birds, than solitarily. In addition, approximately 40% of foraging occurred close to aggregations of congeners and other species. Young seabirds voluntarily followed other birds, which may directly enhance their foraging success, improve their foraging and flying skills during the developmental stage, or both.

  20. A simple, inexpensive video camera setup for the study of avian nest activity

    Science.gov (United States)

    Sabine, J.B.; Meyers, J.M.; Schweitzer, Sara H.

    2005-01-01

    Time-lapse video photography has become a valuable tool for collecting data on avian nest activity and depredation; however, commercially available systems are expensive (>USA $4000/unit). We designed an inexpensive system to identify causes of nest failure of American Oystercatchers (Haematopus palliatus) and assessed its utility at Cumberland Island National Seashore, Georgia. We successfully identified raccoon (Procyon lotor), bobcat (Lynx rufus), American Crow (Corvus brachyrhynchos), and ghost crab (Ocypode quadrata) predation on oystercatcher nests. Other detected causes of nest failure included tidal overwash, horse trampling, abandonment, and human destruction. System failure rates were comparable with commercially available units. Our system's efficacy and low cost (<$800) provided useful data for the management and conservation of the American Oystercatcher.

  1. Jellyfish support high energy intake of leatherback sea turtles (Dermochelys coriacea): video evidence from animal-borne cameras.

    Directory of Open Access Journals (Sweden)

    Susan G Heaslip

    Full Text Available The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long distance migrations (1000s of km) from nesting beaches to high latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate) correlate with the daytime foraging behavior of leatherbacks (n = 19) in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08-3:38 h), and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata) was the dominant prey (83-100%), but moon jellyfish (Aurelia aurita) were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models). Handling time increased with prey size regardless of prey species (p = 0.0001). Estimates of energy intake averaged 66,018 kJ d⁻¹ but were as high as 167,797 kJ d⁻¹, corresponding to turtles consuming an average of 330 kg wet mass d⁻¹ (up to 840 kg d⁻¹), or approximately 261 (up to 664) jellyfish d⁻¹. Assuming our turtles averaged 455 kg body mass, they consumed an average of 73% of their body mass d⁻¹, equating to an average energy intake of 3-7 times their daily metabolic requirements, depending on the estimates used. This study provides evidence that feeding tactics used by leatherbacks in Atlantic Canadian waters are highly profitable and our results are consistent with estimates of mass gain prior to
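The intake arithmetic in the abstract can be checked directly: 330 kg of jellyfish per day against a 455 kg turtle is about 73% of body mass per day. The helper name is hypothetical:

```python
def daily_intake_fraction(wet_mass_kg_per_day: float,
                          body_mass_kg: float) -> float:
    """Fraction of body mass consumed per day."""
    return wet_mass_kg_per_day / body_mass_kg
```

330/455 ≈ 0.73, matching the 73% figure quoted above.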

  2. Localization of viewpoint of a video camera in a partially modeled universe

    International Nuclear Information System (INIS)

    Awanzino, C.

    2000-01-01

    Interventions in reprocessing cells in nuclear plants are performed by tele-operated robots. These reprocessing cells essentially consist of repetitive structures of similar pipes. In addition, the pipes in the cell are metallic, so illuminating them with a light source produces areas of high light intensity, called highlights. Highlights often cause image processing failures, which lead to image misinterpretation, and it is therefore very difficult for the operator to find his bearings. Our work aims at providing a system able to localize the robot inside the cell at any time to help the operator. A database of the cell is provided, but this database may be incomplete or imprecise. We first proposed a polarization-based system, which exploits highlights to extract the axes of the pipes by discriminating the scene from the background; when highlights are missing, however, the process may fail. We then proposed a localization method using a correlation-based assignment process. The robot is localized by minimizing a twofold criterion: the first part expresses a good projection of the textured model into the image, and the second expresses the fact that the system composed of the scene and two successive images must satisfy the epipolar constraint. The criterion is symmetric in time so that the localization process is not perturbed by previous localization errors; the method calls the previous localization into question, in relation to the new image, to best localize the new camera attitude. Experiments are presented to validate the method, but more general ones remain to be performed. (author) [fr]

  3. Head-camera video recordings of trauma core competency procedures can evaluate surgical resident's technical performance as well as colocated evaluators.

    Science.gov (United States)

    Mackenzie, Colin F; Pasley, Jason; Garofalo, Evan; Shackelford, Stacy; Chen, Hegang; Longinaker, Nyaradzo; Granite, Guinevere; Pugh, Kristy; Hagegeorge, George; Tisherman, Samuel A

    2017-07-01

    Unbiased evaluation of trauma core competency procedures is necessary to determine if residency and predeployment training courses are useful. We tested whether a previously validated individual procedure score (IPS) for vascular exposure and fasciotomy (FAS) performance skills could discriminate training status by comparing the IPS of evaluators colocated with surgeons to blind video evaluations. Performance of axillary artery (AA), brachial artery (BA), and femoral artery (FA) vascular exposures and lower extremity FAS on fresh cadavers by 40 PGY-2 to PGY-6 residents was video-recorded from head-mounted cameras. Two colocated trained evaluators assessed IPS before and after training. One surgeon in each pretraining tertile of IPS for each procedure was randomly identified for blind video review. The same 12 surgeons were video-recorded repeating the procedures less than 4 weeks after training. Five evaluators independently reviewed all 96 randomly arranged deidentified videos. Inter-rater reliability/consistency and intraclass correlation coefficients were compared between colocated and video review of IPS, as were errors. Study methodology and bias were judged by Medical Education Research Study Quality Instrument and Quality Assessment of Diagnostic Accuracy Studies criteria. There were no differences (p ≥ 0.5) in IPS for AA, FA, or FAS, whether evaluators were colocated or reviewed video recordings. Evaluator consistency ranged from 0.29 (BA) to 0.77 (FA). Video and colocated evaluators were in total agreement (p = 1.0) on error recognition. The intraclass correlation coefficient was 0.73 to 0.92, dependent on procedure. Correlations between video and colocated evaluations were 0.5 to 0.9. Except for BA, blinded video evaluators discriminated training status. Medical Education Research Study Quality Instrument criteria scored 15.5/19, and Quality Assessment of Diagnostic Accuracy Studies 2 showed low bias risk. 
Video evaluations of AA, FA, and FAS procedures with IPS are unbiased, valid, and

  4. Observation of the dynamic movement of fragmentations by high-speed camera and high-speed video

    Science.gov (United States)

    Suk, Chul-Gi; Ogata, Yuji; Wada, Yuji; Katsuyama, Kunihisa

    1995-05-01

    Blasting experiments using mortar concrete blocks and model concrete columns were carried out in order to obtain technical information on fragmentation caused by blasting demolition. The mortar concrete blocks measured 1,000 X 1,000 X 1,000 mm. Six kinds of experimental blastings were carried out on them; precision detonators and No. 6 electric detonators with 10 cm detonating fuse were used, and the control of fragmentation was discussed. The experiments made clear that the flying distance of fragments can be controlled using a precise blasting system. Reinforced concrete model columns typical of apartment houses in Japan were also tested. Each test column measured 800 X 800 X 2,400 mm and was buried 400 mm in the ground; the specified design strength of the concrete was 210 kgf/cm2. The columns were demolished by blasting with an internal loading of dynamite. The fragmentation was observed by two high-speed cameras at 500 and 2,000 FPS and a high-speed video camera at 400 FPS. Among the results, the velocity of fragments blasted with 330 g of explosive at a minimum resisting length of 0.32 m was measured at about 40 m/s.
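    Recovering fragment velocity from high-speed footage, as reported in the abstract above, reduces to scaling inter-frame pixel displacement by the spatial calibration and the frame rate. A minimal sketch; the function name and the calibration numbers are illustrative assumptions, not values from the study.

    ```python
    def fragment_velocity(px_displacement, scale_m_per_px, fps):
        """Velocity (m/s) of a fragment tracked between consecutive frames.

        px_displacement: fragment movement between two frames, in pixels
        scale_m_per_px: spatial calibration from a reference object in the scene
        fps: camera frame rate (e.g. 500 or 2000 for high-speed cameras)
        """
        return px_displacement * scale_m_per_px * fps

    # Assumed example: at 2000 FPS with 2 mm/pixel, a 10-pixel jump per frame
    # corresponds to 40 m/s.
    v = fragment_velocity(10, 0.002, 2000)
    ```

    The higher the frame rate, the smaller the per-frame displacement for a given velocity, which is why the 2,000 FPS camera is better suited to fast fragments.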

  5. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video-content-analysis tasks in large-scale ad-hoc networks

    NARCIS (Netherlands)

    Hollander R.J.M. den; Bouma, H.; Rest, J.H.C. van; Hove, J.M. ten; Haar, F.B. ter; Burghouts, G.J.

    2017-01-01

    Video analytics is essential for managing the large quantities of raw data produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and to changes in the optical chain, so a VSS

  6. An Underwater Color Image Quality Evaluation Metric.

    Science.gov (United States)

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially across different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of underwater image pixels in the CIELab color space, related to subjective evaluation, indicates that sharpness and colorfulness correlate well with subjective image quality perception. Based on these findings, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation for similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show good correlation between the UCIQE and the subjective mean opinion score.
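    The abstract describes UCIQE as a linear combination of chroma, saturation, and contrast statistics in CIELab space. A minimal sketch of that idea follows; the sRGB-to-Lab conversion is the standard one, the coefficients are those commonly quoted for UCIQE, and the per-pixel saturation definition is our assumption for this sketch, not necessarily the paper's exact formulation.

    ```python
    import numpy as np

    def rgb_to_lab(rgb):
        # Standard sRGB in [0, 1] -> CIELAB (D65 white point)
        rgb = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
        M = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
        xyz = rgb @ M.T / np.array([0.95047, 1.0, 1.08883])
        f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
        L = 116.0 * f[..., 1] - 16.0
        a = 500.0 * (f[..., 0] - f[..., 1])
        b = 200.0 * (f[..., 1] - f[..., 2])
        return L, a, b

    def uciqe(rgb_image, c=(0.4680, 0.2745, 0.2576)):
        # Weighted sum of chroma std, luminance contrast, and mean saturation.
        # The saturation term (chroma relative to chroma + lightness) is an
        # assumption made to keep this sketch self-contained.
        L, a, b = rgb_to_lab(rgb_image.astype(float) / 255.0)
        chroma = np.hypot(a, b)
        contrast = np.quantile(L, 0.99) - np.quantile(L, 0.01)  # top vs bottom 1%
        saturation = chroma / np.maximum(np.hypot(chroma, L), 1e-6)
        return c[0] * chroma.std() + c[1] * contrast + c[2] * saturation.mean()
    ```

    On a colorless, low-contrast frame all three terms collapse toward zero, which is exactly the degradation pattern (color cast, blurring, low contrast) the metric is meant to penalize.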

  7. Underwater robots

    CERN Document Server

    Antonelli, Gianluca

    2014-01-01

    This book, now in its third edition, addresses the main control aspects of underwater manipulation tasks. The mathematical model with significant impact on the control strategy is discussed. The problem of controlling a six-degree-of-freedom autonomous underwater vehicle is investigated in depth, and a survey of fault-detection/fault-tolerant strategies for unmanned underwater vehicles is provided. Inverse kinematics, dynamic and interaction control for underwater vehicle-manipulator systems are then discussed. The code used to generate most of the numerical simulations is made available and briefly discussed.

  8. The Modular Optical Underwater Survey System

    Directory of Open Access Journals (Sweden)

    Ruhul Amin

    2017-10-01

    Full Text Available The Pacific Islands Fisheries Science Center deploys the Modular Optical Underwater Survey System (MOUSS to estimate the species-specific, size-structured abundance of commercially-important fish species in Hawaii and the Pacific Islands. The MOUSS is an autonomous stereo-video camera system designed for the in situ visual sampling of fish assemblages. This system is rated to 500 m and its low-light, stereo-video cameras enable identification, counting, and sizing of individuals at a range of 0.5–10 m. The modular nature of MOUSS allows for the efficient and cost-effective use of various imaging sensors, power systems, and deployment platforms. The MOUSS is in use for surveys in Hawaii, the Gulf of Mexico, and Southern California. In Hawaiian waters, the system can effectively identify individuals to a depth of 250 m using only ambient light. In this paper, we describe the MOUSS’s application in fisheries research, including the design, calibration, analysis techniques, and deployment mechanism.

  9. Complementarity of rotating video and underwater visual census for assessing species richness, frequency and density of reef fish on coral reef slopes.

    Directory of Open Access Journals (Sweden)

    Delphine Mallet

    Full Text Available Estimating the diversity and abundance of fish species is fundamental for understanding the community structure and dynamics of coral reefs. When designing a sampling protocol, one crucial step is the choice of the most suitable sampling technique, which is a compromise between the questions addressed, the available means, and the precision required. The objective of this study is to compare the ability of two techniques based on the same stationary point count method to sample reef fish communities at the same locations: Underwater Visual Census (UVC) and rotating video (STAVIRO). UVC and STAVIRO observations were carried out on the exact same 26 points on the reef slope of an intermediate reef and the associated inner barrier reefs. STAVIRO systems were always deployed 30 min to 1 hour after UVC and set at exactly the same place. Our study shows that: (i) fish community observations by UVC and STAVIRO differed significantly; (ii) species richness and density of large species were not significantly different between techniques; (iii) species richness and density of small species were higher for UVC; (iv) density of fished species was higher for STAVIRO; and (v) only UVC detected significant differences in fish assemblage structure across reef types at the spatial scale studied. We recommend that the two techniques be used in a complementary way to survey a large area within a short period of time. UVC may census reef fish within complex habitats or in very shallow areas such as reef flats, whereas STAVIRO enables carrying out a large number of stations focused on large and diver-averse species, particularly in areas not covered by UVC due to time and depth constraints. This methodology would considerably increase the spatial coverage and replication level of fish monitoring surveys.

  10. Google™ underwater

    Science.gov (United States)

    Showstack, Randy

    2012-10-01

    The first underwater panoramic images were added to Google Maps™, the company announced on 25 September. This first “underwater Street View collection,” launched in partnership with the Catlin Seaview Survey, provides people with the opportunity to “become the next virtual Jacques Cousteau.” For more information, see: maps.google.com/ocean.

  11. Underwater laser detection system

    Science.gov (United States)

    Gomaa, Walid; El-Sherif, Ashraf F.; El-Sharkawy, Yasser H.

    2015-02-01

    The conventional method used to detect an underwater target is to send and receive some form of acoustic energy, but acoustic systems are limited in range resolution and accuracy. The potential benefits of laser-based underwater target detection include high directionality, fast response, and high range accuracy. Lasers operating in the blue-green region of the light spectrum (420-570 nm) have several applications in the detection and ranging of submersible targets, owing to minimal attenuation through water (less than 0.1 m-1) and maximum laser reflection from the intended target (such as mines or submarines), providing a long detection range. In this paper, laser attenuation in water was measured experimentally by a new, simple method using a high-resolution spectrometer. The laser echoes from different targets (metal, plastic, wood, and rubber) were detected using a high-resolution CCD camera; the position of the detection camera was optimized to capture a strong laser reflection from the target with low backscattering noise from the water medium. Digital image processing techniques were applied to detect the echoes from the metal target and discriminate them from the echoes of other objects. Extraction of the target image from the scattering noise is achieved by background subtraction and edge detection techniques. In conclusion, we present a fast-response laser imaging system to detect and discriminate small, mine-like underwater targets.
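    The final processing steps named in the abstract, background subtraction followed by edge detection, can be sketched generically. The threshold, array shapes, and the choice of Sobel kernels below are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def extract_echo(frame, background, diff_thresh=30):
        """Isolate a laser echo by background subtraction, then outline it
        with a simple Sobel edge detector (a generic stand-in for the paper's
        unspecified chain).  frame/background: 2-D uint8 grayscale arrays."""
        diff = np.abs(frame.astype(int) - background.astype(int))
        mask = (diff > diff_thresh).astype(float)  # suppress water backscatter

        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T

        def conv2(img, k):
            # 3x3 correlation, valid region only (1-pixel border left at zero)
            out = np.zeros_like(img)
            h, w = img.shape
            for i in range(3):
                for j in range(3):
                    out[1:-1, 1:-1] += k[i, j] * img[i:i + h - 2, j:j + w - 2]
            return out

        gx, gy = conv2(mask, kx), conv2(mask, ky)
        edges = np.hypot(gx, gy) > 0
        return mask.astype(bool), edges
    ```

    In practice the background frame would be captured with the laser off, so that everything surviving the subtraction is echo rather than ambient scattering.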

  12. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon, and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  13. Underwater Vehicle

    National Research Council Canada - National Science Library

    Dick, James L

    2007-01-01

    There is thus provided an underwater vehicle having facility for maneuvering alongside a retrieving vehicle, as by manipulation of bow and stern planes, for engaging a hull surface of the retrieving...

  14. Initial evaluation of prospective cardiac triggering using photoplethysmography signals recorded with a video camera compared to pulse oximetry and electrocardiography at 7T MRI.

    Science.gov (United States)

    Spicher, Nicolai; Kukuk, Markus; Maderwald, Stefan; Ladd, Mark E

    2016-11-24

    Accurate synchronization between magnetic resonance imaging data acquisition and a subject's cardiac activity ("triggering") is essential for reducing image artifacts, but conventional, contact-based methods for this task are limited by several factors, including preparation time, patient inconvenience, and susceptibility to signal degradation. The purpose of this work is to evaluate the performance of a new contact-free triggering method developed with the aim of eventually replacing conventional methods in non-cardiac imaging applications. In this study, the method's performance is evaluated in the context of 7 Tesla non-enhanced angiography of the lower extremities. Our main contribution is a basic algorithm capable of estimating, in real time, the phase of the cardiac cycle from reflection photoplethysmography signals obtained from skin color variations of the forehead recorded with a video camera. Instead of finding the algorithm's parameters heuristically, they were optimized using videos of the forehead as well as electrocardiography and pulse oximetry signals that were recorded from eight healthy volunteers in and outside the scanner, with and without active radio frequency and gradient coils. Based on the video characteristics, synthetic signals were generated and the "best available" values of an objective function were determined using mathematical optimization. The performance of the proposed method with optimized algorithm parameters was evaluated by applying it to the recorded videos and comparing the computed triggers to those of contact-based methods. Additionally, the method was evaluated by using its triggers to acquire images from a healthy volunteer and comparing the result to images obtained using pulse oximetry triggering. 
During evaluation of the videos recorded inside the bore with active radio frequency and gradient coils, the pulse oximeter triggers were labeled as "potentially usable" for cardiac triggering in 62.5% of cases; the electrocardiography

  15. Autonomous underwater handling system for service, measurement and cutting tasks for the decommissioning of nuclear facilities

    International Nuclear Information System (INIS)

    Hahn, M.; Haferkamp, H.; Bach, W.; Rose, N.

    1992-01-01

    For about 10 years, the Institute for Material Science at Hanover University has worked on projects of underwater cutting and welding. Increasing tasks to be done in nuclear facilities led to the development of special handling systems to support and position the cutting tools; sensors and computers were also integrated for extensive and complex tasks. A small, free-diving handling system, equipped with two video cameras, ultrasonic and radiation sensors, and a plasma cutting torch, for inspection and decommissioning tasks in nuclear facilities is described in this paper. (Author)

  16. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Science.gov (United States)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

    Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video of exposed human skin to extract the heart rate (HR) from the skin areas. Being non-contact, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart-rate variability, with applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given the sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size, and averaging techniques applied to regions of interest, as well as the number of video frames used for data processing.
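    A minimal illustration of the VPPG idea (not either of the paper's two algorithms): spatially average the green channel, whose tiny brightness oscillation tracks blood volume, remove the DC component, and take the dominant FFT frequency within a plausible heart-rate band. The band limits and frame-array layout are assumptions.

    ```python
    import numpy as np

    def vppg_heart_rate(frames, fps):
        """Estimate heart rate (beats/min) from an RGB skin video.

        frames: array of shape (n_frames, H, W, 3), channels in RGB order.
        The 0.7-3 Hz band corresponds to roughly 42-180 bpm.
        """
        g = frames[..., 1].reshape(frames.shape[0], -1).mean(axis=1)
        g = g - g.mean()                          # detrend (remove DC offset)
        spectrum = np.abs(np.fft.rfft(g))
        freqs = np.fft.rfftfreq(len(g), d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 3.0)    # plausible heart-rate band
        return 60.0 * freqs[band][np.argmax(spectrum[band])]
    ```

    This sketch also makes the abstract's sensitivity findings concrete: the number of frames sets the frequency resolution of the FFT, and motion or lighting changes add energy that can mask the true spectral peak.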

  17. What Does the Camera Communicate? An Inquiry into the Politics and Possibilities of Video Research on Learning

    Science.gov (United States)

    Vossoughi, Shirin; Escudé, Meg

    2016-01-01

    This piece explores the politics and possibilities of video research on learning in educational settings. The authors (a research-practice team) argue that changing the stance of inquiry from "surveillance" to "relationship" is an ongoing and contingent practice that involves pedagogical, political, and ethical choices on the…

  18. Changes are detected - cameras and video systems are monitoring the plant site, only rarely giving false alarm

    International Nuclear Information System (INIS)

    Zeissler, H.

    1988-01-01

    The main purpose of automatic data acquisition and processing for monitoring is to relieve security personnel from monotonous observation tasks. The novel video systems can be programmed to raise an alarm on moving targets, or to accept alarm-suppressing image changes. This allows intelligent alarm evaluation for physical protection in industry, differentiating between real and false alarms. (orig.) [de

  19. The Effect of Smartphone Video Camera as a Tool to Create Digital Stories for English Learning Purposes

    Science.gov (United States)

    Gromik, Nicolas A.

    2015-01-01

    The integration of smartphones in the language learning environment is gaining research interest. However, using a smartphone to learn to speak spontaneously has received little attention. The emergence of smartphone technology and its video recording feature are recognised as suitable learning tools. This paper reports on a case study conducted…

  20. Use of an unmanned aerial vehicle-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Science.gov (United States)

    We determined the feasibility of using unmanned aerial vehicle (UAV) video monitoring to predict intake of discrete food items of rangeland-raised Raramuri Criollo non-nursing beef cows. Thirty-five cows were released into a 405-m2 rectangular dry lot, either in pairs (pilot tests) or individually (...

  1. Video digitizer (real time-frame grabber) with region of interest suitable for quantitative data analysis used on the infrared and H alpha cameras installed on the DIII-D experiment

    International Nuclear Information System (INIS)

    Ferguson, S.W.; Kevan, D.K.; Hill, D.N.; Allen, S.L.

    1987-01-01

    This paper describes a CAMAC based video digitizer with region of interest (ROI) capability that was designed for use with the infrared and H alpha cameras installed by Lawrence Livermore Laboratory on the DIII-D experiment at G.A. Technologies in San Diego, California. The video digitizer uses a custom built CAMAC video synchronizer module to clock data into a CAMAC transient recorder on a line-by-line basis starting at the beginning of a field. The number of fields that are recorded is limited only by the available transient-recorder memory. In order to conserve memory, the CAMAC video synchronizer module provides for the alternative selection of a specific region of interest in each successive field to be recorded. Memory conservation can be optimized by specifying the lines in the field, start time, stop time, and the number of data samples per line. This video frame grabber has proved versatile for capturing video in such diverse applications as recording video fields from a video tape recorder played in slow motion or recording video fields in real time during a DIII-D shot. In other cases, one or more lines of video are recorded per frame to give a cross-sectional slice of the plasma. Since all the data in the digitizer memory are synchronized to video fields and lines, the data can be read directly into the control computer in the proper matrix format to facilitate rapid processing, display, and permanent storage.
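    The memory-conservation argument above is simple arithmetic: fewer lines and samples per field mean proportionally less transient-recorder memory. The function and the NTSC-like numbers below are hypothetical illustrations, not the DIII-D configuration.

    ```python
    def roi_memory_bytes(lines_per_field, samples_per_line, fields,
                         bytes_per_sample=1):
        """Transient-recorder memory consumed by a region-of-interest capture."""
        return lines_per_field * samples_per_line * fields * bytes_per_sample

    # Assumed example: a full 240-line, 512-sample field versus a
    # 32-line, 256-sample ROI, both recorded for 60 fields (~1 s of video).
    full = roi_memory_bytes(240, 512, 60)   # 7,372,800 bytes
    roi = roi_memory_bytes(32, 256, 60)     #   491,520 bytes, 15x less
    ```

    The same budget can instead buy a proportionally longer recording at the reduced ROI, which is the trade-off the synchronizer module exposes.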

  2. Insights into the Underwater Diving, Feeding, and Calling Behavior of Blue Whales from a Suction-Cup-Attached Video-Imaging Tag (CRITTERCAM)

    Science.gov (United States)

    2008-01-01

    Diane Gendron (Centro Interdisciplinario de Ciencias Marinas); Kelly Robertson (Southwest Fisheries Science Center, NMFS/NOAA). ...archival tags have begun to provide more details about underwater behaviors, including feeding and social behaviors (Goldbogen et al., 2006; Oleson et al.). ...solitary traveling males, while intermittent callers are sometimes associated with other whales (Oleson et al., 2007a). While the social interactions of...

  3. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  4. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  5. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  6. Surgeon-Manipulated Live Surgery Video Recording Apparatuses: Personal Experience and Review of Literature.

    Science.gov (United States)

    Kapi, Emin

    2017-06-01

    Visual recording of surgical procedures is a method used quite frequently in plastic surgery practice. While presentations containing photographs are quite common in education seminars and congresses, video-containing presentations find more favour. For this reason, the presentation of surgical procedures as real-time video has recently increased. Appropriate technical equipment for video recording is not available in most hospitals, so there is a need to set up external apparatus in the operating room. Options include head-mounted video cameras, chest-mounted cameras, and tripod-mountable cameras. The head-mounted video camera is an apparatus capable of capturing high-resolution, detailed close-up footage; the tripod-mountable camera enables video capture from a fixed point. Certain user-specific modifications can be made to overcome some of these restrictions, among which custom-made applications are one of the most effective solutions. The article presents the features of, and experience with, a combination of a head- or chest-mounted action camera, a custom-made portable tripod apparatus with versatile features, and an underwater camera. The described apparatuses are easy to assemble, quickly installed, and inexpensive, require no specific technical knowledge, and can be manipulated by the surgeon personally in all procedures. The author believes that in the near future video recording apparatuses will be integrated further into the operating room, become standard practice, and become more amenable to self-manipulation by the surgeon. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  7. Automated Video Quality Assessment for Deep-Sea Video

    Science.gov (United States)

    Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.

    2015-12-01

    Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: Single, directional light sources produce uneven luminance distributions and shadows; remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: Turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source and can also influence the camera's autofocus and auto white-balance routines. 3. Color distortion (low contrast): The rate of absorption of light in water varies by wavelength, and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating
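    A quality gate of the kind described, run before full analysis, can be as simple as thresholding per-frame statistics. The two gauges below (global contrast and illumination uniformity across quadrants, aimed at the non-uniform-lighting and low-contrast problems) and their thresholds are illustrative assumptions, not ONC's measures.

    ```python
    import numpy as np

    def frame_quality(gray):
        """Quality gauges for a grayscale frame with values in [0, 255]:
        global contrast (intensity std) and illumination uniformity
        (ratio of darkest to brightest quadrant mean)."""
        gray = gray.astype(float)
        contrast = gray.std()
        h, w = gray.shape
        quads = [gray[:h // 2, :w // 2], gray[:h // 2, w // 2:],
                 gray[h // 2:, :w // 2], gray[h // 2:, w // 2:]]
        means = [q.mean() for q in quads]
        uniformity = min(means) / max(max(means), 1e-6)
        return contrast, uniformity

    def usable(gray, min_contrast=10.0, min_uniformity=0.3):
        # Thresholds would be tuned per camera and lighting rig.
        c, u = frame_quality(gray)
        return c >= min_contrast and u >= min_uniformity
    ```

    Frames failing the gate can be skipped or routed to enhancement, keeping the downstream detectors from producing confidently wrong results on unusable footage.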

  8. Body worn camera

    Science.gov (United States)

    Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.

    2017-11-01

    A body worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence from crime scenes. It helps prevent and resolve complaints brought by members of the public, and strengthens police transparency, performance, and accountability. The main constraints of this type of system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format at 1080p resolution and 30 frames per second. One more important aspect to consider while designing such a system is the amount of power it requires, as battery management becomes very critical. The main design challenges are the size of the video; audio for the video; combining audio and video and saving them in .mp4 format; the battery size required for 8 hours of continuous recording; and security. For prototyping, this system is implemented using a Raspberry Pi Model B.

  9. Ladder beam and camera video recording system for evaluating forelimb and hindlimb deficits after sensorimotor cortex injury in rats.

    Science.gov (United States)

    Soblosky, J S; Colgin, L L; Chorney-Lane, D; Davidson, J F; Carey, M E

    1997-12-30

    Hindlimb and forelimb deficits in rats caused by sensorimotor cortex lesions are frequently tested using the narrow flat beam (hindlimb), the narrow pegged beam (hindlimb and forelimb) or the grid-walking (forelimb) tests. Although these are excellent tests, the narrow flat beam generates non-parametric data, precluding the use of more powerful parametric statistical analyses. All these tests can be difficult to score if the rat is moving rapidly. Foot misplacements, especially on the grid-walking test, are indicative of an ongoing deficit, but have not previously been reliably and accurately described and quantified. In this paper we present an easy-to-construct and easy-to-use horizontal ladder beam with a camera system on rails which can be used to evaluate both hindlimb and forelimb deficits in a single test. By slow-motion videotape playback we were able to quantify and demonstrate foot misplacements which persist beyond the recovery period usually seen using more conventional measures (i.e. footslips and footfaults). This convenient system provides a rapid and reliable method for recording and evaluating rat performance on any type of beam and may be useful for measuring sensorimotor recovery following brain injury.

  10. Underwater manipulator

    Science.gov (United States)

    Schrum, P.B.; Cohen, G.H.

    1993-04-20

    Self-contained, waterproof, water-submersible, remote-controlled apparatus is described for manipulating a device, such as an ultrasonic transducer for measuring crack propagation on an underwater specimen undergoing shock testing. The subject manipulator includes metal bellows for transmittal of angular motions without the use of rotating shaft seals or O-rings. Inside the manipulator, a first stepper motor controls angular movement. In the preferred embodiment, the bellows permit the first stepper motor to move an ultrasonic transducer ±45 degrees in a first plane and a second bellows permits a second stepper motor to move the transducer ±10 degrees in a second plane orthogonal to the first. In addition, an XY motor-driven table provides XY motion.

  11. Underwater manipulator

    International Nuclear Information System (INIS)

    Schrum, P.B.; Cohen, G.H.

    1993-01-01

    Self-contained, waterproof, water-submersible, remote-controlled apparatus is described for manipulating a device, such as an ultrasonic transducer for measuring crack propagation on an underwater specimen undergoing shock testing. The subject manipulator includes metal bellows for transmittal of angular motions without the use of rotating shaft seals or O-rings. Inside the manipulator, a first stepper motor controls angular movement. In the preferred embodiment, the bellows permit the first stepper motor to move an ultrasonic transducer ±45 degrees in a first plane and a second bellows permits a second stepper motor to move the transducer ±10 degrees in a second plane orthogonal to the first. In addition, an XY motor-driven table provides XY motion.

  12. KW basin backwash pit sludge measurement/video

    International Nuclear Information System (INIS)

    Dodd, E.N. Jr.

    1994-01-01

    The purpose of this procedure is to gather visual and depth information and monitor underwater activities in the 105-KW SFBWP and transfer channel. Profile lighting (the use of lighting and shadows to show the surface contour) will be used to assess the contour of the sludge surface. Select measurements will also be taken to determine the actual sludge depth. The control/video station will be set up outside the radiation area or in the lowest possible exposure area to reduce personnel exposure (ALARA). This procedure provides a mechanism to assist in fully characterizing the volume and surface topology of the sludge currently deposited in the sandfilter backwash pit (SFBWP). Surveillance Systems Engineering (SSE) personnel will gather visual information using a closed-circuit television (CCTV) color camera mounted on stainless steel extension poles. Connections allow the camera to be fitted with a pan-and-tilt unit for better positioning capability and good landscape profiling of the sediment surface. The information will be videotaped in one-half inch NTSC or Y/C format. Underwater lighting will be provided by 500 watt underwater lamps.

  13. A COMPARISON BETWEEN ACTIVE AND PASSIVE TECHNIQUES FOR UNDERWATER 3D APPLICATIONS

    Directory of Open Access Journals (Sweden)

    G. Bianco

    2012-09-01

    Full Text Available In the field of 3D scanning, there is an increasing need for more accurate technologies to acquire 3D models of close-range objects. Underwater exploration, for example, is very hard to perform due to the hostile conditions and the bad visibility of the environment. Some application fields, like underwater archaeology, require recovering three-dimensional data of objects that cannot be moved from their site or touched, in order to avoid possible damage. Photogrammetry is widely used for underwater 3D acquisition, because it requires just one or two digital still or video cameras to acquire a sequence of images taken from different viewpoints. Stereo systems composed of a pair of cameras are often employed on underwater robots (i.e. ROVs, Remotely Operated Vehicles) and used by scuba divers, in order to survey archaeological sites, reconstruct complex 3D structures in the aquatic environment, estimate in situ the length of marine organisms, etc. Stereo 3D reconstruction is based on the triangulation of corresponding points in the two views. This requires finding common points in both images and matching them (the correspondence problem), determining a plane that contains the 3D point on the object. Another 3D technique, frequently used for in-air acquisition, solves this point-matching problem by projecting structured lighting patterns to codify the acquired scene; the corresponding points are identified by associating a binary code in both images. In this work we have tested and compared two whole-field 3D imaging techniques (active and passive) based on stereo vision, in an underwater environment. A 3D system has been designed, composed of a digital projector and two still cameras mounted in waterproof housings, so that it can perform the various acquisitions without changing the configuration of the optical devices. The tests were conducted in a water tank in different turbidity conditions, on objects with different surface properties.
In order to simulate a typical
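The triangulation of matched points from two views, on which stereo reconstruction rests, can be sketched with the standard linear (DLT) triangulation. This is a generic textbook illustration, not the authors' implementation; the camera matrices and point values in the usage below are made up:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel coordinates of the matched point in each view.
    Each image point contributes two linear constraints on the
    homogeneous 3D point X; the least-squares solution is the right
    singular vector with the smallest singular value.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Usage with a hypothetical calibrated stereo pair (0.2 m baseline):
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
```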

  14. Underwater Sound Reference Division

    Data.gov (United States)

    Federal Laboratory Consortium — The Underwater Sound Reference Division (USRD) serves as the U.S. standardizing activity in the area of underwater acoustic measurements, as the National Institute...

  15. Developing participatory research in radiology: the use of a graffiti wall, cameras and a video box in a Scottish radiology department

    International Nuclear Information System (INIS)

    Mathers, Sandra A.; Anderson, Helen; McDonald, Sheila; Chesson, Rosemary A.

    2010-01-01

    Participatory research is increasingly advocated for use in health and health services research and has been defined as a 'process of producing new knowledge by systematic enquiry, with the collaboration of those being studied'. The underlying philosophy of participatory research is that those recruited to studies are acknowledged as experts who are 'empowered to truly participate and have their voices heard'. Research methods should enable children to express themselves. This has led to the development of creative approaches of working with children that offer alternatives to, for instance, the structured questioning of children by researchers either through questionnaires or interviews. To examine the feasibility and potential of developing participatory methods in imaging research. We employed three innovative methods of data collection sequentially, namely the provision of: 1) a graffiti wall; 2) cameras, and 3) a video box for children's use. While the graffiti wall was open to all who attended the department, for the other two methods children were allocated to each 'arm' consecutively until our target of 20 children for each was met. The study demonstrated that it was feasible to use all three methods of data collection within the context of a busy radiology department. We encountered no complaints from staff, patients or parents. Children were willing to participate but we did not collect data to establish if they enjoyed the activities, were pleased to have the opportunity to make comments or whether anxieties about their treatment inhibited their participation. The data yield was disappointing. In particular, children's contributions to the graffiti wall were limited, but did reflect the nature of graffiti, and there may have been some 'copycat' comments. Although data analysis was relatively straightforward, given the nature of the data (short comments and simple drawings), the process proved to be extremely time-consuming. This was despite the modest

  16. Developing participatory research in radiology: the use of a graffiti wall, cameras and a video box in a Scottish radiology department.

    Science.gov (United States)

    Mathers, Sandra A; Anderson, Helen; McDonald, Sheila; Chesson, Rosemary A

    2010-03-01

    Participatory research is increasingly advocated for use in health and health services research and has been defined as a 'process of producing new knowledge by systematic enquiry, with the collaboration of those being studied'. The underlying philosophy of participatory research is that those recruited to studies are acknowledged as experts who are 'empowered to truly participate and have their voices heard'. Research methods should enable children to express themselves. This has led to the development of creative approaches of working with children that offer alternatives to, for instance, the structured questioning of children by researchers either through questionnaires or interviews. To examine the feasibility and potential of developing participatory methods in imaging research. We employed three innovative methods of data collection sequentially, namely the provision of: 1) a graffiti wall; 2) cameras, and 3) a video box for children's use. While the graffiti wall was open to all who attended the department, for the other two methods children were allocated to each 'arm' consecutively until our target of 20 children for each was met. The study demonstrated that it was feasible to use all three methods of data collection within the context of a busy radiology department. We encountered no complaints from staff, patients or parents. Children were willing to participate but we did not collect data to establish if they enjoyed the activities, were pleased to have the opportunity to make comments or whether anxieties about their treatment inhibited their participation. The data yield was disappointing. In particular, children's contributions to the graffiti wall were limited, but did reflect the nature of graffiti, and there may have been some 'copycat' comments. Although data analysis was relatively straightforward, given the nature of the data (short comments and simple drawings), the process proved to be extremely time-consuming. This was despite the modest

  17. The Camera-Based Assessment Survey System (C-BASS): A towed camera platform for reef fish abundance surveys and benthic habitat characterization in the Gulf of Mexico

    Science.gov (United States)

    Lembke, Chad; Grasty, Sarah; Silverman, Alex; Broadbent, Heather; Butcher, Steven; Murawski, Steven

    2017-12-01

    An ongoing challenge for fisheries management is to provide cost-effective and timely estimates of habitat-stratified fish densities. Traditional approaches use modified commercial fishing gear (such as trawls and baited hooks) that have biases in species selectivity and may also be inappropriate for deployment in some habitat types. Underwater visual and optical approaches offer the promise of more precise and less biased assessments of relative fish abundance, as well as direct estimates of absolute fish abundance. A number of video-based approaches have been developed, and the technology for data acquisition, calibration, and synthesis has been developing rapidly. Beginning in 2012, our group of engineers and researchers at the University of South Florida has been working towards the goal of completing large-scale, video-based surveys in the eastern Gulf of Mexico. This paper discusses design considerations and development of a towed camera system for collection of video-based data on commercially and recreationally important reef fishes and benthic habitat on the West Florida Shelf. Factors considered during development included potential habitat types to be assessed, sea-floor bathymetry, vessel support requirements, personnel requirements, and cost-effectiveness of system components. This regional-specific effort has resulted in a towed platform called the Camera-Based Assessment Survey System, or C-BASS, which has proven capable of surveying tens of kilometers of video transects per day and can provide cost-effective population estimates of reef fishes and coincident benthic habitat classification.

  18. Developing participatory research in radiology: the use of a graffiti wall, cameras and a video box in a Scottish radiology department

    Energy Technology Data Exchange (ETDEWEB)

    Mathers, Sandra A. [Aberdeen Royal Infirmary, Department of Radiology, Aberdeen (United Kingdom); The Robert Gordon University, Faculty of Health and Social Care, Aberdeen (United Kingdom); Anderson, Helen [Royal Aberdeen Children' s Hospital, Department of Radiology, Aberdeen (United Kingdom); McDonald, Sheila [Royal Aberdeen Children' s Hospital, Aberdeen (United Kingdom); Chesson, Rosemary A. [University of Aberdeen, School of Medicine and Dentistry, Aberdeen (United Kingdom)

    2010-03-15

    Participatory research is increasingly advocated for use in health and health services research and has been defined as a 'process of producing new knowledge by systematic enquiry, with the collaboration of those being studied'. The underlying philosophy of participatory research is that those recruited to studies are acknowledged as experts who are 'empowered to truly participate and have their voices heard'. Research methods should enable children to express themselves. This has led to the development of creative approaches of working with children that offer alternatives to, for instance, the structured questioning of children by researchers either through questionnaires or interviews. To examine the feasibility and potential of developing participatory methods in imaging research. We employed three innovative methods of data collection sequentially, namely the provision of: 1) a graffiti wall; 2) cameras, and 3) a video box for children's use. While the graffiti wall was open to all who attended the department, for the other two methods children were allocated to each 'arm' consecutively until our target of 20 children for each was met. The study demonstrated that it was feasible to use all three methods of data collection within the context of a busy radiology department. We encountered no complaints from staff, patients or parents. Children were willing to participate but we did not collect data to establish if they enjoyed the activities, were pleased to have the opportunity to make comments or whether anxieties about their treatment inhibited their participation. The data yield was disappointing. In particular, children's contributions to the graffiti wall were limited, but did reflect the nature of graffiti, and there may have been some 'copycat' comments. Although data analysis was relatively straightforward, given the nature of the data (short comments and simple drawings), the process proved to be

  19. Synchronous and Rhythmic Vocalizations and Correlated Underwater Behavior of Free-ranging Atlantic Spotted Dolphins (Stenella frontalis) and Bottlenose Dolphins (Tursiops truncatus) in the Bahamas

    Directory of Open Access Journals (Sweden)

    Denise L. Herzing

    2015-02-01

    Full Text Available Since 1985 a resident community of Atlantic spotted dolphins (Stenella frontalis) and bottlenose dolphins (Tursiops truncatus) has been studied underwater in the Bahamas. Over 200 individuals of both species have been identified and observed over the years. Basic correlations between sound patterns and behavior, such as whistles during contact/reunions and squawks during aggression, have been reported. This paper describes a small subset of their vocal repertoire that involves synchronous/rhythmic sound production. Dolphin behavior was recorded underwater using underwater video cameras with hydrophone input. Vocalizations were correlated with basic underwater behavioral activity and analyzed using Raven 1.3. Spotted dolphins were observed using two types of synchronized vocalizations, synchronized squawks (burst-pulsed vocalizations) and screams (overlapping FM whistles), during intraspecific and interspecific aggression. Bottlenose dolphins used three types of synchronized vocalizations, whistle/buzz bouts, bray/buzz bouts, and buzz bouts, during intraspecific aggression. Body postures were synchronous with physical movements and often mirrored the rhythm of the vocalizations. The intervals between highly synchronized vocalizations had small variance and created a rhythmic quality and cadence to the acoustic sequences. Three types of vocalizations had similar ratios of sound duration to the spacing between sounds (screams, whistle/buzz bouts, and bray/buzz bouts). Temporal aspects of sequences of sound and postures may be important aspects of individual and group coordination and behavior in delphinids.

  20. Underwater Geotechnical Foundations

    National Research Council Canada - National Science Library

    Lee, Landris

    2001-01-01

    This report provides an overview and description of the design and construction of underwater geotechnical foundations and offers preliminary guidance based on past and current technology applications...

  1. Evaluation of smart video for transit event detection : final report.

    Science.gov (United States)

    2009-06-01

    Transit agencies are increasingly using video cameras to fight crime and terrorism. As the volume of video data increases, the existing digital video surveillance systems provide the infrastructure only to capture, store and distribute video, while l...

  2. A Combined Radio and Underwater Wireless Optical Communication System based on Buoys

    Science.gov (United States)

    Song, Yuhang; Tong, Zheng; Cong, Bo; Yu, Xiangyu; Kong, Meiwei; Lin, Aobo

    2016-02-01

    We propose a system combining radio and underwater wireless optical communication based on buoys for real-time image and video transmission between underwater vehicles and the base station on the shore. We analyze how the BER performance is affected by the link distance and the deflection angle of the light source using Monte Carlo simulation.

  3. Underwater Scene Composition

    Science.gov (United States)

    Kim, Nanyoung

    2009-01-01

    In this article, the author describes an underwater scene composition for elementary-education majors. This project deals with watercolor with crayon or oil-pastel resist (medium); the beauty of nature represented by fish in the underwater scene (theme); texture and pattern (design elements); drawing simple forms (drawing skill); and composition…

  4. An underwater robot controls water tanks in nuclear power plants

    International Nuclear Information System (INIS)

    Lardiere, C.

    2015-01-01

    The enterprises Newton Research Labs and IHI Southwest Technologies have developed a robot equipped with sensors to inspect the inside walls (partially) and bottom of water tanks without having to empty them. The robot, called 'Inspector', is made up of 4 main components: a chassis with 4 independently steered wheels, a video camera system able to provide a 360 degree view, various non-destructive testing devices such as underwater laser scanners and automated ultrasound or eddy-current probes, and an operation system for both driving the robot and controlling the testing. The Inspector robot has been used to inspect the inside bottom of an operating condensate tank at the Palo Verde nuclear station. The robot was able to check all the welds joining the bottom plates and the welds between the walls and the bottom. The robot is also able to return to the exact place where a defect was detected during a previous inspection. (A.C.)

  5. An evaluation of deep-sea benthic megafauna length measurements obtained with laser and stereo camera methods

    Science.gov (United States)

    Dunlop, Katherine M.; Kuhnz, Linda A.; Ruhl, Henry A.; Huffard, Christine L.; Caress, David W.; Henthorn, Richard G.; Hobson, Brett W.; McGill, Paul; Smith, Kenneth L.

    2015-02-01

    The 25 year time-series collected at Station M, ~4000 m deep on the Monterey Deep-sea Fan, has substantially improved understanding of the role of the deep-ocean benthic environment in the global carbon cycle. However, the role of deep-ocean benthic megafauna in carbon bioturbation, remineralization and sequestration is relatively unknown. It is important to gather both accurate and precise measurements of megafaunal community abundance, size distribution and biomass to further define their role in deep-sea carbon cycling and possible sequestration. This study describes initial results from a stereo camera system attached to a remotely operated vehicle and analyzed using the EventMeasure photogrammetric measurement software to estimate the density, length and biomass of 10 species of mobile epibenthic megafauna. Stereo length estimates were compared to those from a single video camera system equipped with sizing lasers and analyzed using the Monterey Bay Aquarium Research Institute's Video Annotation and Reference System. Both camera systems and their software were capable of high measurement accuracy and precision for the megafauna species studied. The stereo image analysis process took substantially longer than the video analysis, and the value of the EventMeasure software tool would be improved with developments in analysis automation. The stereo system is less influenced by object orientation and height, and is potentially a useful tool to be mounted on an autonomous underwater vehicle and for measuring deep-sea pelagic animals where the use of lasers is not feasible.

  6. SEFIS Video Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is a fishery-independent survey that collects data on reef fish in southeast US waters using multiple gears, including chevron traps, video cameras, ROVs,...

  7. Long Wavelength Video-Based Event Detection, Preliminary Results from the CVNX and VS1 Test Series, Ex-USS SHADWELL, April 7-25, 2003

    National Research Council Canada - National Science Library

    Steinhurst, Daniel

    2003-01-01

    ... to flaming fires and other hot objects when compared to co-located regular video cameras. Video event detection with long wavelength cameras is discussed and compared with the results of video event detection systems using regular cameras...

  8. Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach

    Directory of Open Access Journals (Sweden)

    Kelly de Jesus

    2015-01-01

    Full Text Available This study assessed the accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above-water and 88 underwater control points, with 8 common points at the water surface, and 92 validation points) was positioned in a 25 m swimming pool and recorded with two surface and four underwater cameras. A planar homography estimate for each calibration plane was computed to perform image rectification. The direct linear transformation algorithm for 3D reconstruction was applied, using 1,600,000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers, respectively. Root Mean Square (RMS) error with homography of control and validation points was lower than without it for surface and underwater cameras (P ≤ 0.03). With homography, RMS errors of control and validation points were similar between surface and underwater cameras (P ≥ 0.47). Without homography, RMS error of control points was greater for underwater than surface cameras (P ≤ 0.04) and the opposite was observed for validation points (P ≤ 0.04). It is recommended that future studies using 3D reconstruction include homography to improve the accuracy of swimming movement analysis.
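The per-plane homography estimation used above for image rectification can be sketched with the standard direct linear transformation over point correspondences. This is a generic illustration, not the study's code; the homography and point coordinates in the test are invented:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 planar homography H mapping src to dst (DLT).

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    Each correspondence contributes two rows to a homogeneous linear
    system A h = 0; h is the right singular vector with the smallest
    singular value, reshaped to 3x3 and normalised so H[2, 2] = 1.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

Once a homography is estimated from the control points of a calibration plane, images can be rectified by warping pixel coordinates through H before applying the DLT reconstruction.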

  9. Remote Underwater Characterization System - Innovative Technology Summary Report

    International Nuclear Information System (INIS)

    Willis, Walter David

    1999-01-01

    Characterization and inspection of water-cooled and moderated nuclear reactors and fuel storage pools requires equipment capable of operating underwater. Similarly, the deactivation and decommissioning of older nuclear facilities often requires the facility owner to accurately characterize underwater structures and equipment which may have been sitting idle for years. The underwater characterization equipment is often required to operate at depths exceeding 20 ft (6.1 m) and in relatively confined or congested spaces. The typical baseline approach has been the use of radiation detectors and underwater cameras mounted on long poles, or stationary cameras with pan and tilt features mounted on the sides of the underwater facility. There is a perceived need for an inexpensive, more mobile method of performing close-up inspection and radiation measurements in confined spaces underwater. The Remote Underwater Characterization System (RUCS) is a small, remotely operated submersible vehicle intended to serve multiple purposes in underwater nuclear operations. It is based on the commercially available ''Scallop'' vehicle, but has been modified by the Department of Energy's Robotics Technology Development Program to add auto-depth control and monitoring of vehicle orientation and depth at the operator control panel. The RUCS is designed to provide visual and gamma radiation characterization, even in confined or limited-access areas. It was demonstrated in August 1998 at the Idaho National Engineering and Environmental Laboratory (INEEL) as part of the INEEL Large Scale Demonstration and Deployment Project. During the demonstration it was compared head-to-head with the baseline characterization technology. This paper summarizes the results of the demonstration and lessons learned, comparing and contrasting both technologies in the areas of cost, visual characterization, radiological characterization, and overall operations.

  10. Video Golf

    Science.gov (United States)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  11. The Use of Camera Traps in Wildlife

    OpenAIRE

    Yasin Uçarlı; Bülent Sağlam

    2013-01-01

    Camera traps are increasingly used in abundance and density estimation of wildlife species. Camera traps are a very good alternative to direct observation, particularly in steep terrain, densely vegetated areas, or for nocturnal species. The main reason for using camera traps is that they eliminate economic, personnel and time costs by monitoring continuously and at different points at the same time. Camera traps, which are motion- and heat-sensitive, can take a photo or video according to the mod...

  12. Underwater 3D filming

    Directory of Open Access Journals (Sweden)

    Roberto Rinaldi

    2014-12-01

    Full Text Available After an experimental phase of many years, 3D filming is now effective and successful. Improvements are still possible, but the film industry has achieved memorable box-office success with 3D movies due to the overall quality of its products. Special environments such as space (“Gravity”) and the underwater realm look perfect to be reproduced in 3D. “Filming in space” was possible in “Gravity” using special effects and computer graphics. The underwater realm is still difficult to handle. Until not long ago, underwater filming in 3D was not as easy and effective as filming in 2D. After almost 3 years of research, a French, Austrian and Italian team realized a perfect tool to film underwater, in 3D, without any constraints. This allows filmmakers to bring the audience deep inside an environment where they most probably will never have the chance to be.

  13. Video Liveness for Citizen Journalism: Attacks and Defenses

    OpenAIRE

    Rahman, Mahmudur; Azimpourkivi, Mozhgan; Topkara, Umut; Carbunar, Bogdan

    2017-01-01

    The impact of citizen journalism raises important video integrity and credibility issues. In this article, we introduce Vamos, the first user transparent video "liveness" verification solution based on video motion, that accommodates the full range of camera movements, and supports videos of arbitrary length. Vamos uses the agreement between video motion and camera movement to corroborate the video authenticity. Vamos can be integrated into any mobile video capture application without requiri...

  14. Underwater Glider System Study

    OpenAIRE

    Jenkins, Scott A; Humphreys, Douglas E; Sherman, Jeff; Osse, Jim; Jones, Clayton; Leonard, Naomi; Graver, Joshua; Bachmayer, Ralf; Clem, Ted; Carroll, Paul; Davis, Philip; Berry, Jon; Worley, Paul; Wasyl, Joseph

    2003-01-01

    The goals of this study are to determine how to advance from present capabilities of underwater glider (and hybrid motorglider) technology to what could be possible within the next few years; and to identify critical research issues that must be resolved to make such advancements possible. These goals were pursued by merging archival flight data with numerical model results and system spreadsheet analysis to extrapolate from the present state-of-the-art in underwater (UW) gliders to potential...

  15. Cost effective system for monitoring of fish migration with a camera

    Science.gov (United States)

    Sečnik, Matej; Brilly, Mitja; Vidmar, Andrej

    2016-04-01

    Within the European LIFE project Ljubljanica connects (LIFE10 NAT/SI/000142) we have developed a cost-effective solution for monitoring fish migration through fish passes with an underwater camera. In the fish pass at Ambrožev trg and in the fish pass near Fužine castle we installed a video camera called "Fishcam" to monitor the migration of fish through the fish passes and the success of their reconstruction. A live stream from the fishcams installed in the fish passes is available on our project website (http://ksh.fgg.uni-lj.si/ljubljanicaconnects/ang/12_camera). The fish monitoring system is made of two parts: a waterproof box for the computer with charger, and the camera itself. We used a highly sensitive Sony analogue camera. The advantage of this camera is its very good sensitivity in low-light conditions, so it can take good quality pictures even at night with minimal additional lighting. For night recording we use an additional IR reflector to illuminate passing fish. The camera is connected to an 8-inch tablet PC. We decided to use a tablet PC because it is quite small and cheap, relatively fast, and has low power consumption. On the computer we use software with advanced motion detection capabilities, so we can also detect small fish. When a fish is detected by the software, its photograph is automatically saved to the local hard drive and, for backup, also to Google Drive. The system for monitoring fish migration has turned out to work very well. From the beginning of monitoring in June 2015 to the end of the year, more than 100,000 photographs were produced. A first analysis of them has already been prepared, estimating fish species and the frequency of their passage through the fish pass.
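A motion-detection trigger of the kind described can be sketched with simple frame differencing. This is a minimal illustration, not the actual software used on the Fishcam; both thresholds are invented placeholder values:

```python
import numpy as np

def motion_detected(prev, curr, pixel_thresh=25, min_changed=200):
    """Trigger a capture when two consecutive frames differ enough.

    prev, curr: greyscale frames as uint8 arrays of equal shape.
    A pixel counts as changed when its absolute intensity difference
    exceeds pixel_thresh; detection fires when at least min_changed
    pixels changed (min_changed tuned low enough to catch small fish).
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return int((diff > pixel_thresh).sum()) >= min_changed
```

In a capture loop, a True result would save the current frame to the local disk (and, as in the setup described, mirror it to cloud storage for backup).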

  16. Tracing Sequential Video Production

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Khalid, Md. Saifuddin

    2015-01-01

    With an interest in learning set in collaborative situations, the data session presents excerpts from video data produced by two of fifteen students in a 5th-semester techno-anthropology course. Students used video cameras to capture the time they spent working with a scientist...

  17. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  18. Localization of dolphin whistles through frequency domain beamforming using a narrow aperture audio/video array

    Science.gov (United States)

    Ball, Keenan R.; Buck, John R.

    2003-04-01

    Correlating the acoustic and physical behavior of marine mammals is an ongoing challenge for scientists studying the links between acoustic communication and social behavior of these animals. This talk describes a system to record and correlate the physical and acoustical behavior of dolphins. A sparse, short-baseline audio/video array consisting of 16 hydrophones and an underwater camera was constructed in a cross configuration to measure the acoustic signals of vocalizing dolphins. The bearings of vocalizing dolphins were estimated using the broadband frequency-domain beamforming algorithm for sparse arrays with grating-lobe suppression of Thode et al. [J. Acoust. Soc. Am. 107 (2000)]. The estimated bearings from the acoustic signals were then converted to video image coordinates and a marker was placed on the video image. The system was calibrated both in an indoor tank and from an outdoor dock at UMass Dartmouth prior to field tests in a natural lagoon at the Dolphin Connection on Duck Key, FL. These tests confirmed that the system worked well within the limits of underwater visibility by consistently placing the marker on or near the whistling or echolocating dolphin. [Work supported by NSF Ocean Sciences.]
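The core bearing-estimation idea, frequency-domain delay-and-sum over candidate steering angles, can be illustrated for a simple uniform line array. This is a generic sketch, not the sparse-array grating-lobe-suppression algorithm of Thode et al.; the array geometry and signal parameters are invented for the example.

```python
import numpy as np

def bearing_estimate(signals, fs, d, c=1500.0, angles=np.arange(-90.0, 91.0, 1.0)):
    """Frequency-domain delay-and-sum beamformer for a uniform line array.

    signals: (n_sensors, n_samples); d: element spacing in m; c: sound speed.
    Returns the candidate bearing (degrees) with maximum beam power."""
    n_sensors, n_samples = signals.shape
    X = np.fft.rfft(signals, axis=1)                 # per-sensor spectra
    freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
    pos = d * np.arange(n_sensors)                   # element positions
    best, best_power = angles[0], -1.0
    for theta in angles:
        tau = pos * np.sin(np.deg2rad(theta)) / c    # per-element delays
        steer = np.exp(2j * np.pi * np.outer(tau, freqs))
        power = np.sum(np.abs(np.sum(X * steer, axis=0)) ** 2)
        if power > best_power:
            best, best_power = theta, power
    return best

# simulate a 1 kHz tone arriving from 30 degrees on an 8-element array
fs, f0, c, d = 10000.0, 1000.0, 1500.0, 0.75     # d = lambda/2 at 1 kHz
t = np.arange(1024) / fs
delays = d * np.arange(8) * np.sin(np.deg2rad(30.0)) / c
sigs = np.array([np.sin(2 * np.pi * f0 * (t - tau)) for tau in delays])
print(bearing_estimate(sigs, fs, d))
```

Half-wavelength spacing keeps this dense array free of grating lobes; the sparse-array case in the abstract is precisely where that assumption fails and a dedicated suppression algorithm is needed.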

  19. An Innovative Streaming Video System With a Point-of-View Head Camera Transmission of Surgeries to Smartphones and Tablets: An Educational Utility.

    Science.gov (United States)

    Chaves, Rafael Oliveira; de Oliveira, Pedro Armando Valente; Rocha, Luciano Chaves; David, Joacy Pedro Franco; Ferreira, Sanmari Costa; Santos, Alex de Assis Santos Dos; Melo, Rômulo Müller Dos Santos; Yasojima, Edson Yuzur; Brito, Marcus Vinicius Henriques

    2017-10-01

    In order to engage medical students and residents from public health centers in using the telemedicine features of surgery on their own smartphones and tablets as an educational tool, an innovative streaming system was developed to stream live footage from open surgeries to smartphones and tablets, allowing visualization of the surgical field from the surgeon's perspective. The current study describes an evaluation, at level 1 of Kirkpatrick's Model for Evaluation, of the streaming system's use during gynecological surgeries, based on the perceptions of medical students and gynecology residents. The system consisted of live video streaming (from the surgeon's point of view) of gynecological surgeries to smartphones and tablets, one per volunteer. The volunteers connected to the local wireless network created by the streaming system using an access password and watched the video transmission in a web browser on their smartphones. They then answered a Likert-type questionnaire containing 14 items about the educational applicability of the streaming system, as well as comparing it to watching an in loco procedure. This study was formally approved by the local ethics commission (Certificate No. 53175915.7.0000.5171/2016). Twenty-one volunteers participated, totaling 294 answered items, of which 94.2% agreed with the affirmative statements, 4.1% were neutral, and only 1.7% corresponded to negative impressions. Cronbach's α was .82, which represents a good level of reliability. Spearman's coefficients were highly significant in 4 comparisons and moderately significant in the other 20. This study presents a local system for streaming live video of surgeries to smartphones and tablets and shows its educational utility, low cost, and simple usage; it offers convenience and satisfactory image resolution and is thus potentially applicable in surgical teaching.
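The Cronbach's α reliability figure quoted above is computed from the per-item variances and the variance of the summed score. A minimal sketch (function name and toy data are illustrative, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_items, n_respondents) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[0]
    item_vars = items.var(axis=1, ddof=1)          # per-item variance
    total_var = items.sum(axis=0).var(ddof=1)      # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# two perfectly consistent items give alpha = 1.0
scores = np.array([[1, 2, 3, 4],
                   [1, 2, 3, 4]])
print(cronbach_alpha(scores))  # 1.0
```

Values around .8, as reported, indicate that the 14 Likert items measure the construct consistently.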

  20. Underwater Visual Computing: The Grand Challenge Just around the Corner.

    Science.gov (United States)

    von Lukas, Uwe Freiherr

    2016-01-01

    Visual computing technologies have traditionally been developed for conventional setups where air is the surrounding medium for the user, the display, and/or the camera. However, given mankind's increasing need to rely on the oceans to solve the problems of future generations (such as offshore oil and gas, renewable energies, and marine mineral resources), there is a growing need for mixed-reality applications for use in water. This article highlights the various research challenges that arise when changing the medium from air to water, introduces the concept of underwater mixed environments, and presents recent developments in underwater visual computing applications.

  1. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column, one above the other.

  2. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  3. "Boxnep" advanced modular underwater robot

    OpenAIRE

    Buluev, Ilia

    2016-01-01

    The article discusses the relevance of underwater vehicles' ability to solve a wide range of problems. The idea underlying this research is the design of a modular underwater robot, which allows various equipment to be mounted and tested in the underwater environment. The paper presents the concept of the robot and its characteristics.

  4. Resources for Underwater Robotics Education

    Science.gov (United States)

    Wallace, Michael L.; Freitas, William M.

    2016-01-01

    4-H clubs can build and program underwater robots from raw materials. An annotated resource list for engaging youth in building underwater remotely operated vehicles (ROVs) is provided. This article is a companion piece to the Research in Brief article "Building Teen Futures with Underwater Robotics" in this issue of the "Journal of…

  5. A Case Study of Trust Issues in Scientific Video Collections

    NARCIS (Netherlands)

    E.M.A.L. Beauxis-Aussalet (Emmanuelle); E. Arslanova (Elvira); L. Hardman (Lynda); J.R. van Ossenbruggen (Jacco)

    2013-01-01

    In-situ video recording of underwater ecosystems is able to provide valuable information for biology research and natural resources management, e.g. changes in species abundance. Searching the videos manually, however, requires costly human effort. Our video analysis tool supports the

  6. Underwater Shock Response Analysis of a Floating Vessel

    Directory of Open Access Journals (Sweden)

    J.E. van Aanhold

    1998-01-01

    Full Text Available The response of a surface vessel to underwater shock has been calculated using an explicit finite element analysis. The analysis model is two-dimensional and contains the floating steel structure, a large surrounding water volume and the free surface. The underwater shock is applied in the form of a plane shock wave and cavitation is considered in the analysis. Advanced computer graphics, in particular video animations, provide a powerful and indispensable means for the presentation and evaluation of the analysis results.

  7. Distributed Smart Cameras for Aging in Place

    National Research Council Canada - National Science Library

    Williams, Adam; Xie, Dan; Ou, Shichao; Grupen, Roderic; Hanson, Allen; Riseman, Edward

    2006-01-01

    ... The fall detector relies on features extracted from video by the camera nodes, which are sent to a central processing node where one of several machine learning techniques is applied to detect a fall...

  8. The development of the underwater inspection vehicles for nuclear power plants

    International Nuclear Information System (INIS)

    Mabuchi, Yasuhiro; Takahashi, Yoshinori; Suzuki, Masanori

    2003-01-01

    There are many underwater structures in nuclear power plants (NPPs), and owing to the high radiation and underwater conditions it is very difficult to carry out inspections in these areas. Remotely operated vehicles (ROVs), equipped with thrusters and a CCD camera, have been used for remote underwater inspection of these structures. Because these conventional ROVs cannot acquire stable images and/or carry no tools other than a camera, they have been applied only to restricted inspection tasks in nuclear power plants. HITACHI has been developing several ROVs equipped with additional functions and devices in order to improve on the performance of the conventional ROVs. These ROVs have been applied in real NPPs and have proven to be useful and effective for underwater inspection in the NPPs. (author)

  9. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column, one above the other, through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head.

  10. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  11. Laser scanning camera inspects hazardous area

    International Nuclear Information System (INIS)

    Fryatt, A.; Miprode, C.

    1985-01-01

    The main operational characteristics of a new laser scanning camera are presented. The camera is intended primarily for low-level, high-resolution viewing inside nuclear reactors. It uses a He-Ne laser beam raster; by detecting the reflected light by means of a photomultiplier, the subject under observation can be reconstructed in an electronic video store and reviewed on a conventional monitor screen.

  12. Turning an Urban Scene Video into a Cinemagraph

    OpenAIRE

    Yan, Hang; Liu, Yebin; Furukawa, Yasutaka

    2016-01-01

    This paper proposes an algorithm that turns a regular video capturing urban scenes into a high-quality endless animation, known as a Cinemagraph. The creation of a Cinemagraph usually requires a static camera in a carefully configured scene. The task becomes challenging for a regular video with a moving camera and objects. Our approach first warps an input video into the viewpoint of a reference camera. Based on the warped video, we propose effective temporal analysis algorithms to detect reg...

  13. Identification of a putative man-made object from an underwater crash site using CAD model superimposition.

    Science.gov (United States)

    Vincelli, Jay; Calakli, Fatih; Stone, Michael; Forrester, Graham; Mellon, Timothy; Jarrell, John

    2018-04-01

    In order to identify an object in video, a comparison with an exemplar object is typically needed. In this paper, we discuss the methodology used to identify an object detected in underwater video that was recorded during an investigation into Amelia Earhart's purported crash site. A computer aided design (CAD) model of the suspected aircraft component was created based on measurements made from orthogonally rectified images of a reference aircraft, and validated against historical photographs of the subject aircraft prior to the crash. The CAD model was then superimposed on the underwater video, and specific features on the object were geometrically compared between the CAD model and the video. This geometrical comparison was used to assess the goodness of fit between the purported object and the object identified in the underwater video.

  14. The system of underwater CCTV inspection for reactor internal components

    International Nuclear Information System (INIS)

    Zhu Rong

    1997-12-01

    During the operation of a nuclear power plant, the reactor internal components are strongly scoured and vibrated by flowing water, so the structural integrity and surface sludge of the reactor internal components need to be inspected during refuelling. An inspection system was therefore developed in which the camera inspects underwater at different heights and in different directions by means of a mechanical elevator, and the closed-circuit television (CCTV) image is mixed with the digital coordinates of the camera position for re-inspection. It is the first system for inspection of reactor internal components in China, and it has been used successfully 4 times in inspections at the Daya Bay Nuclear Power Plant.

  15. Underwater Gliders: A Review

    Directory of Open Access Journals (Sweden)

    Javaid Muhammad Yasar

    2014-07-01

    Full Text Available Underwater gliders are a type of underwater vehicle that traverse the oceans by shifting their buoyancy; during gliding, the wings convert a component of the vertical motion into the horizontal plane, thus producing a forward force. They are primarily used in oceanographic sensing and data collection and play an important role in ocean research and development. Although there have been considerable developments in these gliders since the first glider concept in 1989, to date no review of them has been done. This paper reviews existing underwater gliders, with emphasis on their respective working principles, range and payload capacity. All information on gliders available in the public domain or published in the literature from 2000-2013 was reviewed. The majority of these gliders have an operational depth of 1000 m and a payload of less than 25 kg. The exception is a blended-body-shape glider, which has a payload of approximately 800 kg and an operational depth of about 300 m. However, commercialization of these gliders has been limited, with only three known examples successfully commercialized.

  16. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  17. Deep-Sky Video Astronomy

    CERN Document Server

    Massey, Steve

    2009-01-01

    A guide to using modern integrating video cameras for deep-sky viewing and imaging with the kinds of modest telescopes available commercially to amateur astronomers. It includes an introduction and a brief history of the technology and camera types. It examines the pros and cons of this unrefrigerated yet highly efficient technology

  18. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage, specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then we present a practical solution for calibrating pixel values in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding, either for storage or transmission purposes, in

  19. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest in 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  20. Gamma camera

    International Nuclear Information System (INIS)

    Reiss, K.H.; Kotschak, O.; Conrad, B.

    1976-01-01

    A gamma camera with a simplified setup compared with the current state of engineering is described, permitting not only good localization but also energy discrimination. Behind the usual vacuum image amplifier, a multiwire proportional chamber filled with bromotrifluoromethane is connected in series. Localization of the signals is achieved by a delay line, and energy determination by means of a pulse-height discriminator. With the aid of drawings and circuit diagrams, the setup and mode of operation are explained. (ORU)

  1. Carbon Nanotube Underwater Acoustic Thermophone

    Science.gov (United States)

    2016-09-23

    Patent application (Attorney Docket No. 300009): A CARBON NANOTUBE UNDERWATER ACOUSTIC THERMOPHONE. The invention is an acoustically transparent carbon nanotube thermophone. Recently, there has been development of underwater acoustic carbon nanotube (CNT) yarn sheets capable

  2. Testing of an underwater remotely-operated vehicle in the basins of the Cattenom nuclear power generation center

    International Nuclear Information System (INIS)

    Delfour, D.; Khakanski, M.; Nepveu, C.; Schmitt, J.

    1993-05-01

    An underwater robot was tested in the basins of the Cattenom nuclear power generation center, which are fed with raw water from the Moselle River. The purpose was to inspect wall biofouling without interrupting water circulation. The ROV is a light, compact device, remotely controlled by cable and equipped with video cameras. The video recordings made were used to compare conditions in a basin cleaned the previous month by divers with those in a basin which had not been cleaned for a year. Manual cleaning by divers is an effective method, leaving zebra mussels on less than 5% of the wall surfaces. On the other hand, the floor of the cleaned basin was observed to be covered with fine sediment, vegetal matter and shells washed in with the Moselle River water. In the basin which had not been cleaned, the entire wall surface was covered with very dense tufts of tubular organisms (the hydrozoan Cordylophora) and zebra mussels. The tests have provided elements for the definition of an inspection procedure and have given rise to suggestions for complementary equipment. (authors). 5 figs., 9 photos

  3. Underwater Inspection of Navigation Structures with an Acoustic Camera

    Science.gov (United States)

    2013-08-01

    Figure 4-11 compares a visible image and an acoustic image of a masonry wall at the downtown landing, Mississippi River, Vicksburg, and illustrates early automated mosaic software efforts (ERDC/ITL TR-13-3). Figure 4-19 shows a profile and surface view of various sheet pilings in areas not accessible by conventional side-scan or multi-beam deployments.

  4. Software architecture of biomimetic underwater vehicle

    Science.gov (United States)

    Praczyk, Tomasz; Szymak, Piotr

    2016-05-01

    Autonomous underwater vehicles are vehicles that are entirely or partly independent of human decisions. In order to achieve operational independence, the vehicles have to be equipped with specialized software. The main task of the software is to move the vehicle along a trajectory while avoiding collisions. The software also has to manage the different devices installed on the vehicle board, e.g. to start and stop cameras, sonars, etc. In addition to the software embedded on the vehicle board, software for managing the vehicle by the operator is also necessary. Its task is to define the vehicle's mission, to start and stop the mission, to send emergency commands, to monitor vehicle parameters, and to control the vehicle in remotely operated mode. An important objective of the software is also to support the development and testing of other software components. To this end, a simulation environment is necessary, i.e. a simulation model of the vehicle and all its key devices, a model of the sea environment, and software to visualize the behavior of the vehicle. The paper presents the architecture of the software designed for a biomimetic autonomous underwater vehicle (BAUV) that is being constructed within the framework of a scientific project financed by the Polish National Center of Research and Development.

  5. Underwater gas tornado

    Science.gov (United States)

    Byalko, Alexey V.

    2013-07-01

    We present the first experimental observation of a new hydrodynamic phenomenon, the underwater tornado. Simple measurements show that the tornado forms a vortex of the Rankine type, i.e. the rising gas rotates as a solid body and the liquid rotates with a velocity decreasing hyperbolically with the radius. We obtain the dependence of the tornado radius a on the gas stream value j theoretically: a ∼ j^(2/5). Processing of a set of experiments yielded the value 0.36 for the exponent in this expression. We also report the initial stages of the theoretical study of this phenomenon.
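The quoted scaling lends itself to a direct log-log check; written out (the prefactor $C$ is not given in the abstract and is left symbolic):

```latex
a = C\, j^{2/5}, \qquad \log a = \tfrac{2}{5}\,\log j + \log C ,
```

so a straight-line fit of $\log a$ against $\log j$ over a set of experiments should have slope $2/5 = 0.4$, to be compared with the experimentally fitted exponent of 0.36 reported above.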

  6. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  7. Optimising camera traps for monitoring small mammals.

    Science.gov (United States)

    Glen, Alistair S; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  8. Automated safety control by video cameras

    NARCIS (Netherlands)

    Lefter, I.; Rothkrantz, L.; Somhorst, M.

    2012-01-01

    At this moment many surveillance systems are installed in public domains to control the safety of people and properties. They are constantly watched by human operators who are easily overloaded. To support the human operators, a surveillance system model is designed that detects suspicious behaviour

  9. Stationary Stereo-Video Camera Stations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Accurate and precise stock assessments are predicated on accurate and precise estimates of life history parameters, abundance, and catch across the range of the...

  10. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
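The supervision strategy described, averaging neighboring sharp frames from high-frame-rate footage to synthesize motion blur, can be sketched as follows (NumPy-only; the function name and window length are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def synth_blur(frames, window=7):
    """Average `window` consecutive sharp frames into one synthetic motion-blurred
    frame; the center sharp frame is kept as the ground-truth target."""
    assert window % 2 == 1, "use an odd window so a center frame exists"
    frames = np.asarray(frames, dtype=np.float64)
    half = window // 2
    n = frames.shape[0]
    blurred = np.stack([frames[i - half:i + half + 1].mean(axis=0)
                        for i in range(half, n - half)])
    sharp = frames[half:n - half]
    return blurred, sharp

# 11 dummy frames -> 5 (blurred, sharp) training pairs
frames = np.arange(11, dtype=float)[:, None, None] * np.ones((1, 4, 4))
blurred, sharp = synth_blur(frames)
print(blurred.shape, sharp.shape)  # (5, 4, 4) (5, 4, 4)
```

Each (blurred, sharp) pair then serves as an input/target example for training the deblurring CNN.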

  11. TRAFFIC SIGN RECOGNITION WITH VIDEO PROCESSING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Musa AYDIN

    2013-01-01

    Full Text Available In this study, traffic signs are recognized and identified from a video image taken by a video camera. To accomplish this aim, a traffic sign recognition program has been developed in the MATLAB/Simulink environment. The target traffic signs are recognized in the video image with the developed program.

  12. Underwater 3D Surface Measurement Using Fringe Projection Based Scanning Devices.

    Science.gov (United States)

    Bräuer-Burchardt, Christian; Heinze, Matthias; Schmidt, Ingo; Kühmstedt, Peter; Notni, Gunther

    2015-12-23

    In this work we show the principle of optical 3D surface measurements based on the fringe projection technique for underwater applications. The challenges of underwater use of this technique are shown and discussed in comparison with the classical application. We describe an extended camera model which takes refraction effects into account as well as a proposal of an effective, low-effort calibration procedure for underwater optical stereo scanners. This calibration technique combines a classical air calibration based on the pinhole model with ray-based modeling and requires only a few underwater recordings of an object of known length and a planar surface. We demonstrate a new underwater 3D scanning device based on the fringe projection technique. It has a weight of about 10 kg and the maximal water depth for application of the scanner is 40 m. It covers an underwater measurement volume of 250 mm × 200 mm × 120 mm. The surface of the measurement objects is captured with a lateral resolution of 150 μm in a third of a second. Calibration evaluation results are presented and examples of first underwater measurements are given.
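The refraction effect that the extended camera model must account for follows Snell's law at the air/housing/water interfaces; a small vector-form sketch (the refractive indices and interface normal here are textbook example values, not the paper's calibration procedure):

```python
import numpy as np

def refract(d, n, n1=1.0, n2=1.333):
    """Refract unit direction d at an interface with unit normal n (pointing
    into the incidence medium), using Snell's law in vector form.
    Returns the transmitted unit vector, or None on total internal reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# a ray hitting the port at 30 degrees incidence, air (1.0) into water (1.333)
d = np.array([np.sin(np.radians(30.0)), 0.0, -np.cos(np.radians(30.0))])
t = refract(d, np.array([0.0, 0.0, 1.0]))
print(t[0])  # sin(theta_t) = sin(30)/1.333 ~ 0.375
```

Bending every pixel ray this way is what the ray-based part of the calibration models, since a plain pinhole model calibrated in air no longer holds underwater.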

  13. Hybrid Underwater Vehicle: ARV Design and Development

    Directory of Open Access Journals (Sweden)

    Zhigang DENG

    2014-02-01

    Full Text Available The development of SMU-I, a new autonomous and remotely-operated vehicle (ARV), is described. Since it has the characteristics of both an autonomous underwater vehicle (AUV) and a remotely operated vehicle (ROV), it can achieve precise station-keeping as well as timely manual intervention. The paper introduces and analyzes the initial design of the basic components, such as the vehicle, propulsion and batteries, together with the motion control design. The ROV's conventional cable is replaced by a fiber optic cable, which provides high-bandwidth real-time video, data telemetry and high-quality teleoperation. Furthermore, with the aid of manual real-time remote operation and ranging sonar, the vehicle overcomes the AUV's inherent limitation, allowing it to adapt to complex real sea environments and satisfy unforeseen mission needs. The battery system is designed as two battery banks, whose voltages and temperatures are monitored through a CAN (controller area network) bus to avoid battery fire and explosion. A fuzzy-PID controller is designed for motion control, including depth control and direction control. The controller combines the advantages of fuzzy control and PID control, using fuzzy rules to tune the PID parameters on-line, and achieves a better control effect. Experimental results demonstrate the effectiveness of the test-bed.
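
    The idea of a fuzzy-PID controller, in which fuzzy rules retune the PID gains on-line, can be sketched as follows. The rule base, gain values and toy depth plant below are illustrative assumptions, not the values from the paper:

```python
def fuzzy_gains(error, base_kp=1.0, base_ki=0.1, base_kd=0.05):
    """Crude stand-in for a fuzzy rule base: large errors get a stronger
    proportional action and a weaker integral action."""
    e = abs(error)
    if e > 1.0:
        scale_p, scale_i = 1.5, 0.5
    elif e > 0.3:
        scale_p, scale_i = 1.2, 0.8
    else:
        scale_p, scale_i = 1.0, 1.0
    return base_kp * scale_p, base_ki * scale_i, base_kd

class FuzzyPID:
    def __init__(self):
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured, dt):
        error = setpoint - measured
        kp, ki, kd = fuzzy_gains(error)          # on-line gain tuning
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return kp * error + ki * self.integral + kd * derivative

# Toy depth-keeping loop: drive a simple integrating "vehicle" to 2 m depth.
pid, depth, dt = FuzzyPID(), 0.0, 0.1
for _ in range(600):
    depth += 0.05 * pid.step(2.0, depth, dt)
print(round(depth, 2))
```

The fuzzy layer only reshapes the gains; the underlying control law remains the familiar PID sum of proportional, integral and derivative terms.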

  14. Control of the movement of a ROV camera; Controle de posicionamento da camera de um ROV

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alexandre S. de; Dutra, Max Suell [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE); Reis, Ney Robinson S. dos [PETROBRAS, Rio de Janeiro, RJ (Brazil). Centro de Pesquisas; Santos, Auderi V. dos [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil)

    2004-07-01

    ROVs (Remotely Operated Vehicles) are used for installation and maintenance of underwater exploration systems in the oil industry. These systems are operated in distant areas, so the use of cameras for visualization of the work area is of essential importance. Keeping the camera movement synchronized with the manipulator while performing a task is a complex job for the operator. To achieve this synchronization, this work presents an analysis of the interconnection of the two systems. The systems are coupled by interconnecting the electric signals of the proportional valves of the manipulator's actuators with the signals of the proportional valves of the camera's actuators. With this interconnection, the camera approximately tracks the movement of the manipulator, keeping the object of interest within the operator's field of vision. (author)

  15. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes, each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to the outputs of the phototubes develops the scintillation event position coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes, so that the phototubes can be positioned as close to the scintillator as possible to obtain less distortion in the field of view and improved spatial resolution as compared to conventional planar-photocathode gamma cameras.
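
    The position-coordinate computation described here is the classic Anger logic: the event location is estimated as the signal-weighted centroid of the phototube outputs. A minimal sketch, with an assumed 2x2 tube layout:

```python
def anger_position(signals, positions):
    """Estimate scintillation event coordinates as the signal-weighted
    centroid of the phototube outputs (Anger logic)."""
    total = sum(signals)
    x = sum(s * p[0] for s, p in zip(signals, positions)) / total
    y = sum(s * p[1] for s, p in zip(signals, positions)) / total
    return x, y

# Four tubes on a 2x2 grid; a flash nearest the (1, 1) tube dominates.
tubes = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(anger_position([1.0, 2.0, 2.0, 5.0], tubes))  # (0.7, 0.7)
```

The curved-photocathode design in this record aims to keep this weighting linear even with the tubes mounted very close to the scintillator.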

  16. Underwater Hearing in Turtles.

    Science.gov (United States)

    Willis, Katie L

    2016-01-01

    The hearing of turtles is poorly understood compared with that of other reptiles. Although the mechanism of transduction of sound into a neural signal via hair cells has been described in detail, the rest of the auditory system is largely a black box. What is known is that turtles have higher hearing thresholds than other reptiles, with best frequencies around 500 Hz. They also have lower underwater hearing thresholds than those in air, owing to resonance of the middle ear cavity. Further studies demonstrated that all families of turtles and tortoises share a common middle ear cavity morphology, with scaling best suited to underwater hearing. This supports an aquatic origin of the group. Because turtles hear best under water, it is important to examine their vulnerability to anthropogenic noise. However, the lack of basic data makes such experiments difficult because only a few species of turtles have published audiograms. There are also almost no behavioral data available (understandable due to training difficulties). Finally, few studies show what kinds of sounds are behaviorally relevant. One notable paper revealed that the Australian snake-necked turtle (Chelodina oblonga) has a vocal repertoire in air, at the interface, and under water. Findings like these suggest that there is more to the turtle aquatic auditory scene than previously thought.

  17. The Effect of Nano-Aluminum Powder on the Characteristics of RDX-Based Aluminized Explosives in Underwater Close-Field Explosion

    OpenAIRE

    Junting Yin; Baohui Yuan; Tao Zhou; Gang Li; Xinlian Ren

    2017-01-01

    In order to investigate the effect of nano-aluminum powder on the characteristics of RDX-based aluminized explosives in underwater close-field explosions, scanning photographs along the radial direction of the charges were obtained with a high-speed scanning camera. The photographs of two different aluminized explosives' underwater explosions were analyzed; the shock wave curves and the expansion curves of the detonation products were obtained, and furthermore the change rules of the shock wave propagation velocity, sho...

  18. QFD-based conceptual design of an autonomous underwater robot

    Directory of Open Access Journals (Sweden)

    Thip Pasawang

    2015-12-01

    Full Text Available Autonomous underwater robots have in the past few years been designed according to the individual concepts and experiences of the researchers. Designing a robot that meets all the requirements of potential users is a challenging task. Hence, a systematic design method that can incorporate users' preferences and requirements is needed. This paper presents the quality function deployment (QFD) technique to design an autonomous underwater robot focusing on the Thai Navy military mission. Important user requirements extracted with the QFD method are the ability to record video, operation at depths up to 10 meters, the ability to operate remotely with a cable, and safety against water leakage. Less important user requirements include aesthetics, using renewable energy, operating remotely by radio, and the ability to work at night. The important design parameters derived from the user requirements are a low-cost controller, an autonomous control algorithm, a compass sensor and vertical gyroscope, and a depth sensor. Lower-ranked design parameters include modular design, use of clean energy, a low-noise electric motor, remote surveillance design, a pressure hull, and an attractive hull form. The study results show the feasibility of using the QFD technique to systematically design an autonomous underwater robot that meets user requirements. A mapping between the design and expected parameters and a conceptual draft design of the autonomous underwater robot are also presented.
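
    At the core of QFD is a relationship matrix that converts user-requirement weights into design-parameter priorities. The sketch below uses hypothetical weights and the conventional 9/3/1 relationship scores, not the paper's actual matrix:

```python
def qfd_priorities(importance, relationship_matrix):
    """Absolute importance of each design parameter: the importance-weighted
    column sums of the requirement-vs-parameter relationship matrix."""
    n_params = len(relationship_matrix[0])
    return [sum(importance[i] * relationship_matrix[i][j]
                for i in range(len(importance)))
            for j in range(n_params)]

# Two user requirements (record video: weight 5; operate at 10 m: weight 4)
# scored against two design parameters with the usual 9/3/1 scale.
print(qfd_priorities([5, 4], [[9, 1], [3, 9]]))  # [57, 41]
```

Ranking these column sums is what separates the "important" from the "lower-ranked" design parameters listed in the abstract.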

  19. Non-line-of-sight underwater optical wireless communication network.

    Science.gov (United States)

    Arnon, Shlomi; Kedar, Debbie

    2009-03-01

    The growing need for ocean observation systems has stimulated considerable interest within the research community in advancing the enabling technologies of underwater wireless communication and underwater sensor networks. Sensors and ad hoc sensor networks are the emerging tools for performing extensive data-gathering operations on land, and solutions in the subsea setting are being sought. Efficient communication from the sensors and within the network is critical, but the underwater environment is extremely challenging. Addressing the special features of underwater wireless communication in sensor networks, we propose a novel non-line-of-sight network concept in which the link is implemented by means of back-reflection of the propagating optical signal at the ocean-air interface and derive a mathematical model of the channel. Point-to-multipoint links can be achieved in an energy efficient manner and broadcast broadband communications, such as video transmissions, can be executed. We show achievable bit error rates as a function of sensor node separation and demonstrate the feasibility of this concept using state-of-the-art silicon photomultiplier detectors.
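
    Bit error rates of the kind reported here depend on the received signal-to-noise ratio. For intensity-modulated on-off keying under additive Gaussian noise (a common textbook simplification, not necessarily the channel model derived in the paper), the relation can be sketched as:

```python
import math

def ook_ber(snr_db):
    """On-off keying bit-error rate in additive Gaussian noise:
    BER = Q(sqrt(SNR)) = 0.5 * erfc(sqrt(SNR / 2))."""
    snr = 10.0 ** (snr_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(snr / 2.0))

# The error rate falls sharply with electrical SNR at the photodetector.
for db in (6, 10, 13):
    print(db, ook_ber(db))
```

As node separation grows, the back-reflected signal weakens, the SNR drops, and this curve is traversed toward higher error rates.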

  20. OFDM for underwater acoustic communications

    CERN Document Server

    Zhou, Shengli

    2014-01-01

    A blend of introductory material and advanced signal processing and communication techniques, of critical importance to underwater system and network development This book, which is the first to describe the processing techniques central to underwater OFDM, is arranged into four distinct sections: First, it describes the characteristics of underwater acoustic channels, and stresses the difference from wireless radio channels. Then it goes over the basics of OFDM and channel coding. The second part starts with an overview of the OFDM receiver, and develops various modules for the receiver des

  1. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    This paper describes a color line scan camera family that is available with either 6000, 8000 or 10000 pixels per color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation. This line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Megapixels/s. Conversion from 12 to 8 bits, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode or a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
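
    The on-board 12-to-8-bit conversion through a user-defined gamma look-up table can be sketched as follows (the gamma value and function names are illustrative, not taken from the camera's documentation):

```python
def build_lut(gamma=2.2, in_bits=12, out_bits=8):
    """Precompute a 12-bit -> 8-bit look-up table applying a gamma curve,
    mirroring what a camera's on-board video LUT does per pixel."""
    in_max = (1 << in_bits) - 1
    out_max = (1 << out_bits) - 1
    return [round(((v / in_max) ** (1.0 / gamma)) * out_max)
            for v in range(in_max + 1)]

lut = build_lut()
print(lut[0], lut[4095])  # endpoints map to 0 and 255
```

Because the table is precomputed, the per-pixel conversion at 24 Megapixels/s reduces to a single memory lookup.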

  2. Evaluation of commercial video-based intersection signal actuation systems.

    Science.gov (United States)

    2008-12-01

    Video cameras and computer image processors have come into widespread use for the detection of vehicles for signal actuation at controlled intersections. Video is considered both a cost-saving and convenient alternative to conventional stop-line ...

  3. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  4. Underwater plasma arc cutting

    International Nuclear Information System (INIS)

    Leautier, R.; Pilot, G.

    1991-01-01

    This report describes the work done to develop underwater plasma arc cutting techniques, to characterise aerosols from cutting operations on radioactive and non-radioactive work-pieces, and to develop suitable ventilation and filtration techniques. The work has been carried out in the framework of a contract between CEA-CEN Cadarache and the Commission of European Communities. Furthermore, this work has been carried out in close cooperation with CEA-CEN Saclay, mainly for secondary emissions and radioactive analysis. The contract started in May 1986 and was completed in December 1988 by a supplementary agreement. This report has been compiled from several progress reports submitted during the work period; it contains the main findings of the work and includes the results of comparative tests on plasma arc cutting.

  5. Stereoscopic camera design

    Science.gov (United States)

    Montgomery, David J.; Jones, Christopher K.; Stewart, James N.; Smith, Alan

    2002-05-01

    It is clear from the literature that the majority of work in stereoscopic imaging is directed towards the development of modern stereoscopic displays. As costs come down, wider public interest in this technology is expected to increase. This new technology would require new methods of image formation. Advances in stereo computer graphics will of course lead to the creation of new stereo computer games, graphics in films etc. However, the consumer would also like to see real-world stereoscopic images, pictures of family, holiday snaps etc. Such scenery would have wide ranges of depth to accommodate and would need also to cope with moving objects, such as cars, and in particular other people. Thus, the consumer acceptance of auto/stereoscopic displays and 3D in general would be greatly enhanced by the existence of a quality stereoscopic camera. This paper will cover an analysis of existing stereoscopic camera designs and show that they can be categorized into four different types, with inherent advantages and disadvantages. A recommendation is then made with regard to 3D consumer still and video photography. The paper will go on to discuss this recommendation and describe its advantages and how it can be realized in practice.
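
    Whichever of the four camera categories is chosen, the depth recovered from a stereoscopic pair follows the standard pinhole relation Z = f·B/d. A minimal sketch, with an assumed eye-like baseline:

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Pinhole stereo: Z = f * B / d. Larger disparity means a closer object."""
    return focal_px * baseline_m / disparity_px

# A 65 mm baseline (roughly human eye separation) and a 1000 px focal length:
print(depth_from_disparity(0.065, 1000.0, 10.0))  # 6.5 m
```

The inverse relation between disparity and depth is why wide depth ranges in consumer scenes (near family members, far backgrounds) are hard for a fixed-baseline camera to accommodate.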

  6. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. The tasks of tracking humans over camera networks are not only inherently challenging due to changing human appearance, but also have enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for the human tracking over camera networks are addressed, including human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on the analyses of the current progress made toward human tracking techniques over camera networks.

  7. Safety aspects for underwater vehicles

    Digital Repository Service at National Institute of Oceanography (India)

    Madhan, R.; Navelkar, G.S.; Desa, E.S.; Afzulpurkar, S.; Prabhudesai, S.P.; Dabholkar, N.; Mascarenhas, A.A.M.Q.; Maurya, P.

    . This stresses the implementation of multiple safety measures of a high degree so that the platform operates continuously in a fail-safe mode. This paper discusses issues on safety measures implemented on the autonomous underwater platform, namely the MAYA AUV...

  8. The PLATO camera

    Science.gov (United States)

    Laubier, D.; Bodin, P.; Pasquier, H.; Fredon, S.; Levacher, P.; Vola, P.; Buey, T.; Bernardi, P.

    2017-11-01

    PLATO (PLAnetary Transits and Oscillation of stars) is a candidate for the M3 Medium-size mission of the ESA Cosmic Vision programme (2015-2025 period). It is aimed at Earth-size and Earth-mass planet detection in the habitable zone of bright stars and their characterisation using the transit method and the asteroseismology of their host star. That means observing more than 100 000 stars brighter than magnitude 11, and more than 1 000 000 brighter than magnitude 13, with a long continuous observing time for 20 % of them (2 to 3 years). This yields a need for an unusually long term signal stability. For the brighter stars, the noise requirement is less than 34 ppm·hr^-1/2, from a frequency of 40 mHz down to 20 μHz, including all sources of noise like for instance the motion of the star images on the detectors and frequency beatings. Those extremely tight requirements result in a payload consisting of 32 synchronised, high-aperture, wide-field-of-view cameras thermally regulated down to -80°C, whose data are combined to increase the signal-to-noise performance. They are split into 4 different subsets pointing in 4 directions to widen the total field of view; stars in the centre of that field of view are observed by all 32 cameras. Two extra cameras are used with colour filters and provide pointing measurements to the spacecraft Attitude and Orbit Control System (AOCS) loop. The satellite is orbiting the Sun at the L2 Lagrange point. This paper presents the optical, electronic and electrical, thermal and mechanical designs devised to achieve those requirements, and the results from breadboards developed for the optics, the focal plane, the power supply and video electronics.
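
    The noise requirement is quoted per square-root hour because white photometric noise averages down as 1/sqrt(t). A quick check of what a 34 ppm·hr^-1/2 figure implies for different integration times:

```python
import math

def noise_ppm(rate_ppm_sqrt_hr, hours):
    """White-noise scaling: integrating t hours reduces the noise as 1/sqrt(t)."""
    return rate_ppm_sqrt_hr / math.sqrt(hours)

# ~34 ppm after 1 h of integration, ~17 ppm after 4 h.
print(noise_ppm(34, 1), noise_ppm(34, 4))
```

Combining the photometry from many of the 32 cameras has the same statistical effect: N independent cameras reduce the noise by a further factor of sqrt(N).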

  9. Digital video recording and archiving in ophthalmic surgery

    Directory of Open Access Journals (Sweden)

    Raju Biju

    2006-01-01

    Full Text Available Currently most ophthalmic operating rooms are equipped with an analog video recording system [an analog Charge-Coupled Device (CCD) camera for video grabbing and a Video Cassette Recorder for recording]. We discuss the various advantages of a digital video capture device, its archiving capabilities and our experience during the transition from analog to digital video recording and archiving. The basic terminology and concepts related to analog and digital video, along with the choice of hardware, software and formats for archiving, are discussed.

  10. Advanced real-time manipulation of video streams

    CERN Document Server

    Herling, Jan

    2014-01-01

    Diminished Reality is a new fascinating technology that removes real-world content from live video streams. This sensational live video manipulation actually removes real objects and generates a coherent video stream in real-time. Viewers cannot detect modified content. Existing approaches are restricted to moving objects and static or almost static cameras and do not allow real-time manipulation of video content. Jan Herling presents a new and innovative approach for real-time object removal with arbitrary camera movements.

  11. Streaming Audio and Video: New Challenges and Opportunities for Museums.

    Science.gov (United States)

    Spadaccini, Jim

    Streaming audio and video present new challenges and opportunities for museums. Streaming media is easier to author and deliver to Internet audiences than ever before; digital video editing is commonplace now that the tools--computers, digital video cameras, and hard drives--are so affordable; the cost of serving video files across the Internet…

  12. Application of megapixel video monitoring system

    International Nuclear Information System (INIS)

    Xu Tao; Liu Qiang

    2012-01-01

    This paper expounds the advantages of megapixel cameras and the structure of a megapixel video monitoring system. It addresses the key technical problems of resolution and frame rate in combination with actual engineering requirements, realizes the core technology of a megapixel video monitoring system, gives the design methods for megapixel video, data compression, data transmission, data storage and the video server, and puts forward effective solutions to the problems encountered during implementation. (authors)
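
    The resolution, frame-rate and storage trade-offs such a system must resolve can be estimated with simple arithmetic. The figures below (a 2-megapixel stream, 12 bits per pixel, 50:1 compression) are illustrative assumptions, not values from the paper:

```python
def raw_bitrate_mbps(width, height, fps, bits_per_pixel=12):
    """Uncompressed bit rate of a video stream, in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

def stored_gb_per_day(bitrate_mbps, compression_ratio):
    """Daily storage after compression, in gigabytes."""
    return bitrate_mbps / compression_ratio * 86400 / 8 / 1000

rate = raw_bitrate_mbps(1920, 1080, 25)   # a 2-megapixel camera at 25 fps
print(round(rate), round(stored_gb_per_day(rate, 50), 1))
```

Numbers like these make clear why resolution and frame rate are the binding constraints on transmission and storage in a megapixel monitoring system.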

  13. Underwater optical wireless communication network

    Science.gov (United States)

    Arnon, Shlomi

    2010-01-01

    The growing need for underwater observation and subsea monitoring systems has stimulated considerable interest in advancing the enabling technologies of underwater wireless communication and underwater sensor networks. This communication technology is expected to play an important role in investigating climate change, in monitoring biological, biogeochemical, evolutionary, and ecological changes in the sea, ocean, and lake environments, and in helping to control and maintain oil production facilities and harbors using unmanned underwater vehicles (UUVs), submarines, ships, buoys, and divers. However, the present technology of underwater acoustic communication cannot provide the high data rate required to investigate and monitor these environments and facilities. Optical wireless communication has been proposed as the best alternative to meet this challenge. Models are presented for three kinds of optical wireless communication links: (a) a line-of-sight link, (b) a modulating retroreflector link, and (c) a reflective link, all of which can provide the required data rate. We analyze the link performance based on these models. From the analysis, it is clear that as the water absorption increases, the communication performance decreases dramatically for the three link types. However, by using the scattered light it was possible to mitigate this decrease in some cases. It is concluded from the analysis that a high-data-rate underwater optical wireless network is a feasible solution for emerging applications such as UUV-to-UUV links and networks of sensors, and extended ranges in these applications could be achieved by applying a multi-hop concept.
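
    The dramatic dependence on water absorption noted here follows from Beer-Lambert attenuation along a line-of-sight path. A simplified link-budget sketch, ignoring geometric spreading and using typical attenuation coefficients cited in the underwater-optics literature (about 0.15 m^-1 for clear ocean and 2.2 m^-1 for harbor water; these values are assumptions for illustration):

```python
import math

def received_power_dbm(tx_power_dbm, attenuation_per_m, range_m):
    """Line-of-sight budget with exponential (Beer-Lambert) attenuation:
    P_rx = P_tx * exp(-c * R), expressed in dB."""
    loss_db = 10 * math.log10(math.e) * attenuation_per_m * range_m
    return tx_power_dbm - loss_db

# Clear ocean vs harbor water over the same 10 m path:
print(round(received_power_dbm(20, 0.15, 10), 1))  # ~13.5 dBm
print(round(received_power_dbm(20, 2.2, 10), 1))   # ~-75.5 dBm
```

The roughly 90 dB gap between the two cases is why multi-hop relaying is attractive for extending range in turbid water.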

  14. Development and application of underwater robot vehicle for close inspection of spent fuels

    International Nuclear Information System (INIS)

    Yun, J. S.; Park, B. S.; Song, T. G.; Kim, S. H.; Cho, M. W.; Ahn, S. H.; Lee, J. Y.; Oh, S. C.; Oh, W. J.; Shin, K. W.; Woo, D. H.; Kim, H. G.; Park, J. S.

    1999-12-01

    The research and development efforts on the underwater robotic vehicle for inspection of spent fuels are focused on the development of a robotic vehicle that inspects spent fuels in the storage pool through remotely controlled actuation. For this purpose, a self-balanced vehicle actuated by propellers was designed and fabricated, consisting of a radiation-resistant camera, two illuminators, a pressure transducer and a manipulator. An algorithm for autonomous navigation was developed and its performance tested in a swimming pool. The results show that the vehicle can easily navigate in arbitrary directions while maintaining its balanced position. The camera provides a clear view of the working environment using its macro and zoom functions. The camera tilt device provides a field of view wide enough for monitoring the operation of the manipulator. The manipulator can also pick up dropped objects weighing up to 4 kgf. (author)

  15. Architecture of PAU survey camera readout electronics

    Science.gov (United States)

    Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo

    2012-07-01

    PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2Kx4K HPK CCDs: sixteen for science and two for guiding. The camera will be installed at the prime focus of the WHT (William Herschel Telescope). In this contribution, the architecture of the readout electronics system is presented. Back-End and Front-End electronics are described. The Back-End consists of clock, bias and video processing boards, mounted in Monsoon crates. The Front-End is based on patch panel boards. These boards are plugged in outside the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus kapton cables complete the path to each CCD. The overall signal distribution and grounding scheme is shown in this paper.

  16. Underwater cutting techniques developments

    International Nuclear Information System (INIS)

    Bach, F.-W.

    1990-01-01

    The primary circuit structures of different nuclear power plants are constructed of stainless steels, ferritic steels, plated ferritic steels and aluminium alloys. Depending on the level of specific radiation of these structures, it is necessary for dismantling to work with remotely controlled cutting techniques. The most effective way to protect the working crew against radiation exposure is to operate underwater at various depths. The following thermal cutting processes are more or less developed to work under water: for ferritic steels only, flame cutting; for ferritic steels, stainless steels, cladded steels and aluminium alloys, oxy-arc cutting, arc-waterjet cutting with a consumable electrode, arc-saw cutting, plasma-arc cutting and the plasma-arc saw. Flame cutting is a burning process; all the other processes are melt-cutting processes. This paper explains the different techniques, giving a short introduction to the theory, a discussion of the possibilities with the advantages and disadvantages of these processes, and an outlook on further research work in this interesting field. (author)

  17. Using a laser scanning camera for reactor inspection

    International Nuclear Information System (INIS)

    Armour, I.A.; Adrain, R.S.; Klewe, R.C.

    1984-01-01

    Inspection of nuclear reactors is normally carried out using TV or film cameras. There are, however, several areas where these cameras show considerable shortcomings. To overcome these difficulties, laser scanning cameras have been developed. This type of camera can be used for general visual inspection as well as for the provision of high-resolution video images with high-ratio on- and off-axis zoom capability. In this paper, we outline the construction and operation of a laser scanning camera, give examples of how it has been used in various power stations, and indicate potential future developments. (author)

  18. State of the art in video system performance

    Science.gov (United States)

    Lewis, Michael J.

    1990-01-01

    The closed circuit television (CCTV) system onboard the Space Shuttle comprises a camera, a video signal switching and routing unit (VSU), and the Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems versus users' requirements is shown graphically.

  19. Morphology, structure, composition and build-up processes of the active channel-mouth lobe complex of the Congo deep-sea fan with inputs from remotely operated underwater vehicle (ROV) multibeam and video surveys

    Science.gov (United States)

    Dennielou, Bernard; Droz, Laurence; Babonneau, Nathalie; Jacq, Céline; Bonnel, Cédric; Picot, Marie; Le Saout, Morgane; Saout, Yohan; Bez, Martine; Savoye, Bruno; Olu, Karine; Rabouille, Christophe

    2017-08-01

    The detailed structure and composition of turbiditic channel-mouth lobes are still largely unknown because they commonly lie at abyssal water depths, are very thin and are therefore beyond the resolution of hull-mounted acoustic tools. The morphology, structure and composition of the Congo turbiditic channel-mouth lobe complex (90×40 km; 2525 km²) were investigated with hull-mounted swath bathymetry, air gun seismics, a 3.5 kHz sub-bottom profiler, sediment piston cores, and also with high-resolution multibeam bathymetry and video acquired with a remotely operated vehicle (ROV). The lobe complex lies 760 km off the Congo River mouth in the Angola abyssal plain, between 4740 and 5030 m deep. It is active and is fed by turbidity currents that deposit several centimetres of sediment per century. The lobe complex is subdivided into five lobes that have prograded. The lobes are dominantly muddy. Sand represents ca. 13% of the deposits and is restricted to the feeding channel and distributaries. The overall lobe body is composed of thin muddy to silty turbidites. The whole lobe complex is characterized by in situ mass wasting (slumps, debrites). The 1-m-resolution bathymetry shows pervasive sliding and block avalanches on the edges of the feeding channel and the channel mouth, indicating that sliding occurs early and continuously in the lobe build-up. Mass wasting is interpreted as a consequence of very high accumulation rates, over-steepening and erosion along the channels, and is therefore an intrinsic process of lobe building. The bifurcation of feeding channels is probably triggered when the gradient in the distributaries at the top of a lobe becomes flat and turbidity currents find their way onto the higher gradient on the lobe side. It may also be triggered by mass wasting on the lobe side. When a new lobe develops, the abandoned lobes continue to collect significant turbiditic deposits from feeding-channel spillover, so that the whole lobe complex remains active. A

  20. FPS camera sync and reset chassis

    International Nuclear Information System (INIS)

    Yates, G.J.

    1980-06-01

    The sync and reset chassis provides all the circuitry required to synchronize an event to be studied, a remote free-running focus projection and scanning (FPS) data-acquisition TV camera, and a video signal recording system. The functions, design, and operation of this chassis are described in detail

  1. The Future of Video

    OpenAIRE

    Li, F.

    2016-01-01

    Executive Summary: A range of technological innovations (e.g. smart phones and digital cameras), infrastructural advances (e.g. broadband and 3G/4G wireless networks) and platform developments (e.g. YouTube, Facebook, Snapchat, Instagram, Amazon, and Netflix) are collectively transforming the way video is produced, distributed, consumed, archived – and importantly, monetised. Changes have been observed well beyond the mainstream TV and film industries, and these changes are increasingl...

  2. Endoscopic Camera Control by Head Movements for Thoracic Surgery

    NARCIS (Netherlands)

    Reilink, Rob; de Bruin, Gart; Franken, M.C.J.; Mariani, Massimo A.; Misra, Sarthak; Stramigioli, Stefano

    2010-01-01

    In current video-assisted thoracic surgery, the endoscopic camera is operated by an assistant of the surgeon, which has several disadvantages. This paper describes a system which enables the surgeon to control the endoscopic camera without the help of an assistant. The system is controlled using

  3. Development of camera technology for monitoring nests. Chapter 15

    Science.gov (United States)

    W. Andrew Cox; M. Shane Pruett; Thomas J. Benson; Scott J. Chiavacci; Frank R., III Thompson

    2012-01-01

    Photo and video technology has become increasingly useful in the study of avian nesting ecology. However, researchers interested in using camera systems are often faced with insufficient information on the types and relative advantages of available technologies. We reviewed the literature for studies of nests that used cameras and summarized them based on study...

  4. Design of Autonomous Underwater Vehicle

    Directory of Open Access Journals (Sweden)

    Tadahiro Hyakudome

    2011-03-01

    Full Text Available There are concerns about the impact that global warming will have on our environment, which will inevitably result in expanding deserts and rising water levels. While many kinds of underwater vehicles are in use, AUVs (Autonomous Underwater Vehicles) were considered and chosen as the most suitable tools for conducting surveys concerning these global environmental problems. AUVs can carry out comprehensive surveys because the vehicle does not have to be connected to the support vessel by a tether cable. When such underwater vehicles are built, the following must be considered: 1) the seawater and water-pressure environment, 2) the risk of sinking, 3) the absence of gas or battery-charging stations, 4) the unavailability of the Global Positioning System, and 5) the unavailability of radio waves. In the paper, an outline of these issues and how to deal with them is explained.

  5. Habitat Mapping Camera (HABCAM)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset entails imagery collected using the HabCam towed underwater vehicle and annotated data on objects or habitats in the images and notes on image...

  6. On the Accuracy Potential in Underwater/Multimedia Photogrammetry.

    Science.gov (United States)

    Maas, Hans-Gerd

    2015-07-24

    Underwater applications of photogrammetric measurement techniques usually need to deal with multimedia photogrammetry aspects, which are characterized by the necessity of handling optical rays that are refracted at interfaces between optical media with different refractive indices according to Snell's Law. This so-called multimedia geometry has to be incorporated into geometric models in order to achieve correct measurement results. The paper shows a flexible yet strict geometric model for the handling of refraction effects on the optical path, which can be implemented as a module into photogrammetric standard tools such as spatial resection, spatial intersection, bundle adjustment or epipolar line computation. The module is especially well suited for applications, where an object in water is observed by cameras in air through one or more planar glass interfaces, as it allows for some simplifications here. In the second part of the paper, several aspects, which are relevant for an assessment of the accuracy potential in underwater/multimedia photogrammetry, are discussed. These aspects include network geometry and interface planarity issues as well as effects caused by refractive index variations and dispersion and diffusion under water. All these factors contribute to a rather significant degradation of the geometric accuracy potential in underwater/multimedia photogrammetry. In practical experiments, a degradation of the quality of results by a factor of two could be determined under relatively favorable conditions.
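The refraction handling described in the abstract reduces, at each interface, to Snell's law. As a hedged illustration (not the paper's module; the port thicknesses, refractive indices and function names are assumptions), a ray from a camera in air can be traced through a flat glass port into water:

```python
import math

def refract(theta_i, n1, n2):
    """Snell's law: n1*sin(theta_i) = n2*sin(theta_t); angles in radians."""
    s = n1 * math.sin(theta_i) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection")
    return math.asin(s)

def horizontal_offset(theta_air, t_glass, t_water,
                      n_air=1.0, n_glass=1.5, n_water=1.33):
    """Lateral displacement of a ray crossing a planar glass port of
    thickness t_glass and then t_water of water (illustrative values)."""
    th_g = refract(theta_air, n_air, n_glass)
    th_w = refract(th_g, n_glass, n_water)
    return t_glass * math.tan(th_g) + t_water * math.tan(th_w)
```

Because the ray bends toward the normal in the denser media, the offset is smaller than the straight-line in-air value; this is exactly the effect a multimedia bundle adjustment has to model.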

  7. Validation of Underwater Sensor Package Using Feature Based SLAM

    Directory of Open Access Journals (Sweden)

    Christopher Cain

    2016-03-01

    Full Text Available Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects in the environment that could act as obstacles that would prevent the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for usage on low-cost vehicles designed to be used underwater. In this paper we propose a sensor package composed of a downward-facing camera, which is used to perform feature-tracking-based visual odometry, and a custom vision-based two-dimensional rangefinder that can be used on low-cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature-based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized, particle filter based approach, to validate the sensor package.

  8. Validation of Underwater Sensor Package Using Feature Based SLAM.

    Science.gov (United States)

    Cain, Christopher; Leonessa, Alexander

    2016-03-17

    Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects in the environment that could act as obstacles that would prevent the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for usage on low-cost vehicles designed to be used underwater. In this paper we propose a sensor package composed of a downward-facing camera, which is used to perform feature-tracking-based visual odometry, and a custom vision-based two-dimensional rangefinder that can be used on low-cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature-based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized, particle filter based approach, to validate the sensor package.

  9. Validation of Underwater Sensor Package Using Feature Based SLAM

    Science.gov (United States)

    Cain, Christopher; Leonessa, Alexander

    2016-01-01

    Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects in the environment that could act as obstacles that would prevent the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for usage on low-cost vehicles designed to be used underwater. In this paper we propose a sensor package composed of a downward-facing camera, which is used to perform feature-tracking-based visual odometry, and a custom vision-based two-dimensional rangefinder that can be used on low-cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature-based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized, particle filter based approach, to validate the sensor package. PMID:26999142
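The extended Kalman filter based approach named above alternates a motion-model prediction with a measurement update. A deliberately minimal 1D sketch of that cycle (a linear range-to-landmark model, so the linearization step is trivial; all numeric values are illustrative, not from the paper):

```python
def kf_1d(x_est, P, steps=10, u=1.0, L=10.0, Q=0.01, R=0.01):
    """Scalar predict/update cycle with range measurement z = L - x (H = -1)."""
    x_true = 0.0
    for _ in range(steps):
        x_true += u               # ground truth advances
        x_est += u                # predict with odometry input u
        P += Q                    # process noise inflates the covariance
        z = L - x_true            # simulated noise-free range to landmark L
        y = z - (L - x_est)       # innovation
        S = P + R                 # innovation covariance
        x_est += (-P / S) * y     # Kalman gain K = P*H/S with H = -1
        P *= (1.0 - P / S)        # covariance update (1 - K*H) * P
    return x_est, x_true, P
```

Starting from a biased estimate, the range updates pull the state back onto the true trajectory; the full SLAM filter does the same with a 2D pose and landmark Jacobians.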

  10. Underwater laser imaging system (UWLIS)

    Energy Technology Data Exchange (ETDEWEB)

    DeLong, M. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Practical limitations with underwater imaging systems are reached when the noise in the backscattered radiation generated in the water between the imaging system and the target obscures the spatial contrast and resolution necessary for target discovery and identification. The advent of high power lasers operating in the blue-green portion of the visible spectrum (oceanic transmission window) has led to improved experimental illumination systems for underwater imaging. Range-gated and synchronously scanned devices take advantage of the unique temporal and spatial coherence properties of laser radiation, respectively, to overcome the deleterious effects of common-volume backscatter.

  11. Underwater measurements of muon intensity

    Science.gov (United States)

    Fedorov, V. M.; Pustovetov, V. P.; Trubkin, Y. A.; Kirilenkov, A. V.

    1985-01-01

    Experimental measurements of cosmic ray muon intensity deep underwater, aimed at determining a muon absorption curve, are of considerable interest, as they allow the muon energy spectrum at sea level to be reproduced independently. The comparison of the muon absorption curve in sea water with that in rock makes it possible to determine muon energy losses caused by nuclear interactions. The available data on muon absorption in water and in rock are not equivalent. Underground measurements are numerous and have been carried out down to a depth of approx. 15 km w.e., whereas underwater muon intensity has been measured only twice, and only down to a depth of approx. 3 km.

  12. Close-Range Tracking of Underwater Vehicles Using Light Beacons

    Science.gov (United States)

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Istenič, Klemen; Ribas, David

    2016-01-01

    This paper presents a new tracking system for autonomous underwater vehicles (AUVs) navigating in a close formation, based on computer vision and the use of active light markers. While acoustic localization can be very effective from medium to long distances, it is not so advantageous in short distances when the safety of the vehicles requires higher accuracy and update rates. The proposed system allows the estimation of the pose of a target vehicle at short ranges, with high accuracy and execution speed. To extend the field of view, an omnidirectional camera is used. This camera provides a full coverage of the lower hemisphere and enables the concurrent tracking of multiple vehicles in different positions. The system was evaluated in real sea conditions by tracking vehicles in mapping missions, where it demonstrated robust operation during extended periods of time. PMID:27023547

  13. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  14. Calibration of Underwater Sound Transducers

    OpenAIRE

    H.R.S. Sastry

    1983-01-01

    The techniques of calibration of underwater sound transducers for far-field, near-field and closed environment conditions are reviewed in this paper. The design of an acoustic calibration tank is mentioned. The facilities available at the Naval Physical & Oceanographic Laboratory, Cochin for calibration of transducers are also listed.

  15. Underwater nuclear power plant structure

    International Nuclear Information System (INIS)

    Severs, S.; Toll, H.V.

    1982-01-01

    A structure for an underwater nuclear power generating plant comprising a triangular platform formed of tubular leg and truss members upon which are attached one or more large spherical pressure vessels and one or more small cylindrical auxiliary pressure vessels. (author)

  16. Underwater Robots Surface in Utah

    Science.gov (United States)

    Hurd, Randy C.; Hacking, Kip S.; Damarjian, Jennifer L.; Wright, Geoffrey A.; Truscott, Tadd

    2015-01-01

    Underwater robots (or ROVs: Remotely Operated Vehicles as they are typically called in industry) have recently become a very popular instructional STEM activity. Nationally, ROVs have been used in science and technology classrooms for several years in cities such as Seattle, San Diego, Virginia Beach, and other coastal areas. In the past two…

  17. Video Coding Technique using MPEG Compression Standards

    African Journals Online (AJOL)

    Akorede

    multimedia communication and storage. Technologies such as mobile video, mobile TV, DTV, DVD players, digital cameras, video telephony and multimedia messaging have all been enabled through the availability of efficient compression algorithms. Photographs, printed text, and other hard copy media are now routinely ...

  18. Video Conferencing for a Virtual Seminar Room

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Fosgerau, A.; Hansen, Peter Søren K.

    2002-01-01

    A PC-based video conferencing system for a virtual seminar room is presented. The platform is enhanced with DSPs for audio and video coding and processing. A microphone array is used to facilitate audio based speaker tracking, which is used for adaptive beam-forming and automatic camera-control...

  19. Video systems for alarm assessment

    International Nuclear Information System (INIS)

    Greenwoll, D.A.; Matter, J.C.; Ebel, P.E.

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs

  20. An Evaluation of Video-to-Video Face Verification

    NARCIS (Netherlands)

    Poh, N.; Chan, C.H.; Kittler, J.; Marcel, S.; Mc Cool, C.; Argones Rúa, E.; Alba Castro, J.L.; Villegas, M.; Paredes, R.; Štruc, V.; Pavešić, N.; Salah, A.A.; Fang, H.; Costen, N.

    2010-01-01

    Person recognition using facial features, e.g., mug-shot images, has long been used in identity documents. However, due to the widespread use of web-cams and mobile devices embedded with a camera, it is now possible to realize facial video recognition, rather than resorting to just still images. In

  1. 20-meter underwater wireless optical communication link with 1.5 Gbps data rate.

    Science.gov (United States)

    Shen, Chao; Guo, Yujian; Oubei, Hassan M; Ng, Tien Khee; Liu, Guangyu; Park, Ki-Hong; Ho, Kang-Ting; Alouini, Mohamed-Slim; Ooi, Boon S

    2016-10-31

    Video streaming, data transmission, and remote control underwater call for a high-speed (Gbps) communication link with a long channel length (~10 meters). We present a compact and low power consumption underwater wireless optical communication (UWOC) system utilizing a 450-nm laser diode (LD) and a Si avalanche photodetector. With the LD operating at a driving current of 80 mA with an optical power of 51.3 mW, we demonstrated a high-speed UWOC link offering a data rate up to 2 Gbps over a 12-meter-long, and 1.5 Gbps over a record 20-meter-long underwater channel. The measured bit-error rates (BER) are 2.8 × 10⁻⁵ and 3.0 × 10⁻³, respectively, both well within the forward error correction (FEC) criterion.

  2. 20-meter underwater wireless optical communication link with 1.5 Gbps data rate

    KAUST Repository

    Shen, Chao

    2016-10-24

    Video streaming, data transmission, and remote control underwater call for a high-speed (Gbps) communication link with a long channel length (∼10 meters). We present a compact and low power consumption underwater wireless optical communication (UWOC) system utilizing a 450-nm laser diode (LD) and a Si avalanche photodetector. With the LD operating at a driving current of 80 mA with an optical power of 51.3 mW, we demonstrated a high-speed UWOC link offering a data rate up to 2 Gbps over a 12-meter-long, and 1.5 Gbps over a record 20-meter-long underwater channel. The measured bit-error rates (BER) are 2.8 × 10⁻⁵ and 3.0 × 10⁻³, respectively, both well within the forward error correction (FEC) criterion. © 2016 Optical Society of America.
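"Passing the FEC criterion" means the raw channel BER lies below the error rate the forward-error-correction decoder can fully correct. A one-line check (the 3.8 × 10⁻³ limit used here is the widely quoted 7%-overhead hard-decision threshold; the paper does not state which code it assumes):

```python
def passes_fec(ber, fec_limit=3.8e-3):
    """True if the measured pre-FEC bit-error rate is correctable."""
    return ber < fec_limit

# measured BERs from the paper: 12 m / 2 Gbps and 20 m / 1.5 Gbps links
assert passes_fec(2.8e-5)
assert passes_fec(3.0e-3)
```

Both measured links clear the threshold, with the 20-meter link much closer to the limit, consistent with the extra attenuation of the longer water channel.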

  3. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. This is by far the most informative analog and digital video reference available, includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  4. Video essay

    DEFF Research Database (Denmark)

    2015-01-01

    Camera movement has a profound influence on the way films look and the way films are experienced by spectators. In this visual essay Jakob Isak Nielsen proposes six major functions of camera movement in narrative cinema. Individual camera movements may serve more of these functions at the same time...

  5. Underwater Hyperspectral Imaging (UHI) for Assessing the Coverage of Drill Cuttings on Benthic Habitats

    Science.gov (United States)

    Erdal, I.; Sandvik Aas, L. M.; Cochrane, S.; Ekehaug, S.; Hansen, I. M.

    2016-02-01

    Larger-scale mapping of seabed areas requires improved methods in order to obtain effective and sound marine management. The state of the art for visual surveys today involves video transects, which is a proven, yet time-consuming and subjective method. Underwater hyperspectral imaging (UHI) exploits the color-sensitive information in the visible light reflected from objects on the seafloor to automatically identify seabed organisms and other objects of interest (OOIs). A spectral library containing optical fingerprints of a range of OOIs is used in the classification. The UHI is a push-broom hyperspectral camera built around a state-of-the-art CMOS sensor, ensuring high sensitivity and low noise levels. Dedicated lamps illuminate the imaged area of the seafloor. Specialized software is used both for processing raw data and for geo-localization and OOI identification. The processed hyperspectral images are used as a reference when extracting new OOI spectra for the spectral library. By using the spectral library in classification algorithms, large seafloor areas can be classified automatically. Recent advances in UHI classification include mapping of areas affected by drill cuttings. Tools for automated classification of seabed whose bottom composition differs from that of adjacent baseline areas are under development. Tests have been applied to a transect along the gradient from the drill hole to baseline seabed. Some areas along the transect were identified as different from the baseline seabed. The finding was supported by results from traditional seabed mapping methods. We propose that this can be a useful tool for tomorrow's environmental mapping and monitoring of drill sites.
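A common classifier for matching a pixel spectrum against such a library is the spectral angle mapper (SAM), which is insensitive to brightness scaling, a useful property under uneven lamp illumination. A sketch (the library spectra, labels and threshold are invented for illustration; the UHI software's actual classifier is not described here):

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two reflectance spectra; small = similar."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def classify(pixel, library, max_angle=0.1):
    """Label the pixel with the closest library spectrum, or None if no
    spectrum is within max_angle (unclassified seabed)."""
    label, angle = min(((name, spectral_angle(pixel, ref))
                        for name, ref in library.items()), key=lambda t: t[1])
    return label if angle <= max_angle else None
```

Because only the angle matters, a brightly and a dimly lit patch of the same material map to the same label, while pixels far from every library entry stay unclassified.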

  6. Effectiveness of an Automatic Tracking Software in Underwater Motion Analysis

    Directory of Open Access Journals (Sweden)

    Fabrício A. Magalhaes

    2013-12-01

    Full Text Available Tracking of markers placed on anatomical landmarks is a common practice in sports science to perform the kinematic analysis that interests both athletes and coaches. Although different software programs have been developed to automatically track markers and/or features, none of them was specifically designed to analyze underwater motion. Hence, this study aimed to evaluate the effectiveness of a software program developed for automatic tracking of underwater movements (DVP), based on the Kanade-Lucas-Tomasi feature tracker. Twenty-one video recordings of different aquatic exercises (n = 2940 marker positions) were manually tracked to determine the markers' center coordinates. Then, the videos were automatically tracked using DVP and a commercially available software package (COM). Since tracking techniques may produce false targets, an operator was instructed to stop the automatic procedure and to correct the position of the cursor when the distance between the calculated marker coordinate and the reference one was higher than 4 pixels. The proportion of manual interventions required by the software was used as a measure of the degree of automation. Overall, manual interventions were 10.4% lower for DVP (7.4%) than for COM (17.8%). Moreover, when examining the different exercise modes separately, the percentage of manual interventions was 5.6% to 29.3% lower for DVP than for COM. Similar results were observed when analyzing the type of marker rather than the type of exercise, with 9.9% fewer manual interventions for DVP than for COM. In conclusion, based on these results, the automatic tracking software presented here can be used as a valid and useful tool for underwater motion analysis.
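The Kanade-Lucas-Tomasi tracker that DVP builds on estimates a marker's displacement by solving a small least-squares system over image gradients. A single-step, pure-Python sketch on synthetic analytic frames (an illustration of the principle, not the DVP implementation):

```python
def lk_shift(I1, I2, xs, ys):
    """One Lucas-Kanade step: solve A*v = b for the translation v between
    frames I1 and I2 (callables), over the interior of the xs/ys grid."""
    A11 = A12 = A22 = b1 = b2 = 0.0
    for y in ys[1:-1]:
        for x in xs[1:-1]:
            Ix = (I1(x + 1, y) - I1(x - 1, y)) / 2.0   # spatial gradients
            Iy = (I1(x, y + 1) - I1(x, y - 1)) / 2.0
            It = I2(x, y) - I1(x, y)                   # temporal difference
            A11 += Ix * Ix; A12 += Ix * Iy; A22 += Iy * Iy
            b1 -= Ix * It;  b2 -= Iy * It
    det = A11 * A22 - A12 * A12                        # 2x2 structure tensor
    return ((A22 * b1 - A12 * b2) / det, (A11 * b2 - A12 * b1) / det)
```

On a quadratic test image shifted by (0.4, −0.3) pixels this single step recovers the sub-pixel shift, which is the kind of marker-centre precision the study's 4-pixel tolerance relies on.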

  7. Archaeology Hijacked: Addressing the Historical Misappropriations of Maritime and Underwater Archaeology

    Science.gov (United States)

    Gately, Iain; Benjamin, Jonathan

    2017-09-01

    As a discipline that has grown up in the eyes of the camera, maritime and underwater archaeology has struggled historically to distinguish itself from early misrepresentations of it as adventure-seeking, treasure hunting and underwater salvage as popularized in the 1950s and 1960s. Though many professional archaeologists have successfully moved forward from this history through broader theoretical engagement and the development of the discipline within anthropology, public perception of archaeology under water has not advanced in stride. Central to this issue is the portrayal of underwater archaeology within popular culture and the representational structures from the 1950s and 1960s persistently used to introduce the profession to the public, through the consumption of popular books and especially television. This article explores representations of maritime and underwater archaeology to examine how the discipline has been consumed by the public, both methodologically and theoretically, through media. In order to interrogate this, we first examine maritime and underwater archaeology as a combined sub-discipline of archaeology and consider how it has been defined historically and in contemporary professional practice. Finally, we consider how practitioners can take a proactive approach to portray their work and convey archaeological media to the public. In this respect, we aim to advance the theoretical discussion in a way so as to reduce further cases whereby archaeology is accidentally misappropriated or deliberately hijacked.

  8. Intelligent Model for Video Survillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

    Full Text Available Video surveillance systems sense and track threats in a real-time environment. They guard against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance has become a key technology for addressing problems in public security. Such systems are mostly deployed on IP-based networks, so all the security threats that exist for IP-based applications may also threaten a video surveillance deployment. As a result, cybercrime, illegal video access, mishandling of videos and so on may increase. Hence, in this paper an intelligent model is proposed to secure the video surveillance system, ensuring safety and providing secured access to video.

  9. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced wavelet algorithms, and storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high-resolution video up to 4 Mpixels @ 60 fps or high-frame-rate video up to about 1000 fps @ 512×512 pixels.
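The scale of the compression problem follows from the raw numbers quoted above. A back-of-the-envelope estimate (8-bit monochrome pixels and a ~10 Mbit/s downlink are assumptions for illustration, not EML specifications):

```python
def raw_gbps(pixels, fps, bits_per_pixel=8):
    """Uncompressed video bit rate in Gbit/s (8-bit mono pixels assumed)."""
    return pixels * fps * bits_per_pixel / 1e9

hi_res = raw_gbps(4_000_000, 60)     # 4 Mpixel camera at 60 fps -> ~1.92 Gbit/s
hi_rate = raw_gbps(512 * 512, 1000)  # 512x512 at ~1000 fps -> ~2.10 Gbit/s
ratio = hi_res * 1e9 / 10e6          # compression needed for live downlink
```

Both acquisition modes produce roughly 2 Gbit/s of raw data, so streaming them live over a link in the tens of Mbit/s requires compression ratios on the order of 100:1 to 200:1, hence the emphasis on wavelet compression and on-board storage.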

  10. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.

  11. Design and Fabrication of Nereid-UI: A Remotely Operated Underwater Vehicle for Oceanographic Access Under Ice

    Science.gov (United States)

    Whitcomb, L. L.; Bowen, A. D.; Yoerger, D.; German, C. R.; Kinsey, J. C.; Mayer, L. A.; Jakuba, M. V.; Gomez-Ibanez, D.; Taylor, C. L.; Machado, C.; Howland, J. C.; Kaiser, C. L.; Heintz, M.; Pontbriand, C.; Suman, S.; O'hara, L.

    2013-12-01

    The Woods Hole Oceanographic Institution and collaborators from the Johns Hopkins University and the University of New Hampshire are developing for the Polar Science Community a remotely-controlled underwater robotic vehicle capable of being tele-operated under ice with remote real-time human supervision. The Nereid Under-Ice (Nereid-UI) vehicle will enable exploration and detailed examination of biological and physical environments at glacial ice-tongues and ice-shelf margins, delivering high-definition video in addition to survey data from on-board acoustic, chemical, and biological sensors. Preliminary propulsion system testing indicates the vehicle will be able to attain standoff distances of up to 20 km from an ice-edge boundary, as dictated by the current maximum tether length. The goal of the Nereid-UI system is to provide scientific access to under-ice and ice-margin environments that is presently impractical or infeasible. FIBER-OPTIC TETHER: The heart of the Nereid-UI system is its expendable fiber optic telemetry system. The telemetry system utilizes many of the same components pioneered for the full-ocean depth capable HROV Nereus vehicle, with the addition of continuous fiber status monitoring, and new float-pack and depressor designs that enable single-body deployment. POWER SYSTEM: Nereid-UI is powered by a pressure-tolerant lithium-ion battery system composed of 30 Ah prismatic pouch cells, arranged on a 90 volt bus and capable of delivering 15 kW. The cells are contained in modules of 8 cells, and groups of 9 modules are housed together in oil-filled plastic boxes. The power distribution system uses pressure-tolerant components extensively, each of which has been individually qualified to 10 kpsi and for operation between -20 C and 40 C. THRUSTERS: Nereid-UI will employ eight identical WHOI-designed thrusters, each with a frameless motor, oil-filled and individually compensated, and designed for low-speed (500 rpm max) direct drive.
We expect an end

  12. Teacher Self-Captured Video: Learning to See

    Science.gov (United States)

    Sherin, Miriam Gamoran; Dyer, Elizabeth B.

    2017-01-01

    Videos are often used for demonstration and evaluation, but a more productive approach would be using video to support teachers' ability to notice and interpret classroom interactions. That requires thinking carefully about the physical aspects of shooting video--where the camera is placed and how easily student interactions can be heard--as well…

  13. Cooperative Rendezvous and Docking for Underwater Robots Using Model Predictive Control and Dual Decomposition

    DEFF Research Database (Denmark)

    Nielsen, Mikkel Cornelius; Johansen, Tor Arne; Blanke, Mogens

    2018-01-01

    This paper considers the problem of rendezvous and docking with visual constraints in the context of underwater robots with camera-based navigation. The objective is the convergence of the vehicles to a common point while maintaining visual contact. The proposed solution includes the design of a distributed model predictive controller based on dual decomposition, which allows for optimization in a decentralized fashion. The proposed distributed controller enables rendezvous and docking between vehicles while maintaining visual contact.
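Dual decomposition splits the coupled rendezvous problem so that each vehicle solves only its own subproblem, with a shared dual variable acting as a price that enforces agreement on the meeting point. A toy sketch with quadratic costs (the costs, step size and iteration count are invented, not the paper's formulation):

```python
def rendezvous(a1, a2, alpha=0.4, iters=60):
    """Dual ascent on the consensus constraint x1 = x2 in 2D.

    Each vehicle i minimizes 0.5*||x - a_i||^2 plus a price term; the local
    minimizers are closed-form, and only the multiplier is exchanged."""
    lam = [0.0, 0.0]
    for _ in range(iters):
        x1 = [a1[i] - lam[i] for i in range(2)]   # argmin of cost1 + lam.x
        x2 = [a2[i] + lam[i] for i in range(2)]   # argmin of cost2 - lam.x
        lam = [lam[i] + alpha * (x1[i] - x2[i]) for i in range(2)]
    return x1, x2
```

The multiplier converges to the point where both local solutions coincide (here the midpoint of the two start positions); the real controller performs this decentralized optimization over MPC trajectories with visual-contact constraints rather than single points.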

  14. Video Analytics for Business Intelligence

    CERN Document Server

    Porikli, Fatih; Xiang, Tao; Gong, Shaogang

    2012-01-01

    Closed Circuit TeleVision (CCTV) cameras have been increasingly deployed pervasively in public spaces including retail centres and shopping malls. Intelligent video analytics aims to automatically analyze content of massive amount of public space video data and has been one of the most active areas of computer vision research in the last two decades. Current focus of video analytics research has been largely on detecting alarm events and abnormal behaviours for public safety and security applications. However, increasingly CCTV installations have also been exploited for gathering and analyzing business intelligence information, in order to enhance marketing and operational efficiency. For example, in retail environments, surveillance cameras can be utilised to collect statistical information about shopping behaviour and preference for marketing (e.g., how many people entered a shop; how many females/males or which age groups of people showed interests to a particular product; how long did they stay in the sho...

  15. Diversity-Aware Multi-Video Summarization.

    Science.gov (United States)

    Panda, Rameswar; Mithun, Niluthpol Chowdhury; Roy-Chowdhury, Amit K

    2017-10-01

    Most video summarization approaches have focused on extracting a summary from a single video; we propose an unsupervised framework for summarizing a collection of videos. We observe that each video in the collection may contain some information that other videos do not have, and thus exploring the underlying complementarity could be beneficial in creating a diverse informative summary. We develop a novel diversity-aware sparse optimization method for multi-video summarization by exploring the complementarity within the videos. Our approach extracts a multi-video summary, which is both interesting and representative in describing the whole video collection. To efficiently solve our optimization problem, we develop an alternating minimization algorithm that minimizes the overall objective function with respect to one video at a time while fixing the other videos. Moreover, we introduce a new benchmark data set, Tour20, that contains 140 videos with multiple manually created summaries, which were acquired in a controlled experiment. Finally, by extensive experiments on the new Tour20 data set and several other multi-view data sets, we show that the proposed approach clearly outperforms the state-of-the-art methods on the two problems: topic-oriented video summarization and multi-view video summarization in a camera network.
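    The alternating-minimization strategy described above (optimize over one block of variables at a time, holding the others fixed) can be sketched on a tiny stand-in problem. The two-variable quadratic below is illustrative only, not the paper's objective; each block update has a closed form, mirroring the per-video subproblems.

```python
# Alternating minimization on f(x, y) = (x-1)^2 + (y-2)^2 + (x-y)^2.
# Each step minimizes the joint objective over one variable with the
# other held fixed, in closed form.

def solve_x(y):   # argmin_x (x-1)^2 + (x-y)^2
    return (1 + y) / 2

def solve_y(x):   # argmin_y (y-2)^2 + (x-y)^2
    return (2 + x) / 2

x, y = 0.0, 0.0
for _ in range(50):        # alternate until the blocks stop changing
    x = solve_x(y)
    y = solve_y(x)

print(round(x, 4), round(y, 4))  # → 1.3333 1.6667, the joint minimum (4/3, 5/3)
```

Because the coupled objective is convex in each block, the alternation converges to the joint minimizer; the paper applies the same pattern with one video's summary variables as each block.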

  16. Range-Measuring Video Sensors

    Science.gov (United States)

    Howard, Richard T.; Briscoe, Jeri M.; Corder, Eric L.; Broderick, David

    2006-01-01

    Optoelectronic sensors of a proposed type would perform the functions of both electronic cameras and triangulation-type laser range finders. That is to say, these sensors would both (1) generate ordinary video or snapshot digital images and (2) measure the distances to selected spots in the images. These sensors would be well suited to use on robots that are required to measure distances to targets in their work spaces. In addition, these sensors could be used for all the purposes for which electronic cameras have been used heretofore. The simplest sensor of this type, illustrated schematically in the upper part of the figure, would include a laser, an electronic camera (either video or snapshot), a frame-grabber/image-capturing circuit, an image-data-storage memory circuit, and an image-data processor. There would be no moving parts. The laser would be positioned at a lateral distance d to one side of the camera and would be aimed parallel to the optical axis of the camera. When the range of a target in the field of view of the camera was required, the laser would be turned on and an image of the target would be stored and preprocessed to locate the angle (a) between the optical axis and the line of sight to the centroid of the laser spot.
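    The geometry described above gives the range directly: the baseline d subtends the angle a at the target, so R = d / tan(a). A minimal sketch, with an assumed focal length (in pixels) and baseline that are purely illustrative:

```python
import math

# Triangulation range from the laser-spot angle: laser offset d to one side
# of the camera, beam parallel to the optical axis, spot found at some pixel
# offset from the image center.

def spot_angle(pixel_offset, focal_length_px):
    """Angle between the optical axis and the line of sight to the spot centroid."""
    return math.atan(pixel_offset / focal_length_px)

def range_from_angle(d, a):
    """Target range: the baseline d subtends angle a at the target, so R = d / tan(a)."""
    return d / math.tan(a)

f_px = 800.0    # assumed focal length in pixels
d = 0.10        # assumed 10 cm camera-laser baseline
a = spot_angle(40.0, f_px)   # laser-spot centroid found 40 px off-axis
print(round(range_from_angle(d, a), 3))  # → 2.0 (metres, for these numbers)
```

Note the trade-off implicit in the formula: a larger baseline d or longer focal length improves range resolution, at the cost of a larger sensor or a narrower field of view.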

  17. Network Computing for Distributed Underwater Acoustic Sensors

    Science.gov (United States)

    2014-03-31

    Physical layer in UASNs: Our main investigations are about underwater communications using acoustic waves. Electromagnetic and optical waves do not...Shengli, Z., and Jun-Hong, C. (2008), Prospects and problems of wireless communication for underwater sensor networks, Wirel. Commun. Mob. Comput., 8(8)... Wireless Communications, 9(9), 2934–2944. [21] Pompili, D. and Akyildiz, I. (2010), A multimedia cross-layer protocol for underwater acoustic sensor networks

  18. Cooperative OFDM underwater acoustic communications

    CERN Document Server

    Cheng, Xilin; Cheng, Xiang

    2016-01-01

    Following underwater acoustic channel modeling, this book investigates the relationship between coherence time and transmission distance. It considers the power allocation issues of two typical transmission scenarios, namely short-range transmission and medium-to-long-range transmission. For the former scenario, an adaptive system is developed based on instantaneous channel state information. The primary focus is on cooperative dual-hop orthogonal frequency division multiplexing (OFDM). The book includes decomposed fountain codes designed to enable reliable communications with higher energy efficiency, and it covers the Doppler effect together with effective low-complexity mirror-mapping-based intercarrier interference cancellation schemes capable of suppressing the intercarrier interference power level, which improve packet transmission reliability. Designed for professionals and researchers in the field of underwater acoustic communications, this book is also suitable for advanced-level students in electrical enginee...

  19. International Conference on Underwater Environment

    CERN Document Server

    Jaulin, Luc; Creuze, Vincent; Debese, Nathalie; Quidu, Isabelle; Clement, Benoît; Billon-Coat, Annick

    2016-01-01

    This volume constitutes the results of the International Conference on Underwater Environment, MOQESM’14, held at “Le Quartz” Conference Center in Brest, France, on October 14-15, 2014, within the framework of the 9th Sea Tech Week, International Marine Science and Technology Event. The objective of MOQESM'14 was to bring together researchers from both academia and industry, interested in marine robotics and hydrography with application to the coastal environment mapping and underwater infrastructures surveys. The common thread of the conference is the combination of technical control, perception, and localization, typically used in robotics, with the methods of mapping and bathymetry. The papers presented in this book focus on two main topics. Firstly, coastal and infrastructure mapping is addressed, focusing not only on hydrographic systems, but also on positioning systems, bathymetry, and remote sensing. The proposed methods rely on acoustic sensors such as side scan sonars, multibeam echo sounders, ...

  20. Those Nifty Digital Cameras!

    Science.gov (United States)

    Ekhaml, Leticia

    1996-01-01

    Describes digital photography--an electronic imaging technology that merges computer capabilities with traditional photography--and its uses in education. Discusses how a filmless camera works, types of filmless cameras, advantages and disadvantages, and educational applications of the consumer digital cameras. (AEF)

  1. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen...

  2. Tracking the position of the underwater robot for nuclear reactor inspection

    International Nuclear Information System (INIS)

    Jeo, J. W.; Kim, C. H.; Seo, Y. C.; Choi, Y. S.; Kim, S. H.

    2003-01-01

    The tracking procedure for an underwater mobile robot moving and submerging in a nuclear reactor vessel for visual inspection, which is required to find foreign objects such as loose parts, is described. The yellowish underwater robot body presents a strong contrast to the boron-solute cold water of the nuclear reactor vessel, tinged with indigo by the Cerenkov effect. In this paper, we found and tracked the positions of the underwater mobile robot using these two color cues, yellow and indigo. From the horizontal and vertical profile analysis of the color image, the blue, green, and gray components have inferior signal-to-noise characteristics compared to the red component. The center-coordinate extraction procedure is as follows. The first step is to segment the underwater robot body from the cold water with its indigo background. From the RGB color components of the entire monitoring image taken with the color CCD camera, we selected the red color component. In the selected red image, we extracted the positions of the underwater mobile robot using the following processing sequence: binarization, labelling, and centroid extraction. In the experiment carried out at the Youngkwang unit 5 nuclear reactor vessel, we tracked the center positions of the underwater robot submerged near the cold-leg and hot-leg ways, at a depth of 10 m. When the position of the robot vehicle fluctuated between the previous and the current image frame due to flickering noise and a light source installed temporarily at the bottom of the reactor vessel, we adaptively adjusted the ROI window: by adding the ROI window of the previous frame to the current frame and then setting up the ROI window of the next image frame, we can robustly track the positions of the underwater robot and bound the divergence of the target position. From these facts, we conclude that using the red component from the color camera is the more efficient tracking method
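    The red-channel pipeline described above (channel selection, binarization, centroid extraction) can be sketched in a few lines. The synthetic frame and thresholds below are illustrative stand-ins, not values from the article; the labelling step is skipped because the toy frame contains a single blob.

```python
import numpy as np

# Synthetic monitoring frame: indigo-ish water with a yellow "robot" patch.
H, W = 120, 160
frame = np.zeros((H, W, 3))
frame[..., 2] = 0.6                      # water: strong blue everywhere
frame[40:60, 70:100, 0] = 0.9            # robot body: strong red...
frame[40:60, 70:100, 1] = 0.8            # ...and green (i.e. yellow)

red = frame[..., 0]                      # red component has the best contrast
mask = red > 0.5                         # binarization
ys, xs = np.nonzero(mask)                # pixels of the (single) blob
cy, cx = ys.mean(), xs.mean()            # centroid extraction
print(round(cy, 1), round(cx, 1))        # → 49.5 84.5, center of the patch
```

In the real system the search would be restricted to the adaptively adjusted ROI window rather than the whole frame, which also suppresses spurious bright pixels from the temporary light source.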

  3. 3D MODELS COMPARISON OF COMPLEX SHELL IN UNDERWATER AND DRY ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    S. Troisi

    2015-04-01

    Full Text Available In marine biology, the shape, morphology, texture and dimensions of shells and organisms like sponges and gorgonians are very important parameters. For example, a particular type of gorgonian grows only a few millimeters every year; this estimate was obtained without any measurement instrument, through successive observational studies, because the organism is very fragile: contact could compromise its structure and survival. A non-contact measurement system has to be used to preserve such organisms, and photogrammetry is a method capable of assuring high accuracy without contact. Nevertheless, the achievement of a 3D photogrammetric model of a complex object (such as gorgonians or particular shells) is a challenge even in normal environments, whether with a metric camera or with a consumer camera. Indeed, the success of automatic target-less image orientation and image matching algorithms is strictly correlated with the texture properties of the object and with the quality of the camera calibration. In the underwater scenario, environmental conditions strongly influence the quality of the results; in particular, water turbidity, the presence of suspended matter, flare and other optical aberrations degrade image quality, reducing the accuracy and increasing the noise of the 3D model. Furthermore, the variability of seawater density influences its refractive index and consequently the interior orientation camera parameters. For this reason, camera calibration has to be performed under the same survey conditions. In this paper, a comparison between 3D models of a Charonia tritonis shell is carried out through surveys conducted both in dry and underwater environments.
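    The reason in-air calibration breaks down underwater can be made concrete with Snell's law. The sketch below assumes a flat housing port (an assumption not stated in the abstract) and nominal refractive indices; for paraxial rays the apparent focal length scales by roughly the water index, which is why the interior orientation must be recalibrated in-water.

```python
import math

# Refraction at a flat port: n_air * sin(theta_air) = n_water * sin(theta_water).
# For small angles the ray-angle magnification approaches n_water / n_air,
# so the in-air interior orientation parameters no longer hold underwater.

n_air, n_water = 1.0, 1.333   # nominal indices; seawater varies with density

def angle_in_air(theta_water):
    """Angle in air of a ray travelling at theta_water in the water (radians)."""
    return math.asin(n_water * math.sin(theta_water) / n_air)

small = 1e-4                           # a paraxial ray
mag = angle_in_air(small) / small
print(round(mag, 3))  # → 1.333: the scene appears magnified, the FOV narrowed
```

Since seawater density (and hence n_water) varies with temperature and salinity, even an in-water calibration is only strictly valid for the conditions under which it was performed, which is the point made in the abstract.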

  4. Underwater Coatings for Contamination Control

    International Nuclear Information System (INIS)

    Julia L. Tripp; Kip Archibald; Ann-Marie Phillips; Joseph Campbell

    2004-01-01

    The Idaho National Engineering and Environmental Laboratory (INEEL) is deactivating several fuel storage basins. Airborne contamination is a concern when the sides of the basins are exposed and allowed to dry during water removal. One way of controlling this airborne contamination is to fix the contamination in place while the pool walls are still submerged. There are many underwater coatings available on the market that are used in marine, naval and other applications. A series of tests was run to determine whether the candidate underwater fixatives are easily applied and adhere well to the substrates (pool wall materials) found in INEEL fuel pools. The four pools considered included (1) Test Area North (TAN-607) with epoxy-painted concrete walls; (2) Idaho Nuclear Technology and Engineering Center (INTEC) (CPP-603) with bare concrete walls; (3) Materials Test Reactor (MTR) Canal with stainless-steel-lined concrete walls; and (4) Power Burst Facility (PBF-620) with stainless-steel-lined concrete walls on the bottom and epoxy-painted carbon-steel-lined walls on the upper portions. Therefore, the four materials chosen for testing were bare concrete, epoxy-painted concrete, epoxy-painted carbon steel, and stainless steel. The typical water temperature of the pools varies from 55 F to 80 F depending on the pool and the season; these tests were done at room temperature. The following criteria were used during this evaluation. The underwater coating must: (1) be easy to apply; (2) adhere well to the four surfaces of interest; (3) not change or have a negative impact on water chemistry or clarity; (4) not be hazardous in final applied form; and (5) be proven in other underwater applications. In addition, it is desirable for the coating to have a high pigment or high cross-link density to prevent radiation from penetrating. This paper will detail the testing completed and the test results. A proprietary two-part, underwater epoxy owned by S. G. Pinney and Associates

  5. vid113_0401r -- Video groundtruthing collected from RV Tatoosh during August 2005.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  6. A Study towards Real Time Camera Calibration

    OpenAIRE

    Choudhury, Ragini

    2000-01-01

    Preliminary Report Prepared for the Project VISTEO; This report provides a detailed study of the problem of real time camera calibration. This analysis, based on the study of literature in the area, as well as the experiments carried out on real and synthetic data, is motivated by the requirements of the VISTEO project. VISTEO deals with a fusion of real images and synthetic environments, objects etc in TV video sequences. It thus deals with a challenging and fast growing area in virtual real...

  7. The Use of Camera Traps in Wildlife

    Directory of Open Access Journals (Sweden)

    Yasin Uçarlı

    2013-11-01

    Full Text Available Camera traps are increasingly used in abundance and density estimation of wildlife species. They are a very good alternative to direct observation, particularly in steep terrain, densely vegetated areas, or for nocturnal species. Their main advantage is that they eliminate economic, personnel, and time losses by operating continuously at different points at the same time. Camera traps are motion- and heat-sensitive and can take photos or video depending on the model. Crossing points and feeding or mating areas of the focal species are the priority locations for setting camera traps. Population size can be estimated from the images combined with capture-recapture methods, and population density is then the population size divided by the effective sampling area. Mating and breeding seasons, habitat choice, group structures, and survival rates of the focal species can also be derived from the images. Camera traps are thus very useful for economically obtaining the necessary data about particularly elusive species for planning and conservation efforts.
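    The size-then-density workflow described above can be sketched with a standard capture-recapture estimator. The variant below is Chapman's correction of the Lincoln-Petersen estimate; the counts and sampling area are invented for illustration.

```python
# Capture-recapture abundance estimate from camera-trap identifications:
# individuals "marked" (identified) in a first session, individuals caught
# on camera in a second session, and the overlap between the two.

def chapman_estimate(marked_first, caught_second, recaptured):
    """Chapman's nearly unbiased form of the Lincoln-Petersen estimator."""
    return (marked_first + 1) * (caught_second + 1) / (recaptured + 1) - 1

n_hat = chapman_estimate(marked_first=24, caught_second=30, recaptured=9)
effective_area_km2 = 12.5   # assumed effective sampling area around the traps
density = n_hat / effective_area_km2
print(round(n_hat, 1), round(density, 2))  # → 76.5 animals, 6.12 per km^2
```

The effective sampling area is itself an estimate (typically a buffer around the trap array based on home-range size), so the density inherits uncertainty from both terms.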

  8. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the technical solutions have only just passed the prototyping phase and vary a lot. Presence capture cameras still have the same quality issues to tackle as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones, and especially technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras: features like color fidelity, noise removal, resolution, and dynamic range create the base of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors, and new quality features can be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how is the stitching validated? The work describes the quality factors which are still valid in presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work contains considerations of how well current measurement methods can be used with presence capture cameras.

  9. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  10. Imagery-derived modulation transfer function and its applications for underwater imaging

    Science.gov (United States)

    Hou, Weilin; Weidemann, Alan D.; Gray, Deric J.; Fournier, Georges R.

    2007-09-01

    The main challenge of working with underwater imagery results from both the rapid decay of signals due to absorption, which leads to poor signal-to-noise returns, and the blurring caused by strong scattering by the water itself and the constituents within it, especially particulates. The modulation transfer function (MTF) of an optical system gives detailed and precise information regarding the system behavior. Underwater imagery can be better restored with knowledge of the system MTF or the point spread function (PSF), its Fourier-transform equivalent, extending the performance range as well as the information retrieval of underwater electro-optical systems. This is critical in many civilian and military applications, including target and especially mine detection, search and rescue, and diver visibility. This effort utilizes test imagery obtained by the Laser Underwater Camera Imaging Enhancer (LUCIE) from Defence Research and Development Canada (DRDC) during an April-May 2006 trial experiment in Panama City, Florida. Images of a standard resolution chart with various spatial frequencies were taken underwater in a controlled optical environment at varying distances. In-water optical properties during the experiment were measured, including the absorption and attenuation coefficients, particle size distribution, and volume scattering function. The resulting images were preprocessed to enhance the signal-to-noise ratio by averaging multiple frames and to remove uneven illumination at the target plane. The MTF of the medium was then derived from measurements of the above imagery, subtracting the effect of the camera system. PSFs converted from the measured MTFs were then used to restore the blurred imagery by different deconvolution methods. The effects of polarization from source to receiver on the resulting MTFs were examined, and we demonstrate that matching polarizations do enhance system transfer functions. This approach also shows promise in deriving medium optical
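    MTF-based restoration of the kind described above can be sketched in one dimension: blur a "resolution chart" with a known PSF, then deconvolve in the frequency domain with a Wiener-style filter. The Gaussian PSF width and the regularization constant k below are illustrative, not the LUCIE measurements, and the example is noise-free for clarity; with real noisy imagery k would be raised to trade sharpness against noise amplification.

```python
import numpy as np

n = 256
chart = np.zeros(n)
chart[96:160] = 1.0                                  # one bar of a target

# Assumed Gaussian PSF; its Fourier transform gives the OTF (MTF = |OTF|).
psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 4.0) ** 2)
psf /= psf.sum()
otf = np.fft.fft(np.fft.ifftshift(psf))

# Forward model: blur in the frequency domain.
blurred = np.real(np.fft.ifft(np.fft.fft(chart) * otf))

# Wiener-style inverse: regularized division by the OTF.
k = 1e-4
wiener = np.conj(otf) / (np.abs(otf) ** 2 + k)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))

# Deconvolution recovers the bar edges that the blur smeared out.
print(np.abs(restored - chart).mean() < np.abs(blurred - chart).mean())  # True
```

The same division-by-MTF logic underlies the restoration in the paper, with the medium MTF measured from imagery rather than assumed.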

  11. Reliability of Three-Dimensional Angular Kinematics and Kinetics of Swimming Derived from Digitized Video

    Directory of Open Access Journals (Sweden)

    Ross H. Sanders, Tomohiro Gonjo, Carla B. McCabe

    2016-03-01

    Full Text Available The purpose of this study was to explore the reliability of estimating three-dimensional (3D) angular kinematics and kinetics of a swimmer derived from digitized video. Two high-level front crawl swimmers and one high-level backstroke swimmer were recorded by four underwater and two above-water video cameras. One of the front crawl swimmers was digitized at 50 fields per second with a window for smoothing by a 4th-order Butterworth digital filter extending 10 fields beyond the start and finish of the stroke cycle (FC1), while the other front crawl (FC2) and backstroke (BS) swimmers were digitized at 25 frames per second with the window extending five frames beyond the start and finish of the stroke cycle. Each camera view of one stroke cycle was digitized five times, yielding five independent 3D data sets from which whole body centre of mass (CM) yaw, pitch, roll, and torques were derived, together with wrist and ankle moment arms with respect to an inertial reference system with origin at the CM. Coefficients of repeatability ranging from r = 0.93 to r = 0.99 indicated that both digitising sampling rates and extrapolation methods are sufficiently reliable to identify real differences in net torque production. This will enable the sources of rotations about the three axes to be explained in future research. Errors in angular kinematics and displacements of the wrist and ankles relative to range of motion were small for all but the ankles in the X (swimming) direction for FC2, who had a very vigorous kick. To avoid large errors when digitising the ankles of swimmers with vigorous kicks, it is recommended that a marker on the shank be used to calculate the ankle position based on the known displacements between knee, shank, and ankle markers.
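    The smoothing step described above (a 4th-order zero-phase Butterworth filter with the window extended beyond the stroke cycle so that edge transients fall outside the analyzed cycle) can be sketched as follows. The sampling rate matches the 50 fields per second quoted above, but the cutoff frequency, signal, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, cutoff, pad = 50.0, 6.0, 10          # 50 fields/s, assumed 6 Hz cutoff, 10-field pad
rng = np.random.default_rng(1)

t = np.arange(0, 3.0, 1 / fs)            # padded record: cycle plus margins
true = np.sin(2 * np.pi * 1.2 * t)       # a 1.2 Hz "wrist x-coordinate"
noisy = true + 0.05 * rng.standard_normal(t.size)

b, a = butter(4, cutoff / (fs / 2))      # 4th-order low-pass design
smooth = filtfilt(b, a, noisy)           # zero-phase (forward-backward) pass

cycle = slice(pad, t.size - pad)         # discard the extrapolation margins
err_raw = np.abs(noisy[cycle] - true[cycle]).mean()
err_smooth = np.abs(smooth[cycle] - true[cycle]).mean()
print(err_smooth < err_raw)              # True: smoothing recovers the motion
```

Discarding the padded margins matters because `filtfilt`'s start-up transients concentrate at the record ends; extending the digitizing window pushes those transients outside the stroke cycle, which is exactly the design choice the study evaluates.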

  12. Intelligent video surveillance systems and technology

    CERN Document Server

    Ma, Yunqian

    2009-01-01

    From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated

  13. PEOPLE REIDENTIFICATION IN A DISTRIBUTED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    Icaro Oliveira de Oliveira

    2010-06-01

    Full Text Available This paper presents an approach to the object reidentification problem in a distributed camera network system. The reidentification, or reacquisition, problem consists essentially of matching images acquired from different cameras. This work applies to an environment monitored by cameras. The application is important to modern security systems, in which identifying a target's presence in the environment expands the capacity of security agents to act in real time and provides important parameters, such as localization, for each target. We used the targets' interest points and colors as features for reidentification. Satisfactory results were obtained from real experiments on public video datasets and on synthetic images with noise.
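    Color as a reidentification feature can be sketched with normalized histograms compared by histogram intersection. This is a minimal stand-in, not the paper's method: real systems combine such appearance cues with interest-point descriptors, and the "targets" here are synthetic pixel arrays invented for illustration.

```python
import numpy as np

def color_hist(pixels, bins=8):
    """Normalized per-channel histogram of an (N, 3) RGB pixel array in [0, 1]."""
    h = np.concatenate([
        np.histogram(pixels[:, c], bins=bins, range=(0, 1))[0] for c in range(3)
    ]).astype(float)
    return h / h.sum()

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical color distributions."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
# Same red-coated target seen by two cameras, plus a blue-coated distractor.
red_coat      = np.clip(rng.normal([0.8, 0.2, 0.2], 0.05, (500, 3)), 0, 1)
red_coat_cam2 = np.clip(rng.normal([0.8, 0.2, 0.2], 0.05, (500, 3)), 0, 1)
blue_coat     = np.clip(rng.normal([0.2, 0.2, 0.8], 0.05, (500, 3)), 0, 1)

same = intersection(color_hist(red_coat), color_hist(red_coat_cam2))
diff = intersection(color_hist(red_coat), color_hist(blue_coat))
print(same > diff)  # True: the same target matches better across cameras
```

In practice per-camera color calibration (or illumination-invariant color spaces) is needed before such histograms are comparable across cameras.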

  14. TurtleCam: A “Smart” Autonomous Underwater Vehicle for Investigating Behaviors and Habitats of Sea Turtles

    Directory of Open Access Journals (Sweden)

    Kara L. Dodge

    2018-03-01

    Full Text Available Sea turtles inhabiting coastal environments routinely encounter anthropogenic hazards, including fisheries, vessel traffic, pollution, dredging, and drilling. To support mitigation of potential threats, it is important to understand fine-scale sea turtle behaviors in a variety of habitats. Recent advancements in autonomous underwater vehicles (AUVs) now make it possible to directly observe and study the subsurface behaviors and habitats of marine megafauna, including sea turtles. Here, we describe a “smart” AUV capability developed to study free-swimming marine animals, and demonstrate the utility of this technology in a pilot study investigating the behaviors and habitat of leatherback turtles (Dermochelys coriacea). We used a Remote Environmental Monitoring UnitS (REMUS-100) AUV, designated “TurtleCam,” that was modified to locate, follow and film tagged turtles for up to 8 h while simultaneously collecting environmental data. The TurtleCam system consists of a 100-m depth rated vehicle outfitted with a circular Ultra-Short BaseLine receiver array for omni-directional tracking of a tagged animal via a custom transponder tag that we attached to the turtle with two suction cups. The AUV collects video with six high-definition cameras (five mounted in the vehicle nose and one mounted aft), and we added a camera to the animal-borne transponder tag to record behavior from the turtle's perspective. Since behavior is likely a response to habitat factors, we collected concurrent in situ oceanographic data (bathymetry, temperature, salinity, chlorophyll-a, turbidity, currents) along the turtle's track. We tested the TurtleCam system during 2016 and 2017 in a densely populated coastal region off Cape Cod, Massachusetts, USA, where foraging leatherbacks overlap with fixed fishing gear and concentrated commercial and recreational vessel traffic. Here we present example data from one leatherback turtle to demonstrate the utility of TurtleCam. The

  15. Application of an Underwater Robot in Reactor Coolant System

    International Nuclear Information System (INIS)

    Choi, Young-Soo; Jeong, Kyung-Min; Lee, Sung-Uk; Cho, Jai-Wan

    2006-01-01

    Nuclear energy is a major source of the electric energy consumed in Korea. It has advantages over other energy sources: nuclear energy is cost-effective and produces little pollution. But fear of accidents and failures has discouraged the extensive utilization of nuclear energy, so the safety and reliability of nuclear power plants become more important, and inspection and maintenance of components should be carried out continuously. The RCS (reactor coolant system) of a PWR (pressurized water reactor) has the role of cooling down the reactor. Cooling water is injected through the SI (safety injection) nozzle into the cold leg of the primary loop. Thermal sleeves are attached inside the cylindrical SI nozzle to reduce the thermal shock of the cooling water to the weld zone of the safety injection nozzle. Human workers are susceptible to radiation exposure, and a manually handled machine is hard to position because of the complexity of the access path. So, we developed and applied a free-running, tele-operated underwater vehicle to inspect the SI nozzle at close range. A tele-operated robot is useful for inspecting and maintaining the components of nuclear power plants, reducing the radiation exposure of human operators and improving the reliability of operations. The underwater robot comprises two parts: the robot vehicle and the remote control module. The underwater robot vehicle has 4 DOF (degrees of freedom) of mobility and 1 DOF of camera observation. The task of inspecting the internals of the RCS in a nuclear power plant was achieved successfully, and the reliability of maintenance is increased by the aid of the tele-operated robot

  16. Application of an Underwater Robot in Reactor Coolant System

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Young-Soo; Jeong, Kyung-Min; Lee, Sung-Uk; Cho, Jai-Wan [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2006-07-01

    Nuclear energy is a major source of the electric energy consumed in Korea. It has advantages over other energy sources: nuclear energy is cost-effective and produces little pollution. But fear of accidents and failures has discouraged the extensive utilization of nuclear energy, so the safety and reliability of nuclear power plants become more important, and inspection and maintenance of components should be carried out continuously. The RCS (reactor coolant system) of a PWR (pressurized water reactor) has the role of cooling down the reactor. Cooling water is injected through the SI (safety injection) nozzle into the cold leg of the primary loop. Thermal sleeves are attached inside the cylindrical SI nozzle to reduce the thermal shock of the cooling water to the weld zone of the safety injection nozzle. Human workers are susceptible to radiation exposure, and a manually handled machine is hard to position because of the complexity of the access path. So, we developed and applied a free-running, tele-operated underwater vehicle to inspect the SI nozzle at close range. A tele-operated robot is useful for inspecting and maintaining the components of nuclear power plants, reducing the radiation exposure of human operators and improving the reliability of operations. The underwater robot comprises two parts: the robot vehicle and the remote control module. The underwater robot vehicle has 4 DOF (degrees of freedom) of mobility and 1 DOF of camera observation. The task of inspecting the internals of the RCS in a nuclear power plant was achieved successfully, and the reliability of maintenance is increased by the aid of the tele-operated robot.

  17. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-11-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his Lab Out Loud blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing this website and video, I decided to create my own dashboard videos to show to my high school physics students. I have produced and synchronized 12 separate dashboard videos, each about 10 minutes in length, driving around the city of Lawrence, KS, and Douglas County, and posted them to a website.2 Each video reflects different types of driving: both positive and negative accelerations and constant speeds. As shown in Fig. 1, I was able to capture speed, distance, and miles per gallon from my dashboard instrumentation. By linking this with a stopwatch, each of these quantities can be graphed with respect to time. I anticipate and hope that teachers will find these useful in their own classrooms, i.e., having physics students watch the videos and create their own motion maps (distance-time, speed-time) for study.

  18. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. First, it analyzes the current academic discussion on this subject and confronts the differing opinions of both supporters and objectors to the idea that video games can be a full-fledged art form. The second aim of this paper is to analyze the properties that are inherent to video games in order to find the reason why the cultural elite considers video games as i...

  19. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of the quantity of radiation whose accumulation causes the exposure to terminate. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in the radiation density of the exposure, maintaining the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index so as to maintain its detectability and ensure proper centering of the radiation camera image.
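
    The exposure-termination logic described above can be sketched in a few lines. This is an illustrative simplification, not the patented implementation: the per-frame counts, the threshold, and the function name are all invented for the example.

```python
def run_exposure(frame_counts, threshold):
    """Accumulate radiation counts from the monitored area and terminate
    the exposure once a predetermined quantity (threshold) is reached.
    Returns (frames integrated, accumulated quantity); the accumulated
    quantity plays the role of the 'index' used later for calibration."""
    total = 0
    for n, counts in enumerate(frame_counts, start=1):
        total += counts
        if total >= threshold:
            return n, total
    return len(frame_counts), total  # exposure ended below the threshold

# Simulated per-frame counts in the predetermined area:
print(run_exposure([120, 150, 140, 130], threshold=400))  # -> (3, 410)
```

    The returned accumulated quantity can then be used, as in the abstract, to scale the image intensity so the image sensing camera still detects the image despite variations in radiation density.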

  20. Semantic Shot Classification in Sports Video

    Science.gov (United States)

    Duan, Ling-Yu; Xu, Min; Tian, Qi

    2003-01-01

    In this paper, we present a unified framework for semantic shot classification in sports videos. Unlike previous approaches, which focus on clustering by aggregating shots with similar low-level features, the proposed scheme makes use of domain knowledge of a specific sport to perform top-down video shot classification, including identification of video shot classes for each sport, and supervised learning and classification of the given sports video with low-level and middle-level features extracted from the video. It is observed that for each sport we can predefine a small number of semantic shot classes, about 5~10, which cover 90~95% of sports broadcast video. With the supervised learning method, we can map the low-level features to middle-level semantic shot attributes such as dominant object motion (a player), camera motion patterns, and court shape. On the basis of an appropriate fusion of those middle-level attributes, we classify video shots into the predefined shot classes, each of which has a clear semantic meaning. The proposed method has been tested on 4 types of sports video: tennis, basketball, volleyball and soccer. Good classification accuracy of 85~95% has been achieved. With correctly classified sports video shots, further structural and temporal analysis, such as event detection, video skimming and table-of-contents generation, will be greatly facilitated.
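
    To make the supervised mapping from features to predefined shot classes concrete, here is a minimal nearest-centroid sketch. The feature dimensions, class names, and values are invented for illustration; the paper's actual system uses richer low- and middle-level features.

```python
import math

def train_centroids(samples):
    """samples: list of (feature_vector, class_label). Returns the mean
    feature vector (centroid) per predefined shot class."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {c: [v / counts[c] for v in acc] for c, acc in sums.items()}

def classify(centroids, vec):
    """Assign a shot to the class with the nearest centroid."""
    return min(centroids, key=lambda c: math.dist(centroids[c], vec))

# Toy features: (camera-motion magnitude, dominant-object-size ratio)
training = [
    ([0.9, 0.1], "court-view"), ([0.8, 0.2], "court-view"),
    ([0.1, 0.9], "close-up"),   ([0.2, 0.8], "close-up"),
]
centroids = train_centroids(training)
print(classify(centroids, [0.85, 0.15]))  # -> court-view
```

    Real systems would replace the nearest-centroid rule with a stronger supervised classifier, but the structure is the same: labeled shots in, a mapping to a small set of semantic classes out.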

  1. Large Acrylic Spherical Windows In Hyperbaric Underwater Photography

    Science.gov (United States)

    Lones, Joe J.; Stachiw, Jerry D.

    1983-10-01

    Both acrylic plastic and glass are common materials for hyperbaric optical windows. Although glass continues to be used occasionally for small windows, virtually all large viewports are made of acrylic. It is easy to understand the wide use of acrylic when comparing the design properties of this plastic with those of glass; glass windows are also relatively more difficult to fabricate and use. In addition, there are published guides for the design and fabrication of acrylic windows to be used in the hyperbaric environment of hydrospace. Although these procedures for fabricating acrylic windows are somewhat involved, the results are extremely reliable. Acrylic viewports are now fabricated in very large sizes for manned observation or optical-quality instrumentation, as illustrated by the numerous acrylic submersible vehicle hulls for human occupancy currently in operation and a large 360° optical window recently developed for the Walt Disney Circle-Vision underwater camera housing.

  2. Implementation of vision based 2-DOF underwater Manipulator

    Directory of Open Access Journals (Sweden)

    Geng Jinpeng

    2015-01-01

    Full Text Available Manipulators are of vital importance to remotely operated vehicles (ROVs), especially when working in a nuclear reactor pool. A two-degree-of-freedom (2-DOF) underwater manipulator was designed for an ROV composed of a control cabinet, buoyancy module, propellers, depth gauge, sonar, a monocular camera and other attitude sensors. The manipulator can be used to salvage small parts like bolts and nuts to accelerate the progress of an overhaul. It can move in the vertical direction alone through the control of the second joint, and can grab objects using its uniquely designed gripper. A monocular vision based localization algorithm is applied to help the manipulator work independently and intelligently. Finally, a field experiment was conducted in a swimming pool to verify the effectiveness of the manipulator and the monocular vision based algorithm.
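
    One common building block of monocular localization, when the target's real size is known (as it is for standard bolts and nuts), is the pinhole range estimate Z = f·W/w. The numbers and the function name below are illustrative, not taken from the paper.

```python
def monocular_distance(focal_px, real_width_m, pixel_width):
    """Pinhole-camera range estimate: distance Z = f * W / w, where f is
    the focal length in pixels, W the known real width of the part, and
    w its apparent width in the image."""
    return focal_px * real_width_m / pixel_width

# A 24 mm bolt head imaged 60 px wide with an 800 px focal length:
print(round(monocular_distance(800, 0.024, 60), 3))  # -> 0.32 (metres)
```

    With distance recovered this way, the pixel coordinates of the part can be back-projected to a 3D grasp target for the gripper.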

  3. LSST Camera Optics Design

    Energy Technology Data Exchange (ETDEWEB)

    Riot, V J; Olivier, S; Bauman, B; Pratuch, S; Seppala, L; Gilmore, D; Ku, J; Nordby, M; Foss, M; Antilogus, P; Morgado, N

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. Optical design of the camera lenses and filters is integrated in with the optical design of telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  4. Camera traps as sensor networks for monitoring animal communities

    OpenAIRE

    Kays, R.W.; Kranstauber, B.; Jansen, P.A.; Carbone, C.; Rowcliffe, M.; Fountain, T.; Tilak, S.

    2009-01-01

    Studying animal movement and distribution is of critical importance to addressing environmental challenges including invasive species, infectious diseases, climate and land-use change. Motion sensitive camera traps offer a visual sensor to record the presence of a species at a location, recording their movement in the Eulerian sense. Modern digital camera traps that record video present new analytical opportunities, but also new data management challenges. This paper describes our experience ...

  5. High-speed photography of underwater sympathetic detonation of high explosives

    Science.gov (United States)

    Kubota, Shiro; Shimada, Hideki; Matsui, Kikuo; Liu, Zhi-Yue; Itoh, Shigeru

    2001-04-01

    The donor and the acceptor charges are arranged in water with an interval between them. The donor charge has a cylindrical geometry, 30 mm in diameter and 50 mm long. The acceptor has a disk form, 100 mm in diameter and 10 or 5 mm thick. Composition B is used for both the donor and acceptor charges. The propagation processes of underwater shock waves from the top end of the acceptor charge along the axis of the charges are recorded with an image converter camera for intervals of 10, 15, 20 and 25 mm. In the case of the 10 mm thick acceptor charge, the velocities of the underwater shock wave are almost the same up to an interval of 20 mm; at 25 mm, however, the underwater shock wave has a remarkably low velocity compared to the other cases. At the 20 mm interval, the velocity of the underwater shock wave for the 5 mm thick acceptor is slower than that for the 10 mm one. Furthermore, numerical simulations are conducted, using the phenomenological reaction rate law for high explosives proposed by Lee and Tarver. The results of the optical measurement and the numerical simulation show good agreement.

  6. Model based image restoration for underwater images

    Science.gov (United States)

    Stephan, Thomas; Frühberger, Peter; Werling, Stefan; Heizmann, Michael

    2013-04-01

    The inspection of offshore parks, dam walls and other infrastructure under water is expensive and time consuming, because such constructions must be inspected manually by divers. Underwater structures have to be examined visually to find small cracks, spallings or other deficiencies. Automation of underwater inspection depends on established waterproof imaging systems. Most underwater imaging systems are based on acoustic sensors (sonar). The disadvantage of such acoustic systems is the loss of the complete visual impression: all information embedded in texture and surface reflectance gets lost. Acoustic sensors are therefore mostly insufficient for these kinds of visual inspection tasks. Imaging systems based on optical sensors, by contrast, offer enormous potential for underwater applications, ranging from the inspection of underwater structures via marine biological applications through to exploration of the seafloor. The reason for the lack of established optical systems for underwater inspection tasks lies in the technical difficulties of underwater image acquisition and processing: poor lighting and highly degraded images make computational postprocessing absolutely essential.
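
    As one tiny example of the postprocessing such degraded imagery needs, here is a per-channel contrast stretch. This is not the model-based restoration of the paper, only a sketch of why computational postprocessing helps; pixel values are 8-bit (0-255) and the data are invented.

```python
def stretch_channel(pixels):
    """Linearly stretch one image channel to the full 0-255 range,
    recovering contrast lost to underwater attenuation and scattering."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                 # flat channel: nothing to stretch
        return list(pixels)
    return [round(255 * (p - lo) / (hi - lo)) for p in pixels]

# A murky, low-contrast channel spread over the full dynamic range:
print(stretch_channel([60, 80, 100]))  # -> [0, 128, 255]
```

    Model-based methods go much further, inverting a physical model of attenuation and scattering rather than blindly rescaling intensities.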

  7. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped significantly, opening up...

  8. Videography-Based Unconstrained Video Analysis.

    Science.gov (United States)

    Li, Kang; Li, Sheng; Oh, Sangmin; Fu, Yun

    2017-05-01

    Video analysis and understanding play a central role in visual intelligence. In this paper, we aim to analyze unconstrained videos, by designing features and approaches to represent and analyze videography styles in the videos. Videography denotes the process of making videos. The unconstrained videos are defined as the long duration consumer videos that usually have diverse editing artifacts and significant complexity of contents. We propose to construct a videography dictionary, which can be utilized to represent every video clip as a sequence of videography words. In addition to semantic features, such as foreground object motion and camera motion, we also incorporate two novel interpretable features to characterize videography, including the scale information and the motion correlations. We then demonstrate that, by using statistical analysis methods, the unique videography signatures extracted from different events can be automatically identified. For real-world applications, we explore the use of videography analysis for three types of applications, including content-based video retrieval, video summarization (both visual and textual), and videography-based feature pooling. In the experiments, we evaluate the performance of our approach and other methods on a large-scale unconstrained video dataset, and show that the proposed approach significantly benefits video analysis in various ways.
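
    The core idea of representing a clip as a sequence of "videography words" can be sketched as vector quantization against a small dictionary of prototype feature vectors. The dictionary entries, features, and values below are invented for illustration; the paper learns its dictionary from data.

```python
import math

# Hypothetical dictionary of videography words: each prototype is a
# per-shot feature vector, e.g. (camera-motion magnitude, zoom rate).
DICTIONARY = {
    "static": [0.0, 0.1],
    "pan":    [0.9, 0.1],
    "zoom":   [0.1, 0.9],
}

def to_word(feature):
    """Quantize one shot's feature vector to its nearest dictionary word."""
    return min(DICTIONARY, key=lambda w: math.dist(DICTIONARY[w], feature))

def videography_sequence(shot_features):
    """Represent a whole clip as a sequence of videography words."""
    return [to_word(f) for f in shot_features]

clip = [[0.05, 0.1], [0.85, 0.2], [0.2, 0.8]]
print(videography_sequence(clip))  # -> ['static', 'pan', 'zoom']
```

    Once every clip is such a word sequence, standard statistical tools (n-gram statistics, histograms for feature pooling) can be applied to retrieval and summarization, as the paper does.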

  9. 4th Pacific Rim Underwater Acoustics Conference

    CERN Document Server

    Xu, Wen; Cheng, Qianliu; Zhao, Hangfang

    2016-01-01

    These proceedings are a collection of 16 selected scientific papers and reviews by distinguished international experts that were presented at the 4th Pacific Rim Underwater Acoustics Conference (PRUAC), held in Hangzhou, China in October 2013. The topics discussed at the conference include internal wave observation and prediction; environmental uncertainty and coupling to sound propagation; environmental noise and ocean dynamics; dynamic modeling in acoustic fields; acoustic tomography and ocean parameter estimation; time reversal and matched field processing; underwater acoustic localization and communication as well as measurement instrumentations and platforms. These proceedings provide insights into the latest developments in underwater acoustics, promoting the exchange of ideas for the benefit of future research.

  10. A new technique for robot vision in autonomous underwater vehicles using the color shift in underwater imaging

    Science.gov (United States)

    2017-06-01

    Report documentation summary: "A New Technique for Robot Vision in Autonomous Underwater Vehicles Using the Color Shift in Underwater Imaging," thesis by Jake A. Jones, June 2017, 75 pages. Keywords: autonomous underwater vehicles (AUVs), robot vision, autonomy, visual odometry, underwater color shift, optical properties of water.

  11. Use of video taping during simulator training

    International Nuclear Information System (INIS)

    Helton, M.; Young, P.

    1987-01-01

    Using a video camera for training is not a new idea; video is used throughout the country for training in such areas as computers, car repair, music and even in non-technical areas such as fishing. Reviewing a taped simulator training session will aid students in their job performance regardless of the position they hold in their organization. If the student is to be examined on simulator performance, video will aid this training in many different ways.

  12. Enhanced Video-Oculography System

    Science.gov (United States)

    Moore, Steven T.; MacDougall, Hamish G.

    2009-01-01

    A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.

  13. The WEBERSAT camera - An inexpensive earth imaging system

    Science.gov (United States)

    Jackson, Stephen; Raetzke, Jeffrey

    WEBERSAT is a 27 pound LEO satellite launched in 1990 into a 500 mile polar orbit. One of its payloads is a low cost CCD color camera system developed by engineering students at Weber State University. The camera is a modified Canon CI-10 with a 25 mm lens, automatic iris, and 780 x 490 pixel resolution. The iris range control potentiometer was made programmable; a 10.7 MHz digitization clock, fixed focus support, and solid tantalum capacitors were added. Camera output signals, composite video, red, green, blue, and the digitization clock are fed to a flash digitizer, where they are processed for storage in RAM. Camera control commands are stored and executed via the onboard computer. The CCD camera has successfully imaged meteorological features of the earth, land masses, and a number of astronomical objects.

  14. Lights, Camera, Reflection!

    Science.gov (United States)

    Mourlam, Daniel

    2013-01-01

    There are many ways to critique teaching, but few are more effective than video. Personal reflection through the use of video allows one to see what really happens in the classrooms--good and bad--and provides a visual path forward for improvement, whether it be in one's teaching, work with a particular student, or learning environment. This…

  15. Jellyfish inspired underwater unmanned vehicle

    Science.gov (United States)

    Villanueva, Alex; Bresser, Scott; Chung, Sanghun; Tadesse, Yonas; Priya, Shashank

    2009-03-01

    An unmanned underwater vehicle (UUV) was designed inspired by the form and functionality of a jellyfish. These natural organisms were chosen as bio-inspiration for a multitude of reasons, including their efficiency of locomotion, lack of natural predators, proper form and shape to incorporate a payload, and varying range of sizes. The structure consists of a hub body surrounded by bell segments and a microcontroller-based drive system. The locomotion of the UUV was achieved by shape memory alloy "Biometal Fiber" actuators, which possess large strain and blocking force with adequate response time. The main criterion in the design of the UUV was the use of low-profile shape memory alloy actuators which act as artificial muscles. In this manuscript, we discuss the design of two jellyfish prototypes and present experimental results illustrating their performance and power consumption.

  16. stil113_0401r -- Point coverage of locations of still frames extracted from video imagery which depict sediment types

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  17. Underwater Grass Comeback Helps Chesapeake Bay

    Science.gov (United States)

    The fortified Susquehanna Flats, the largest bed of underwater grasses in the Chesapeake Bay, seems able to withstand a major weather punch. Its resilience is contributing to an overall increase in the Bay’s submerged aquatic vegetation.

  18. Underwater Object Segmentation Based on Optical Features

    Directory of Open Access Journals (Sweden)

    Zhe Chen

    2018-01-01

    Full Text Available Underwater optical environments are seriously affected by various optical inputs, such as artificial light, sky light, and ambient scattered light. The latter two can block underwater object segmentation tasks, since they inhibit the emergence of objects of interest and distort image information, while artificial light can contribute to segmentation. Artificial light often focuses on the object of interest, and, therefore, we can initially identify the region of target objects if the collimation of artificial light is recognized. Based on this concept, we propose an optical feature extraction, calculation, and decision method to identify the collimated region of artificial light as a candidate object region. Then, the second phase employs a level set method to segment the objects of interest within the candidate region. This two-phase structure largely removes background noise and highlights the outline of underwater objects. We test the performance of the method with diverse underwater datasets, demonstrating that it outperforms previous methods.
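
    The first phase of the two-phase structure, finding the candidate region lit by collimated artificial light, can be caricatured as bright-region extraction by thresholding. This is only an illustration of the idea; the paper's actual optical-feature extraction and decision method is far more involved, and the grid, threshold, and function name here are invented.

```python
def candidate_region(gray, threshold):
    """Return (row, col) pixels brighter than threshold, i.e. the region
    where focused artificial light (and hence the object) likely falls."""
    return [
        (r, c)
        for r, row in enumerate(gray)
        for c, v in enumerate(row)
        if v > threshold
    ]

# A tiny grayscale frame: a dark background with a brightly lit patch.
gray = [
    [10, 12, 11],
    [13, 200, 210],
    [11, 205, 12],
]
print(candidate_region(gray, 128))  # -> [(1, 1), (1, 2), (2, 1)]
```

    The second phase of the paper then runs a level set segmentation only inside such a candidate region, which is what removes most of the background noise.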

  19. Sensor network architectures for monitoring underwater pipelines.

    Science.gov (United States)

    Mohamed, Nader; Jawhar, Imad; Al-Jaroodi, Jameela; Zhang, Liren

    2011-01-01

    This paper develops and compares different sensor network architecture designs that can be used for monitoring underwater pipeline infrastructures. These architectures are underwater wired sensor networks, underwater acoustic wireless sensor networks, RF (radio frequency) wireless sensor networks, integrated wired/acoustic wireless sensor networks, and integrated wired/RF wireless sensor networks. The paper also discusses the reliability challenges and enhancement approaches for these network architectures. The reliability evaluation, characteristics, advantages, and disadvantages among these architectures are discussed and compared. Three reliability factors are used for the discussion and comparison: the network connectivity, the continuity of power supply for the network, and the physical network security. In addition, the paper also develops and evaluates a hierarchical sensor network framework for underwater pipeline monitoring.
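
    A back-of-the-envelope way to compare the architectures on the first factor, network connectivity: for a linear pipeline of relaying nodes, end-to-end connectivity is the product of per-link reliabilities. The per-link values below are purely illustrative, not figures from the paper.

```python
def chain_connectivity(link_reliabilities):
    """End-to-end connectivity of a linear relay chain, assuming
    independent links: the product of the per-link reliabilities."""
    p = 1.0
    for r in link_reliabilities:
        p *= r
    return p

# Illustrative comparison: 10 hops along a pipeline segment.
wired    = chain_connectivity([0.999] * 10)  # wired links
acoustic = chain_connectivity([0.95] * 10)   # acoustic wireless links
print(round(wired, 3), round(acoustic, 3))   # -> 0.99 0.599
```

    The rapid decay of the product explains why the paper's hierarchical and integrated wired/wireless designs matter: redundancy and shorter wireless segments keep end-to-end connectivity from collapsing over long pipelines.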

  20. Educational Applications for Digital Cameras.

    Science.gov (United States)

    Cavanaugh, Terence; Cavanaugh, Catherine

    1997-01-01

    Discusses uses of digital cameras in education. Highlights include advantages and disadvantages, digital photography assignments and activities, camera features and operation, applications for digital images, accessory equipment, and comparisons between digital cameras and other digitizers. (AEF)

  1. Underwater photogrammetry successful in Spain and France

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    Underwater photogrammetry has been used to measure distortions in fuel assembly alignment pins in the upper internals of the Almaraz and Dampierre PWRs. Photogrammetry is a three-dimensional precision measurement method using photographic techniques for the on-site measurement phase. On the strength of the operations at the two PWRs, underwater photogrammetry is now considered a practical and effective technique for dimensional inspection at nuclear plants. (U.K.)

  2. Underwater noise levels in UK waters

    OpenAIRE

    Merchant, Nathan D.; Brookes, Kate L.; Faulkner, Rebecca C.; Bicknell, Anthony W. J.; Godley, Brendan J.; Witt, Matthew J.

    2016-01-01

    Underwater noise from human activities appears to be rising, with ramifications for acoustically sensitive marine organisms and the functioning of marine ecosystems. Policymakers are beginning to address the risk of ecological impact, but are constrained by a lack of data on current and historic noise levels. Here, we present the first nationally coordinated effort to quantify underwater noise levels, in support of UK policy objectives under the EU Marine Strategy Framework Directive (MSFD). ...

  3. Underwater gait analysis in Parkinson's disease.

    Science.gov (United States)

    Volpe, Daniele; Pavan, Davide; Morris, Meg; Guiotto, Annamaria; Iansek, Robert; Fortuna, Sofia; Frazzitta, Giuseppe; Sawacha, Zimi

    2017-02-01

    Although hydrotherapy is one of the physical therapies adopted to optimize gait rehabilitation in people with Parkinson's disease, quantitative measurement of gait-related outcomes has not yet been provided. This work aims to document the gait improvements in a group of parkinsonians after a hydrotherapy program through 2D and 3D underwater and on-land gait analysis. Thirty-four parkinsonians and twenty-two controls were enrolled, divided into two cohorts. In the first, two groups of patients underwent underwater or land-based walking training; controls underwent underwater walking training. Pre-treatment 2D underwater and on-land gait analyses were performed, together with post-treatment on-land gait analysis. Considering that the current literature documents reduced movement amplitude in parkinsonians across all lower limb joints in all movement planes, 3D underwater and on-land gait analysis was performed on a second cohort of subjects (10 parkinsonians and 10 controls) who underwent underwater gait training. Baseline on-land 2D and 3D gait analysis in parkinsonians showed shorter stride length and slower speed than controls, in agreement with previous findings. Comparison between underwater and on-land gait analysis showed reductions in stride length, cadence and speed in both parkinsonians and controls. Although patients who underwent underwater treatment exhibited significant changes in spatiotemporal parameters and sagittal-plane lower limb kinematics, 3D gait analysis documented a significant (p<0.05) improvement in all movement planes. These data deserve attention for research directions promoting the optimal recovery and maintenance of walking ability. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
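
    The spatiotemporal parameters compared above (stride length, cadence, speed) are derived from gait-analysis event data in a standard way. The sketch below shows that derivation from heel-strike times and positions of one limb; the data and function name are illustrative only.

```python
def gait_parameters(strike_times_s, strike_positions_m):
    """Spatiotemporal gait parameters from successive heel strikes of
    one limb: stride length (m), cadence (steps/min, 2 steps per
    stride), and walking speed (m/s)."""
    strides = len(strike_times_s) - 1
    duration = strike_times_s[-1] - strike_times_s[0]
    distance = strike_positions_m[-1] - strike_positions_m[0]
    stride_length = distance / strides
    cadence = 120.0 * strides / duration
    speed = distance / duration
    return stride_length, cadence, speed

# Two strides of an illustrative subject:
print([round(x, 2) for x in gait_parameters([0.0, 1.2, 2.4],
                                            [0.0, 1.1, 2.2])])
# -> [1.1, 100.0, 0.92]
```

    Shorter stride length at comparable cadence, as reported for the parkinsonian group at baseline, shows up directly as reduced speed in this computation.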

  4. Affordable underwater wireless optical communication using LEDs

    Science.gov (United States)

    Pilipenko, Vladimir; Arnon, Shlomi

    2013-09-01

    In recent years the need for high data rate underwater wireless communication (WC) has increased. Nowadays, the conventional technology for underwater communication is acoustic. However, the maximum data rate that acoustic technology can provide is a few kilobits per second. On the other hand, emerging applications such as underwater imaging, networks of sensors and swarms of underwater vehicles require much faster data rates. As a result, underwater optical WC, which can provide much higher data rates, has been proposed as an alternative means of communication. In addition to high data rates, affordability has become an important development requirement. The outcome of these requirements is a new system design based on off-the-shelf components such as blue and green light emitting diodes (LEDs), since LEDs offer solutions characterized by low cost, high efficiency, reliability and compactness. However, there are some challenges to be met when incorporating LEDs as part of the optical transmitter, such as low modulation rates and nonlinearity. In this paper, we review the main challenges facing the incorporation of LEDs as an integral part of underwater WC systems and propose some techniques to mitigate the LED limitations in order to achieve high data rate communication.

  5. An underwater optical wireless communication network

    Science.gov (United States)

    Arnon, Shlomi

    2009-08-01

    The growing need for underwater observation and sub-sea monitoring systems has stimulated considerable interest in advancing the enabling technologies of underwater wireless communication and underwater sensor networks. This communication technology is expected to play an important role in investigating climate change, in monitoring biological, bio-geochemical, evolutionary and ecological changes in the sea, ocean and lake environments and in helping to control and maintain oil production facilities and harbors using unmanned underwater vehicles (UUVs), submarines, ships, buoys, and divers. However, the present technology of underwater acoustic communication cannot provide the high data rate required to investigate and monitor these environments and facilities. Optical wireless communication has been proposed as the best alternative to meet this challenge. We present models of three kinds of optical wireless communication links: a) a line-of-sight link, b) a modulating retro-reflector link and c) a reflective link, all of which can provide the required data rate. We analyze the link performance based on these models. From the analysis, it is clear that as the water absorption increases, the communication performance decreases dramatically for the three link types. However, by using the scattered light it was possible to mitigate this decrease in some cases. We conclude from the analysis that a high data rate underwater optical wireless network is a feasible solution for emerging applications such as UUV to UUV links and networks of sensors, and extended ranges in these applications could be achieved by applying a multi-hop concept.
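
    The dramatic sensitivity to water absorption noted above comes from the exponential extinction of light with range (Beer-Lambert law): received power falls as exp(-c·d), with c the beam extinction coefficient. The sketch below ignores geometry and pointing losses and uses textbook-scale illustrative coefficients, not the paper's full link models.

```python
import math

def received_power(p_tx_w, c_per_m, distance_m):
    """Beer-Lambert attenuation of a line-of-sight optical link:
    P_rx = P_tx * exp(-c * d), with c the extinction coefficient (1/m)."""
    return p_tx_w * math.exp(-c_per_m * distance_m)

clear  = received_power(1.0, 0.15, 20)  # clear ocean water, 20 m range
turbid = received_power(1.0, 2.0, 20)   # turbid harbor water, 20 m range
print(round(clear, 3), turbid < 1e-10)  # -> 0.05 True
```

    The collapse of the turbid-water figure illustrates why the paper proposes multi-hop links: several short hops keep each per-hop exponent small instead of paying exp(-c·d) over the full range.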

  6. The laser scanning camera

    International Nuclear Information System (INIS)

    Jagger, M.

    The prototype development of a novel lensless camera is reported which utilises a laser beam scanned in a raster by means of orthogonal vibrating mirrors to illuminate the field of view. Laser light reflected from the scene is picked up by a conveniently sited photosensitive device and used to modulate the brightness of a T.V. display scanned in synchronism with the moving laser beam, hence producing a T.V. image of the scene. The camera, which needs no external lighting system, can act in a wide-angle mode or, by varying the size and position of the raster, can be made to zoom in to view in detail any object within a 40° overall viewing angle. The resolution and performance of the camera are described and a comparison of these aspects is made with conventional T.V. cameras. (author)

  7. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, briefly describing the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  8. Underwater itineraries at Egadi Islands: Marine biodiversity protection through actions for sustainable tourism

    International Nuclear Information System (INIS)

    Cocito, Silvia; Delbono, Ivana; Barsanti, Mattia; Di Nallo, Giuseppina; Lombardi, Chiara; Peirano, Andrea

    2015-01-01

    Sustainable tourism is recognized as a high priority for environmental and biological conservation. Promoting protection of local biological and environmental resources is a useful action for the conservation of marine biodiversity in Marine Protected Areas and for stimulating awareness among residents and visitors. The publication of two books describing 28 selected underwater itineraries for divers and snorkelers, and a web site with underwater videos, represent concrete actions by ENEA for the promotion of sustainable tourism at the Marine Protected Area of the Egadi Islands (Sicily, Italy). 177 species were recorded at Favignana, and around the same number at the Marettimo and Levanzo islands. Among those species, some are important for conservation and protection (e.g. Astrospartus mediterraneus), some are rare (e.g. Antipathella subpinnata) and of high aesthetic value (e.g. Paramuricea clavata, Savalia savaglia), while others are invasive (e.g. Caulerpa cylindracea).

  9. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    International Nuclear Information System (INIS)

    Anderson, Robert J.

    2014-01-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
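
    The first step of the pipeline described above, separating moving targets from background imagery, can be sketched with plain lists standing in for camera frames. The frames, threshold, and function name are illustrative; real systems maintain an adaptive background model rather than a single static frame.

```python
def moving_pixels(frame, background, threshold):
    """Background subtraction: pixels whose intensity differs from the
    background estimate by more than threshold are flagged as moving."""
    return [
        (r, c)
        for r, row in enumerate(frame)
        for c, v in enumerate(row)
        if abs(v - background[r][c]) > threshold
    ]

background = [[10, 10, 10], [10, 10, 10]]
frame      = [[10, 10, 10], [10, 200, 10]]  # a target enters bottom-centre
print(moving_pixels(frame, background, 30))  # -> [(1, 1)]
```

    Per-camera detections like these are what the project then fuses across cameras into tracked targets rendered in a single 3D interactive view.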

  10. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  11. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Robotic and Security Systems Dept.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
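    The per-stream motion-extraction step these reports describe can be illustrated with a simple frame-differencing mask. This is a sketch only; the abstract does not disclose the LDRD system's actual analytics, and the threshold below is a hypothetical parameter:

```python
import numpy as np

def moving_target_mask(prev_frame, frame, threshold=25):
    """Flag pixels whose intensity changed by more than `threshold`
    between consecutive frames -- a crude stand-in for the per-stream
    motion detection that feeds a multi-camera tracker."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold
```

    A real system would clean the mask (morphological filtering, connected components) and hand target centroids, rather than raw pixels, to the cross-camera tracking and hand-off logic.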

  12. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce images superior to those of conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  13. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used re...... such as the circular camera movement. Keywords: embodied perception, embodied style, explicit narration, interpretation, style pattern, television style...

  14. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from ¹⁶N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with ¹⁶N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  15. What Video Styles can do for User Research

    DEFF Research Database (Denmark)

    Blauhut, Daniela; Buur, Jacob

    2009-01-01

    the video camera actually plays in studying people and establishing design collaboration still exists. In this paper we argue that traditional documentary film approaches like Direct Cinema and Cinéma Vérité show that a purely observational approach may not be most valuable for user research and that video...

  16. Cellphones in Classrooms Land Teachers on Online Video Sites

    Science.gov (United States)

    Honawar, Vaishali

    2007-01-01

    Videos of teachers that students taped in secrecy are all over online sites like YouTube and MySpace. Angry teachers, enthusiastic teachers, teachers clowning around, singing, and even dancing are captured, usually with camera phones, for the whole world to see. Some students go so far as to create elaborately edited videos, shot over several…

  17. Human recognition in a video network

    Science.gov (United States)

    Bhanu, Bir

    2009-10-01

    Video networks are an emerging interdisciplinary field with significant and exciting scientific and technological challenges. They hold great promise for solving many real-world problems and enabling a broad range of applications, including smart homes, video surveillance, environment and traffic monitoring, elderly care, intelligent environments, and entertainment in public and private spaces. This paper provides an overview of the design of a wireless video network as an experimental environment, camera selection, hand-off and control, and anomaly detection. It addresses challenging questions for individual identification using gait and face at a distance and presents new techniques and their comparison for robust identification.

  18. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Gagliano, L.; Bryan, T.; MacLeod, T.

    On-orbit small debris tracking and characterization is a technical gap in current national Space Situational Awareness that must be closed to safeguard orbital assets and crew; untracked debris poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on board a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in proximity of the International Space Station, demonstrating on-orbit (in-situ) optical tracking of various sized objects versus ground RADAR tracking and small OD models. The cameras are based on Flight Proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and on military targeting cameras, and the twin-camera arrangement provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.
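    The ranging benefit of the twin-camera arrangement follows from the standard stereo disparity relation. A minimal sketch, under an idealized parallel-camera geometry (the focal length, baseline, and disparity values used here are hypothetical, not the flight sensor's):

```python
def stereo_range_m(focal_px, baseline_m, disparity_px):
    """Depth from disparity for an idealized parallel stereo pair:
    Z = f * B / d, with f in pixels, B in metres, d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

    For example, a 1000-pixel focal length, a 0.5 m baseline, and a 10-pixel disparity place the object at 50 m; the steep growth of range error as disparity shrinks is why long focal lengths and a rigid baseline matter for debris tracking.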

  19. An acoustically controlled tetherless underwater vehicle for installation and maintenance of neutrino detectors in the deep ocean

    CERN Document Server

    Ballou, P J

    1997-01-01

    The task of installing and servicing high energy neutrino detectors in the deep ocean from a surface support vessel is problematic using conventional tethered systems. An array of multiple detector strings rising 500 m from the ocean floor, and forming a grid with 50 m spacing between the strings, presents a substantial entanglement hazard for equipment cables deployed from the surface. Such tasks may be accomplished with fewer risks using a tetherless underwater remotely operated vehicle that has a local acoustic telemetry link to send control commands and sensor data between the vehicle and a stationary hydrophone suspended above or just outside the perimeter of the work site. The Phase I effort involves the development of an underwater acoustic telemetry link for vehicle control and sensor feedback, the evaluation of video compression methods for real-time acoustic transmission of video through the water, and the defining of local control routines on board the vehicle to allow it to perform certain basic m...

  20. Opinion rating of comparison photographs of television pictures from CCD cameras under irradiation

    International Nuclear Information System (INIS)

    Reading, V.M.; Dumbreck, A.A.

    1991-01-01

    As part of the development of a general method of testing the effects of gamma radiation on CCD television cameras, this is a report of an experimental study on the optimisation of still photographic representation of video pictures recorded before and during camera irradiation. (author)

  1. Limits on surveillance: frictions, fragilities and failures in the operation of camera surveillance.

    NARCIS (Netherlands)

    Dubbeld, L.

    2004-01-01

    Public video surveillance tends to be discussed in either utopian or dystopian terms: proponents maintain that camera surveillance is the perfect tool in the fight against crime, while critics argue that the use of security cameras is central to the development of a panoptic, Orwellian surveillance

  2. Declarative camera control for automatic cinematography

    Energy Technology Data Exchange (ETDEWEB)

    Christianson, D.B.; Anderson, S.E.; Li-wei He [Univ. of Washington, Seattle, WA (United States)] [and others

    1996-12-31

    Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically-placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.

  3. Deployable Wireless Camera Penetrators

    Science.gov (United States)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators also can be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras to inspect the surfaces of planetary bodies use both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude, with the boost in transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators also can be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking at a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm3. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator.
A low-volume array of such penetrator cameras could be deployed from an

  4. A novel super-resolution camera model

    Science.gov (United States)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as a piezoelectric ceramic actuator is placed in the camera. By controlling the driving device, a set of continuous low-resolution (LR) images can be obtained and stored instantly, reflecting both the randomness of the displacements and the real-time performance of the storage. The low-resolution image sequences carry different redundant information and particular prior information, so it is possible to restore a super-resolution image faithfully and effectively. A sampling method is used to derive the reconstruction principle of super-resolution, which analyzes the theoretically possible degree of resolution improvement. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements, modeling the unknown high-resolution image, motion parameters, and unknown model parameters in one hierarchical Bayesian framework. Utilizing sub-pixel registration, a super-resolution image of the scene can be reconstructed. Reconstruction results from 16 images show that this camera model can increase the image resolution by a factor of 2, obtaining higher-resolution images at currently available hardware levels.
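    The underlying fusion principle, many sub-pixel-shifted LR frames sampled onto a finer grid, can be illustrated with a naive shift-and-add sketch. The paper's learning-based and variational Bayesian reconstructions are far more sophisticated; here the per-frame `shifts` are assumed already known from registration:

```python
import numpy as np

def shift_and_add_sr(lr_frames, shifts, scale):
    """Naive shift-and-add super-resolution sketch.
    lr_frames: list of HxW arrays; shifts: per-frame (dy, dx) sub-pixel
    offsets in LR pixels (assumed known); scale: integer upscale factor.
    Each LR sample is scattered to its nearest HR grid cell and averaged;
    duplicate hits within one frame are not accumulated (naive)."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        ys = np.round((np.arange(h) + dy) * scale).astype(int) % (h * scale)
        xs = np.round((np.arange(w) + dx) * scale).astype(int) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1  # avoid division by zero on unvisited HR cells
    return acc / cnt
```

    Cells the LR grids never hit stay empty in this sketch; a practical pipeline would interpolate them and then deblur, which is exactly where Bayesian priors earn their keep.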

  5. Reliability of video-based identification of footstrike pattern and video time frame at initial contact in recreational runners

    DEFF Research Database (Denmark)

    Damsted, Camma; Larsen, L H; Nielsen, R.O.

    2015-01-01

    INTRODUCTION: Two-dimensional video recordings are used in clinical practice to identify footstrike pattern. However, knowledge about the reliability of this method of identification is limited. OBJECTIVE: To evaluate intra- and inter-rater reliability of visual identification of footstrike pattern...... and video time frame at initial contact during treadmill running using two-dimensional (2D) video recordings. METHODS: Thirty-one recreational runners were recorded twice, 1 week apart, with a high-speed video camera. Two blinded raters evaluated each video twice with an interval of at least 14 days...

  6. Results of the IMO Video Meteor Network - June 2017, and effective collection area study

    Science.gov (United States)

    Molau, Sirko; Crivello, Stefano; Goncalves, Rui; Saraiva, Carlos; Stomeo, Enrico; Kac, Javor

    2017-12-01

    Over 18000 meteors were recorded by the IMO Video Meteor Network cameras in more than 7100 hours of observing time in 2017 June. The June Bootids were not detectable this year. Nearly 50 Daytime Arietids were recorded in 2017, and a first flux density profile for this shower in the optical domain is calculated, using video data from the period 2011-2017. The effective collection area of video cameras is discussed in more detail.

  7. Using Photogrammetry to Estimate Tank Waste Volumes from Video

    International Nuclear Information System (INIS)

    Field, Jim G.

    2013-01-01

    Washington River Protection Solutions (WRPS) contracted with HiLine Engineering and Fabrication, Inc. to assess the accuracy of photogrammetry tools as compared to video Camera/CAD Modeling System (CCMS) estimates. This test report documents the results of using photogrammetry to estimate the volume of waste in tank 241-C-104 from post-retrieval videos and results using photogrammetry to estimate the volume of waste piles in the CCMS test video.

  8. Using Photogrammetry to Estimate Tank Waste Volumes from Video

    Energy Technology Data Exchange (ETDEWEB)

    Field, Jim G. [Washington River Protection Solutions, LLC, Richland, WA (United States)

    2013-03-27

    Washington River Protection Solutions (WRPS) contracted with HiLine Engineering & Fabrication, Inc. to assess the accuracy of photogrammetry tools as compared to video Camera/CAD Modeling System (CCMS) estimates. This test report documents the results of using photogrammetry to estimate the volume of waste in tank 241-C-104 from post-retrieval videos and results using photogrammetry to estimate the volume of waste piles in the CCMS test video.
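    Once photogrammetry has produced a gridded surface model of a waste pile, the volume estimate itself reduces to summing cell heights over the grid. A minimal sketch (the grid resolution and cell size here are hypothetical, not taken from the test report):

```python
import numpy as np

def pile_volume_m3(heights_m, cell_area_m2):
    """Volume of a pile from a gridded height map: sum of
    (cell height x cell footprint area) over every grid cell."""
    return float(np.sum(heights_m) * cell_area_m2)
```

    With a 10 x 10 grid of 0.5 m x 0.5 m cells all 2 m tall, this yields 50 cubic metres; real tank-waste surfaces are irregular, which is why a dense photogrammetric reconstruction beats manual depth soundings.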

  9. Summarization of Surveillance Video Sequences Using Face Quality Assessment

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.; Rahmati, Mohammad

    2011-01-01

    Continuously operating surveillance cameras in public places, such as airports and banks, produce a huge amount of video data. Faces in such videos can be extracted in real time. However, most of these detected faces are either redundant or useless. Redundant information adds computational costs to facial...... technique for this purpose. The summarized results of this technique have been used in three different facial analysis systems and the experimental results on real video sequences are promising....

  10. Reliability of Three-Dimensional Angular Kinematics and Kinetics of Swimming Derived from Digitized Video.

    Science.gov (United States)

    Sanders, Ross H; Gonjo, Tomohiro; McCabe, Carla B

    2016-03-01

    The purpose of this study was to explore the reliability of estimating three-dimensional (3D) angular kinematics and kinetics of a swimmer derived from digitized video. Two high-level front crawl swimmers and one high-level backstroke swimmer were recorded by four underwater and two above water video cameras. One of the front crawl swimmers was digitized at 50 fields per second with a window for smoothing by a 4th-order Butterworth digital filter extending 10 fields beyond the start and finish of the stroke cycle (FC1), while the other front crawl (FC2) and backstroke (BS) swimmers were digitized at 25 frames per second with the window extending five frames beyond the start and finish of the stroke cycle. Each camera view of one stroke cycle was digitized five times, yielding five independent 3D data sets from which whole body centre of mass (CM) yaw, pitch, roll, and torques were derived together with wrist and ankle moment arms with respect to an inertial reference system with origin at the CM. Coefficients of repeatability ranging from r = 0.93 to r = 0.99 indicated that both digitising sampling rates and extrapolation methods are sufficiently reliable to identify real differences in net torque production. This will enable the sources of rotations about the three axes to be explained in future research. Errors in angular kinematics and displacements of the wrist and ankles relative to range of motion were small for all but the ankles in the X (swimming) direction for FC2, who had a very vigorous kick. To avoid large errors when digitising the ankles of swimmers with vigorous kicks it is recommended that a marker on the shank could be used to calculate the ankle position based on the known displacements between knee, shank, and ankle markers. Key points: Using the methods described, an inverse dynamics approach based on 3D position data digitized manually from multiple camera views above and below the water surface is sufficiently reliable to yield insights
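    The repeatability coefficients quoted above are correlations between repeated digitisations of the same stroke cycle. A minimal sketch of that computation (the study's exact statistical procedure is not detailed in the abstract, so treat this as illustrative):

```python
import numpy as np

def repeatability_r(trial_a, trial_b):
    """Pearson correlation between two repeated digitisations of the
    same time series -- a simple coefficient of repeatability."""
    a = np.asarray(trial_a, dtype=float)
    b = np.asarray(trial_b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])
```

    Two perfectly proportional digitisations give r = 1.0; random digitising error pulls r downward, so values of 0.93-0.99 across five repeats indicate the manual digitising noise is small relative to the real signal.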

  11. Video Analytics

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Distante, Cosimo; Hua, Gang

    2017-01-01

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... World Videos. The workshops were run on December 4, 2016, in Cancun in Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers...

  12. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... World Videos. The workshops were run on December 4, 2016, in Cancun in Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers...

  13. The Dark Energy Camera

    Science.gov (United States)

    Flaugher, B.; Diehl, H. T.; Honscheid, K.; Abbott, T. M. C.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Antonik, M.; Ballester, O.; Beaufore, L.; Bernstein, G. M.; Bernstein, R. A.; Bigelow, B.; Bonati, M.; Boprie, D.; Brooks, D.; Buckley-Geer, E. J.; Campa, J.; Cardiel-Sas, L.; Castander, F. J.; Castilla, J.; Cease, H.; Cela-Ruiz, J. M.; Chappa, S.; Chi, E.; Cooper, C.; da Costa, L. N.; Dede, E.; Derylo, G.; DePoy, D. L.; de Vicente, J.; Doel, P.; Drlica-Wagner, A.; Eiting, J.; Elliott, A. E.; Emes, J.; Estrada, J.; Fausti Neto, A.; Finley, D. A.; Flores, R.; Frieman, J.; Gerdes, D.; Gladders, M. D.; Gregory, B.; Gutierrez, G. R.; Hao, J.; Holland, S. E.; Holm, S.; Huffman, D.; Jackson, C.; James, D. J.; Jonas, M.; Karcher, A.; Karliner, I.; Kent, S.; Kessler, R.; Kozlovsky, M.; Kron, R. G.; Kubik, D.; Kuehn, K.; Kuhlmann, S.; Kuk, K.; Lahav, O.; Lathrop, A.; Lee, J.; Levi, M. E.; Lewis, P.; Li, T. S.; Mandrichenko, I.; Marshall, J. L.; Martinez, G.; Merritt, K. W.; Miquel, R.; Muñoz, F.; Neilsen, E. H.; Nichol, R. C.; Nord, B.; Ogando, R.; Olsen, J.; Palaio, N.; Patton, K.; Peoples, J.; Plazas, A. A.; Rauch, J.; Reil, K.; Rheault, J.-P.; Roe, N. A.; Rogers, H.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schindler, R. H.; Schmidt, R.; Schmitt, R.; Schubnell, M.; Schultz, K.; Schurter, P.; Scott, L.; Serrano, S.; Shaw, T. M.; Smith, R. C.; Soares-Santos, M.; Stefanik, A.; Stuermer, W.; Suchyta, E.; Sypniewski, A.; Tarle, G.; Thaler, J.; Tighe, R.; Tran, C.; Tucker, D.; Walker, A. R.; Wang, G.; Watson, M.; Weaverdyck, C.; Wester, W.; Woods, R.; Yanny, B.; DES Collaboration

    2015-11-01

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6-9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  14. THE DARK ENERGY CAMERA

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, B.; Diehl, H. T.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Buckley-Geer, E. J. [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Honscheid, K. [Center for Cosmology and Astro-Particle Physics, The Ohio State University, Columbus, OH 43210 (United States); Abbott, T. M. C.; Bonati, M. [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile); Antonik, M.; Brooks, D. [Department of Physics and Astronomy, University College London, Gower Street, London, WC1E 6BT (United Kingdom); Ballester, O.; Cardiel-Sas, L. [Institut de Física d’Altes Energies, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Barcelona (Spain); Beaufore, L. [Department of Physics, The Ohio State University, Columbus, OH 43210 (United States); Bernstein, G. M. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Bernstein, R. A. [Carnegie Observatories, 813 Santa Barbara St., Pasadena, CA 91101 (United States); Bigelow, B.; Boprie, D. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States); Campa, J. [Centro de Investigaciones Energèticas, Medioambientales y Tecnológicas (CIEMAT), Madrid (Spain); Castander, F. J., E-mail: diehl@fnal.gov [Institut de Ciències de l’Espai, IEEC-CSIC, Campus UAB, Facultat de Ciències, Torre C5 par-2, E-08193 Bellaterra, Barcelona (Spain); Collaboration: DES Collaboration; and others

    2015-11-15

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
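    The quoted pixel pitch and plate scale are linked by the small-angle relation. A quick consistency check (the ~11.76 m effective focal length below is inferred from the quoted numbers, not stated in the abstract):

```python
ARCSEC_PER_RAD = 206265.0  # arcseconds per radian (small-angle constant)

def plate_scale_arcsec_per_px(pixel_size_m, focal_length_m):
    """Small-angle plate scale: angle on the sky subtended by one pixel."""
    return ARCSEC_PER_RAD * pixel_size_m / focal_length_m

# 15 um pixels at an effective focal length of ~11.76 m
# reproduce the quoted ~0.263 arcsec per pixel.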

  15. Underwater Noise Modeling in Lithuanian Area of the Baltic Sea

    Directory of Open Access Journals (Sweden)

    Donatas Bagočius

    2017-09-01

    Full Text Available Along with rising awareness among the public and scientific communities of the environmental and ecological impacts of underwater noise, the need for underwater noise modelling in the shallow Lithuanian area of the Baltic Sea has emerged. The Marine Strategy Framework Directive's underwater noise indicators refer to the possibility of evaluating Good Environmental Status using underwater noise measurements, as well as the possibility of modelling underwater noise. As known to date, the main anthropogenic contributors to underwater noise in the seas are shipping lanes, and the Lithuanian Baltic Sea area is no exception. This manuscript presents the methods used to develop a simplistic underwater ambient noise model intended for computing the underwater soundscape in the shallow Lithuanian area of the Baltic Sea.
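    A simplistic ambient noise model of this kind typically starts from textbook geometric spreading loss. A sketch of that starting point (absorption, bathymetry, and boundary effects, all essential in real shallow-water models like the one described, are deliberately omitted):

```python
import math

def transmission_loss_db(range_m, mode="spherical"):
    """Geometric spreading loss only (no absorption):
    spherical: TL = 20 log10(r); cylindrical: TL = 10 log10(r)."""
    k = 20.0 if mode == "spherical" else 10.0
    return k * math.log10(range_m)

def received_level_db(source_level_db, range_m, mode="spherical"):
    """Passive sonar equation stripped to RL = SL - TL."""
    return source_level_db - transmission_loss_db(range_m, mode)
```

    For instance, a ship with a hypothetical 180 dB source level heard 1 km away under spherical spreading arrives at about 120 dB; shallow water tends toward cylindrical spreading, which is one reason shipping noise carries so far in areas like the Lithuanian Baltic shelf.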

  16. Underwater Sensor Networks: A New Energy Efficient and Robust Architecture

    NARCIS (Netherlands)

    Climent, Salvador; Capella, Juan Vincente; Meratnia, Nirvana; Serrano, Juan José

    2012-01-01

    The specific characteristics of underwater environments introduce new challenges for networking protocols. In this paper, a specialized architecture for underwater sensor networks (UWSNs) is proposed and evaluated. Experiments are conducted in order to analyze the suitability of this protocol for

  17. Magnetohydrodynamic underwater vehicular propulsion systems

    International Nuclear Information System (INIS)

    Swallom, D.W.; Sadovnik, I.; Gibbs, J.S.; Gurol, H.; Nguyen, L.

    1990-01-01

    The development of magnetohydrodynamic propulsion systems for underwater vehicles is discussed. According to the authors, it is a high risk endeavor that offers the possibility of a number of significant advantages over conventional propeller propulsion systems. These advantages may include the potential for greater stealth characteristics, increased maneuverability, enhanced survivability, elimination of cavitation limits, and addition of a significant emergency propulsion system. The possibility of increased stealth is by far the most important advantage. A conceptual design study has been completed, with numerical results showing that these advantages may be obtained with a magnetohydrodynamic propulsion system in an annular configuration externally surrounding a generic study submarine; the system is neutrally buoyant and can operate with the existing submarine power plant. Classical submarine mission requirements make these characteristics of the magnetohydrodynamic propulsion system particularly appropriate. The magnetohydrodynamic annular propulsion system for a generic attack class submarine has been designed to take advantage of the magnetohydrodynamic thruster characteristics.

  18. Routing strategies for underwater gliders

    Science.gov (United States)

    Davis, Russ E.; Leonard, Naomi E.; Fratantoni, David M.

    2009-02-01

    Gliders are autonomous underwater vehicles that achieve long operating range by moving at speeds comparable to, or slower than, typical ocean currents. This paper addresses routing gliders to rapidly reach a specified waypoint or to maximize the ability to map a measured field, both in the presence of significant currents. For rapid transit in a frozen velocity field, direct minimization of travel time provides a trajectory "ray" equation. A simpler routing algorithm that requires less information is also discussed. Two approaches are developed to maximize the mapping ability, as measured by objective mapping error, of arrays of vehicles. In order to produce data sets that are readily interpretable, both approaches focus sampling near predetermined "ideal tracks" by measuring mapping skill only on those tracks, which are laid out with overall mapping skill in mind. One approach directly selects each vehicle's heading to maximize instantaneous mapping skill integrated over the entire array. Because mapping skill decreases when measurements are clustered, this method automatically coordinates glider arrays to maintain spacing. A simpler method that relies on manual control for array coordination employs a first-order control loop to balance staying close to the ideal track against maintaining vehicle speed to maximize mapping skill. While the various techniques discussed help in dealing with the slow speed of gliders, nothing can keep performance from being degraded when current speeds are comparable to vehicle speed. This suggests that glider utility could be greatly enhanced by the ability to operate at high speed for short periods when currents are strong.
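The abstract describes the first-order track-following loop only qualitatively. A toy sketch of one way such a loop could trade cross-track error against along-track progress; the gain, the arctangent shaping, and the saturation behavior are illustrative assumptions, not the authors' control law:

```python
import math

def heading_command(cross_track_error_m, track_heading_rad, k=0.001):
    """Steer back toward the ideal track in proportion to cross-track
    error, saturating at 90 degrees so the commanded heading never
    reverses the vehicle's along-track progress."""
    correction = math.atan(k * cross_track_error_m)
    return track_heading_rad - correction

# On the track: head straight along it.
on_track = heading_command(0.0, 0.0)
# Far off the track: head nearly perpendicular, back toward it.
far_off = heading_command(1e6, 0.0)
```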

  19. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    however, new opportunities and challenges have arisen with respect to communicating and distributing research results to different target audiences via video. At the same time, classical methodological problems, such as the researcher's positioning in relation to what is studied, remain relevant. Both classical and new...

  20. Habitat diversity in the Northeastern Gulf of Mexico: Selected video clips from the Gulfstream Natural Gas Pipeline digital archive

    Science.gov (United States)

    Raabe, Ellen A.; D'Anjou, Robert; Pope, Domonique K.; Robbins, Lisa L.

    2011-01-01

    This project combines underwater video with maps and descriptions to illustrate diverse seafloor habitats from Tampa Bay, Florida, to Mobile Bay, Alabama. A swath of seafloor was surveyed with underwater video to 100 meters (m) water depth in 1999 and 2000 as part of the Gulfstream Natural Gas System Survey. The U.S. Geological Survey (USGS) in St. Petersburg, Florida, in cooperation with Eckerd College and the Florida Department of Environmental Protection (FDEP), produced an archive of analog-to-digital underwater movies. Representative clips of seafloor habitats were selected from hundreds of hours of underwater footage. The locations of video clips were mapped to show the distribution of habitat and habitat transitions. The numerous benthic habitats in the northeastern Gulf of Mexico play a vital role in the region's economy, providing essential resources for tourism, natural gas, recreational water sports (fishing, boating, scuba diving), materials, fresh food, energy, a source of sand for beach renourishment, and more. These submerged natural resources are important to the economy but are often invisible to the general public. This product provides a glimpse of the seafloor with sample underwater video, maps, and habitat descriptions. It was developed to depict the range and location of seafloor habitats in the region but is limited by depth and by the survey track. It should not be viewed as comprehensive, but rather as a point of departure for inquiries and appreciation of marine resources and seafloor habitats. Further information is provided in the Resources section.

  1. Survivability design for a hybrid underwater vehicle

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Biao; Wu, Chao; Li, Xiang; Zhao, Qingkai; Ge, Tong [State Key Lab of Ocean Engineering, School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2015-03-10

    A novel hybrid underwater robotic vehicle (HROV) capable of working to the full ocean depth has been developed. The battery-powered vehicle operates in two modes: as an untethered autonomous vehicle in autonomous underwater vehicle (AUV) mode, and under remote control, connected to the surface vessel by a lightweight fiber-optic tether, in remotely operated vehicle (ROV) mode. Considering the hazardous underwater environment at the limiting depth and the hybrid operating modes, survivability has been placed on an equal level with the other design attributes of the HROV since the beginning of the project. This paper reports the survivability design elements for the HROV, including the basic vehicle design with integrated navigation and integrated communication, the emergency recovery strategy, a distributed architecture, a redundant bus, a dual battery package, an emergency jettison system, and a self-repairing control system.

  2. Survivability design for a hybrid underwater vehicle

    International Nuclear Information System (INIS)

    Wang, Biao; Wu, Chao; Li, Xiang; Zhao, Qingkai; Ge, Tong

    2015-01-01

    A novel hybrid underwater robotic vehicle (HROV) capable of working to the full ocean depth has been developed. The battery-powered vehicle operates in two modes: as an untethered autonomous vehicle in autonomous underwater vehicle (AUV) mode, and under remote control, connected to the surface vessel by a lightweight fiber-optic tether, in remotely operated vehicle (ROV) mode. Considering the hazardous underwater environment at the limiting depth and the hybrid operating modes, survivability has been placed on an equal level with the other design attributes of the HROV since the beginning of the project. This paper reports the survivability design elements for the HROV, including the basic vehicle design with integrated navigation and integrated communication, the emergency recovery strategy, a distributed architecture, a redundant bus, a dual battery package, an emergency jettison system, and a self-repairing control system.

  3. Recent developments in underwater repair welding

    International Nuclear Information System (INIS)

    Offer, H.P.; Chapman, T.L.; Willis, E.R.; Maslakowski, J.; Van Diemen, P.; Smith, B.W.

    2001-01-01

    As nuclear plants age and reactor internal components begin to show increased evidence of age-related phenomena such as corrosion and fatigue, interest in the development of cost-effective mitigation and repair remedies grows. One technology currently receiving greater development and application program focus is underwater welding. Underwater welding, as used herein, is the application of weld metal to a substrate surface that is wet, but locally dry in the immediate area surrounding the welding torch. The locally dry environment is achieved by the use of a mechanical device that is specifically designed for water exclusion from the welding torch, surface to be welded, and the welding groove. This paper will explore recent developments in the use of underwater welding as a mitigation and repair technique. (author)

  4. A Video Processing and Data Retrieval Framework for Fish Population Monitoring

    NARCIS (Netherlands)

    E.M.A.L. Beauxis-Aussalet (Emmanuelle); S. Palazzo; G. Nadarajan; E. Arslanova (Elvira); C. Spampinato (Concetto); L. Hardman (Lynda)

    2013-01-01

    In this work we present a framework for fish population monitoring through the analysis of underwater videos. We specifically focus on the user information needs, and on the dynamic data extraction and retrieval mechanisms that support them. Sophisticated though a software tool may be,

  5. Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network

    Directory of Open Access Journals (Sweden)

    Zhibin Yu

    2017-01-01

    Full Text Available Underwater inherent optical properties (IOPs) are the fundamental clues to many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal IOP measuring methods, but these methods are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method using only a single underwater image with the help of a deep artificial neural network. The power of artificial neural networks has been proven in image processing and computer vision with deep learning technology. However, image-based IOP estimation is a quite different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation looks at the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single RGB image, even a noisy one. The imaging depth information is considered as an aided input to help our model make better decisions.

  6. Development of measuring and control systems for underwater cutting of radioactive components

    International Nuclear Information System (INIS)

    Drews, P.; Fuchs, K.

    1990-01-01

    Shutdown and dismantling of nuclear power plants require special techniques for decommissioning the radioactive components involved. For reasons of safety, decommissioning components under water can be advantageous because of the radiation-shielding effect of water. In this project, research activities and development work focused on the realization of different sensor systems and their adaptation to cutting tasks. A new image-processing system has been developed in addition to the use of a modified underwater TV camera for optical cutting process control (plasma and abrasive wheel cutting). For control of process parameters, different inductive, ultrasonic, and optical sensors have been modified and tested. The investigations performed are aimed at assuring high-quality underwater cutting with the help of sensor systems specially adapted to cutting tasks, with dedicated signal processing and evaluation under microcomputer control. It is important that special attention be paid to the reduction of interference in image pick-up and processing. The measuring system has been designed and realized according to the demands of underwater cutting processes. The reliability of the system was tested in conjunction with a four-axis handling system.

  7. Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.

    Science.gov (United States)

    Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui

    2017-01-01

    Underwater inherent optical properties (IOPs) are the fundamental clues to many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal IOP measuring methods, but these methods are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method using only a single underwater image with the help of a deep artificial neural network. The power of artificial neural networks has been proven in image processing and computer vision with deep learning technology. However, image-based IOP estimation is a quite different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation looks at the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single RGB image, even a noisy one. The imaging depth information is considered as an aided input to help our model make better decisions.

  8. Authentication Approaches for Standoff Video Surveillance

    International Nuclear Information System (INIS)

    Baldwin, G.; Sweatt, W.; Thomas, M.

    2015-01-01

    Video surveillance for international nuclear safeguards applications requires authentication, which confirms to an inspector reviewing the surveillance images that both the source and the integrity of those images can be trusted. To date, all such authentication approaches originate at the camera. Camera authentication would not suffice for a "standoff video" application, where the surveillance camera views an image piped to it from a distant objective lens. Standoff video might be desired in situations where it does not make sense to expose sensitive and costly camera electronics to contamination, radiation, water immersion, or other adverse environments typical of hot cells, reprocessing facilities, and within spent fuel pools, for example. In this paper, we offer optical architectures that introduce a standoff distance of several metres between the scene and camera. Several schemes enable one to authenticate not only that the extended optical path is secure, but also that the scene is being viewed live. They employ optical components with remotely-operated spectral, temporal, directional, and intensity properties that are under the control of the inspector. If permitted by the facility operator, illuminators, reflectors and polarizers placed in the scene offer further possibilities. Any tampering that would insert an alternative image source for the camera, although undetectable with conventional cryptographic authentication of digital camera data, is easily exposed using the approaches we describe. Sandia National Laboratories is a multi-programme laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. Support to Sandia National Laboratories provided by the NNSA Next Generation Safeguards Initiative is gratefully acknowledged. SAND2014-3196 A. (author)

  9. Communities, Cameras, and Conservation

    Science.gov (United States)

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  10. Mars Observer camera

    Science.gov (United States)

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; Veverka, J.; Ravine, M. A.; Soulanille, T. A.

    1992-01-01

    The Mars Observer camera (MOC) is a three-component system (one narrow-angle and two wide-angle cameras) designed to take high spatial resolution pictures of the surface of Mars and to obtain lower spatial resolution, synoptic coverage of the planet's surface and atmosphere. The cameras are based on the 'push broom' technique; that is, they do not take 'frames' but rather build pictures, one line at a time, as the spacecraft moves around the planet in its orbit. MOC is primarily a telescope for taking extremely high resolution pictures of selected locations on Mars. Using the narrow-angle camera, areas ranging from 2.8 km x 2.8 km to 2.8 km x 25.2 km (depending on available internal digital buffer memory) can be photographed at about 1.4 m/pixel. Additionally, lower-resolution pictures (to a lowest resolution of about 11 m/pixel) can be acquired by pixel averaging; these images can be much longer, ranging up to 2.8 x 500 km at 11 m/pixel. High-resolution data will be used to study sediments and sedimentary processes, polar processes and deposits, volcanism, and other geologic/geomorphic processes.
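The swath widths and pixel scales quoted above are mutually consistent, as a quick check shows. The numbers are taken directly from the abstract; the factor-of-8 averaging step is inferred from the two quoted pixel scales:

```python
# Narrow-angle camera: a 2.8 km swath at about 1.4 m/pixel
pixels_across = 2.8e3 / 1.4            # 2000 pixels per image line

# Pixel averaging to the quoted ~11 m/pixel is an 8x reduction
averaging_factor = 8
reduced_scale = averaging_factor * 1.4  # 11.2 m/pixel

# Length of a 2.8 km x 500 km strip, in lines, at the reduced scale
lines_along = 500e3 / reduced_scale
```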

  11. The world's fastest camera

    CERN Multimedia

    Piquepaille, Roland

    2006-01-01

    This image processor is not your typical digital camera. It took 20 people six years and $6 million to build the "Regional Calorimeter Trigger" (RCT), which will be a component of the Compact Muon Solenoid (CMS) experiment, one of the detectors on the Large Hadron Collider (LHC) in Geneva, Switzerland (1 page)

  12. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    researchers, cameras, and filmed subjects already inherently comprise analytical decisions. It is these ethnographic qualities inherent in audiovisual and photographic imagery that make it of particular value to a participatory anthropological enterprise that seeks to resist analytic closure and seeks instead...

  13. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e., automatically controlling the virtual...

  14. CRED Optical Validation Data at the island of Ta'u in American Samoa, 2006 to support Benthic Habitat Mapping (TOAD)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Optical validation data were collected using a Tethered Optical Assessment Device (TOAD), an underwater sled equipped with an underwater digital video camera and...

  15. CRED Optical Validation Data at the island of Ta'u in American Samoa, 2004 to Support Benthic Habitat Mapping (TOAD)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Optical validation data were collected using a Tethered Optical Assessment Device (TOAD), an underwater sled equipped with an underwater digital video camera and...

  16. CRED Optical Validation Data at the islands of Ofu and Olosega in American Samoa, 2004 to Support Benthic Habitat Mapping (TOAD)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Optical validation data were collected using a Tethered Optical Assessment Device (TOAD), an underwater sled equipped with an underwater digital video camera and...

  17. Towed Optical Assessment Device (TOAD) Data to Support Benthic Habitat Mapping since 2001

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Optical validation data were collected using a Tethered Optical Assessment Device (TOAD), an underwater sled equipped with an underwater digital video camera and...

  18. Efficient Modelling Methodology for Reconfigurable Underwater Robots

    DEFF Research Database (Denmark)

    Nielsen, Mikkel Cornelius; Blanke, Mogens; Schjølberg, Ingrid

    2016-01-01

    This paper considers the challenge of applying reconfigurable robots in an underwater environment. The main result presented is the development of a model for a system comprised of N, possibly heterogeneous, robots dynamically connected to each other and moving with 6 Degrees of Freedom (DOF). This paper presents an application of the Udwadia-Kalaba Equation for modelling the Reconfigurable Underwater Robots. The constraints developed to enforce the rigid connection between robots in the system are derived through restrictions on relative distances and orientations. To avoid singularities in the orientation and, thereby, allow the robots to undertake any relative configuration, the attitude is represented in Euler parameters.

  19. Underwater laser cutting of metallic structures

    International Nuclear Information System (INIS)

    Alfille, J.P.; Schildknecht, J.; Ramaswami, V.S.

    1993-01-01

    Within the framework of a European contract, the feasibility of underwater cutting with a high-power CO2 laser is studied. The aim of this work is the dismantling of metallic structures in reactor pools. The paper analyzes the general concept of the experimental device, the underwater cutting head, the test vessel, examples of cuts made under dismantling conditions with a 500 W CO2 laser, and examples of cuts made with a 5 kW CO2 laser. (author). 2 refs., 9 figs., 2 tabs

  20. Underwater noise from offshore oil production vessels.

    Science.gov (United States)

    Erbe, Christine; McCauley, Robert; McPherson, Craig; Gavrilov, Alexander

    2013-06-01

    Underwater acoustic recordings of six Floating Production Storage and Offloading (FPSO) vessels moored off Western Australia are presented. Monopole source spectra were computed for use in environmental impact assessments of underwater noise. Given that operations on the FPSOs varied over the period of recording, and were sometimes unknown, the authors present a statistical approach to noise level estimation. No significant or consistent aspect dependence was found for the six FPSOs. Noise levels did not scale with FPSO size or power. The 5th, 50th (median), and 95th percentile source levels (broadband, 20 to 2500 Hz) were 188, 181, and 173 dB re 1 μPa @ 1 m, respectively.
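The percentile statistics reported above can be reproduced mechanically. Because percentiles are order statistics, they commute with the monotone dB mapping and can be taken directly on dB values (unlike means, which would require conversion to linear intensity first). The sample data below is synthetic, for illustration only, not the FPSO recordings:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic broadband source levels (dB re 1 uPa @ 1 m), illustrative only
levels_db = rng.normal(loc=181.0, scale=4.5, size=1000)

# 95th, 50th (median), and 5th percentile source levels, as in the abstract
p95, p50, p5 = np.percentile(levels_db, [95, 50, 5])
```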

  1. CCD Camera Detection of HIV Infection.

    Science.gov (United States)

    Day, John R

    2017-01-01

    Rapid and precise quantification of the infectivity of HIV is important for molecular virologic studies, as well as for measuring the activities of antiviral drugs and neutralizing antibodies. An indicator cell line, a CCD camera, and image-analysis software are used to quantify HIV infectivity. The cells of the P4R5 line, which express the receptors for HIV infection as well as β-galactosidase under the control of the HIV-1 long terminal repeat, are infected with HIV and then incubated 2 days later with X-gal to stain the infected cells blue. Digital images of monolayers of the infected cells are captured using a high-resolution CCD video camera and a macro video zoom lens. A software program is developed to process the images and to count the blue-stained foci of infection. The described method allows rapid quantification of infected cells over a wide range of viral inocula with reproducibility and accuracy, and at relatively low cost.
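A minimal sketch of the foci-counting step: threshold the stained channel, then count connected components, discarding single-pixel specks. The authors' actual software, threshold, and size cutoff are not specified; the values below are illustrative assumptions:

```python
import numpy as np

def count_foci(blue_channel, threshold=128, min_pixels=4):
    """Count blue-stained foci: threshold the channel, then label
    4-connected components with a flood fill, discarding specks."""
    mask = blue_channel > threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # Flood-fill one component, measuring its pixel count
                stack, size = [(sy, sx)], 0
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if size >= min_pixels:
                    count += 1
    return count

# Synthetic monolayer image with two well-separated stained foci
img = np.zeros((64, 64), dtype=np.uint8)
img[10:15, 10:15] = 200
img[40:46, 40:46] = 220
n_foci = count_foci(img)
```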

  2. The PAU Camera

    Science.gov (United States)

    Casas, R.; Ballester, O.; Cardiel-Sas, L.; Carretero, J.; Castander, F. J.; Castilla, J.; Crocce, M.; de Vicente, J.; Delfino, M.; Fernández, E.; Fosalba, P.; García-Bellido, J.; Gaztañaga, E.; Grañena, F.; Jiménez, J.; Madrid, F.; Maiorino, M.; Martí, P.; Miquel, R.; Neissner, C.; Ponce, R.; Sánchez, E.; Serrano, S.; Sevilla, I.; Tonello, N.; Troyano, I.

    2011-11-01

    The PAU Camera (PAUCam) is a wide-field camera designed to be mounted at the William Herschel Telescope (WHT) prime focus, located at the Observatorio del Roque de los Muchachos on the island of La Palma (Canary Islands). Its primary function is to carry out a cosmological survey, the PAU Survey, covering an area of several hundred square degrees of sky. Its purpose is to determine positions and distances using photometric redshift techniques. To achieve accurate photo-z's, PAUCam will be equipped with 40 narrow-band filters covering the range from 450 to 850 nm, and six broad-band filters, those of the SDSS system plus the Y band. To fully cover the focal plane delivered by the telescope optics, 18 CCDs of 2k x 4k pixels are needed. The pixels are square, 15 μm on a side. The optical characteristics of the prime focus corrector deliver a field of view in which eight of these CCDs will have an illumination of more than 95%, covering a field of 40 arcminutes. The remaining CCDs will occupy the vignetted region, extending the field diameter to one degree. Two of the CCDs will be devoted to auto-guiding. This camera has some innovative features. Firstly, both the broad-band and the narrow-band filters will be placed in mobile trays, each hosting at most 16 filters. These are located inside the cryostat, a few millimeters in front of the CCDs when observing. Secondly, a pressurized liquid nitrogen tank outside the camera will feed a boiler inside the cryostat with a controlled mass flow. The read-out electronics will use the Monsoon architecture, originally developed by NOAO, modified and manufactured by our team in the frame of the DECam project (the camera used in the DES Survey). PAUCam will also be available to the astronomical community of the WHT.

  3. MISR radiometric camera-by-camera Cloud Mask V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset. It is used to determine whether a scene is classified as clear or cloudy. A new parameter has...

  4. Analysis of Camera Arrays Applicable to the Internet of Things.

    Science.gov (United States)

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-03-22

    The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and helps with viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used and analyzed in various applications and research work, there has been little comparison between them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold of shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used as either a parallel or a converged camera array, and take images and videos with it to identify the threshold.
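A small numeric sketch of the parallax behavior the analysis turns on. For a parallel rig, horizontal disparity of a point at depth Z follows the standard pinhole relation d = f·b/Z (a converged, toed-in rig additionally introduces vertical parallax through keystone distortion, which is harder to express in one line). The baseline, focal length, and depths below are illustrative numbers, not the authors' setup:

```python
def parallel_disparity(baseline_m, focal_px, depth_m):
    """Horizontal disparity (pixels) of a point at depth Z for a
    parallel stereo pair: d = f * b / Z (pinhole model)."""
    return focal_px * baseline_m / depth_m

# Disparity shrinks quickly with shooting distance:
near = parallel_disparity(0.1, 1000.0, 2.0)   # at 2 m
far = parallel_disparity(0.1, 1000.0, 7.0)    # at 7 m
```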

  5. Managed Video as a Service for a Video Surveillance Model

    Directory of Open Access Journals (Sweden)

    Dan Benta

    2009-01-01

    Full Text Available The increasing demand for security systems has resulted in the rapid development of video surveillance, and video surveillance has turned into a major area of interest and a management challenge. Personal experience in specialized companies helped me adapt the demands of users of video security systems to system performance. It is known that people wish to obtain maximum profit with minimum effort, but security is not neglected. Surveillance systems and video monitoring should provide only necessary information and record only when there is activity. Via IP, video surveillance services provide more safety in this sector, being able to record information on servers located somewhere other than at the IP cameras. These systems also allow real-time monitoring of goods or activities that take place in supervised perimeters. Live viewing and recording can be done via the Internet from any computer, using a web browser. Access to the surveillance system is granted after user and password authentication.

  6. still114_0402b-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  7. still116_0501d-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  8. Olympic Coast National Marine Sanctuary - stil120_0602a - Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during September 2006. Video data...

  9. still116_0501s-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  10. still116_0501c-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  11. Olympic Coast National Marine Sanctuary - vid120_0602a - Point coverage of locations of video imagery depicting sediment types at various locations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during September 2006. Video data...

  12. still115_0403-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  13. still114_0402c-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  14. still116_0501n-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  15. Robust spatiotemporal matching of electronic slides to presentation videos.

    Science.gov (United States)

    Fan, Quanfu; Barnard, Kobus; Amir, Arnon; Efrat, Alon

    2011-08-01

    We describe a robust and efficient method for automatically matching and time-aligning electronic slides to videos of corresponding presentations. Matching electronic slides to videos provides new methods for indexing, searching, and browsing videos in distance-learning applications. However, robust automatic matching is challenging due to varied frame composition, slide distortion, camera movement, low-quality video capture, and arbitrary slide sequences. Our fully automatic approach combines image-based matching of slides to video frames with a temporal model for slide changes and camera events. To address these challenges, we begin by extracting scale-invariant feature transform (SIFT) keypoints from both slides and video frames, and matching them subject to a consistent projective transformation (homography) by using random sample consensus (RANSAC). We use the initial set of matches to construct a background model and a binary classifier for separating video frames showing slides from those without. We then introduce a new matching scheme for exploiting less distinctive SIFT keypoints that enables us to tackle more difficult images. Finally, we improve upon the matching based on visual information by using estimated matching probabilities as part of a hidden Markov model (HMM) that integrates temporal information and detected camera operations. Detailed quantitative experiments characterize each part of our approach and demonstrate an average accuracy of over 95% on 13 presentation videos.
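The geometric core of the approach, fitting a homography to keypoint correspondences under RANSAC, can be sketched independently of SIFT itself. This is a generic direct-linear-transform (DLT) estimator with a RANSAC loop over point correspondences, not the authors' implementation; thresholds and iteration counts are illustrative:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: fit H mapping src -> dst (Nx2 arrays)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to Nx2 points (homogeneous divide)."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=200, thresh=3.0, seed=0):
    """RANSAC: repeatedly fit H to 4 random correspondences and keep the
    model with the most inliers (reprojection error below thresh px)."""
    rng = np.random.default_rng(seed)
    best_H, best_inliers = None, np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_H, best_inliers = H, inliers
    if best_H is not None and best_inliers.sum() >= 4:
        # Refit on all inliers of the best model
        best_H = fit_homography(src[best_inliers], dst[best_inliers])
    return best_H, best_inliers
```

Keypoint matches that vote for a common H are kept as slide-to-frame correspondences; matches falling on the speaker or background fail the reprojection test and are discarded, which is what makes the consistency constraint effective.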

  16. Design of video interface conversion system based on FPGA

    Science.gov (United States)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. The Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by the video processing unit through a Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with a minimal board size.
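One of the pipeline stages listed above, color space conversion, can be sketched in software. The coefficients below are the standard full-range BT.601 ones; the abstract does not say which standard the design uses, so this choice is an illustrative assumption.

```python
import numpy as np

def ycbcr_to_rgb(ycbcr):
    """Full-range BT.601 YCbCr -> RGB conversion, the kind of color space
    conversion an FPGA pipeline implements with fixed-point multipliers.
    Input: float array of shape (..., 3)."""
    y = ycbcr[..., 0]
    cb = ycbcr[..., 1] - 128.0
    cr = ycbcr[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)

# Neutral gray (Cb = Cr = 128) must map to equal R, G, B values.
gray = ycbcr_to_rgb(np.array([[128.0, 128.0, 128.0]]))
```

In hardware the same matrix would typically be realized with shift-and-add fixed-point arithmetic rather than floating point.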

  17. New nuclear medicine gamma camera systems

    International Nuclear Information System (INIS)

    Villacorta, Edmundo V.

    1997-01-01

    The acquisition of the Open E.CAM and DIACAM gamma cameras by Makati Medical Center is expected to enhance the capabilities of its nuclear medicine facilities. When used as an aid to diagnosis, nuclear medicine entails the introduction of a minute amount of radioactive material into the patient; thus, no reaction or side-effect is expected. When it reaches the particular target organ, depending on the radiopharmaceutical, a lesion will appear as a decreased (cold) or increased (hot) area in the radioactive distribution as recorded by the gamma cameras. Gamma camera images in slices, or SPECT (Single Photon Emission Computed Tomography), increase the sensitivity and accuracy of detecting smaller and deeply seated lesions, which otherwise may not be detected in regular single planar images. Due to the 'open' design of the equipment, claustrophobic patients will no longer feel enclosed during the procedure. These new gamma cameras yield improved resolution and superb image quality, and the higher photon sensitivity shortens image acquisition time. The E.CAM, the latest-generation gamma camera, features a variable-angle dual-head system, the only one available in the Philippines, and is an excellent choice for Myocardial Perfusion Imaging (MPI). From the usual 45 minutes, the acquisition time for gated SPECT imaging of the heart has now been remarkably reduced to 12 minutes. 'Gated' refers to snap-shots of the heart in selected phases of its contraction and relaxation as triggered by the ECG. The DIACAM is installed in a room with access outside the main entrance of the department, intended especially for bed-borne patients. Both systems are equipped with a network of high-performance Macintosh ICOND acquisition and processing computers. Added to the hardware is the ICON processing software, which allows total simultaneous acquisition and processing capabilities at the same operator's terminal. Video film and color printers are also provided. Together

  18. High-Speed Smart Camera with High Resolution

    Directory of Open Access Journals (Sweden)

    J. Dubois

    2007-02-01

    Full Text Available High-speed video cameras are powerful tools for investigating, for instance, biomechanics or the movements of mechanical parts in manufacturing processes. In recent years, the use of CMOS sensors instead of CCDs has enabled the development of high-speed video cameras offering digital outputs, readout flexibility, and lower manufacturing costs. In this paper, we propose a high-speed smart camera based on a CMOS sensor with embedded processing. Two types of algorithms have been implemented. The first is a compression algorithm, specific to high-speed imaging constraints, which reduces the large data flow (6.55 Gbps) for transfer over a serial output link (USB 2.0). The second type of algorithm is dedicated to feature extraction such as edge detection, marker extraction, image analysis, wavelet analysis, and object tracking. These image processing algorithms have been implemented in an FPGA embedded inside the camera. These implementations are low-cost in terms of hardware resources. This FPGA technology allows us to process 500 images per second at 1280×1024 resolution in real time. The camera system is a reconfigurable platform; other image processing algorithms can be implemented.
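The quoted 6.55 Gbps data flow is consistent with the sensor figures given above if one assumes 10 bits per pixel (the bit depth is not stated in the abstract):

```python
# Raw sensor data rate: 1280 x 1024 pixels, 500 frames/s.
# Assuming 10 bits per pixel (a common CMOS ADC depth; not stated above).
bits_per_pixel = 10
rate_bps = 1280 * 1024 * 500 * bits_per_pixel
print(rate_bps / 1e9)  # -> 6.5536, i.e. ~6.55 Gbps
```

This is well above USB 2.0's 480 Mbps, which is why on-camera compression is needed before the serial link.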

  19. High-Speed Smart Camera with High Resolution

    Directory of Open Access Journals (Sweden)

    Mosqueron R

    2007-01-01

    Full Text Available High-speed video cameras are powerful tools for investigating, for instance, biomechanics or the movements of mechanical parts in manufacturing processes. In recent years, the use of CMOS sensors instead of CCDs has enabled the development of high-speed video cameras offering digital outputs, readout flexibility, and lower manufacturing costs. In this paper, we propose a high-speed smart camera based on a CMOS sensor with embedded processing. Two types of algorithms have been implemented. The first is a compression algorithm, specific to high-speed imaging constraints, which reduces the large data flow (6.55 Gbps) for transfer over a serial output link (USB 2.0). The second type of algorithm is dedicated to feature extraction such as edge detection, marker extraction, image analysis, wavelet analysis, and object tracking. These image processing algorithms have been implemented in an FPGA embedded inside the camera. These implementations are low-cost in terms of hardware resources. This FPGA technology allows us to process 500 images per second at 1280×1024 resolution in real time. The camera system is a reconfigurable platform; other image processing algorithms can be implemented.

  20. IVO develops a new repair technique for underwater sites. Viscous doughlike substance seals underwater cracks

    Energy Technology Data Exchange (ETDEWEB)

    Klingstedt, G.; Leisio, C. [ed.]

    1998-07-01

    A viscous sealant is revolutionizing the repair of the stone and concrete masonry of underwater dams, bridges and canals. There is now no need for expensive and time-consuming cofferdams, since a diver can extrude quick-setting mortar into underwater structures needing repair. This technique has worked well in recent years in various parts of Finland, even in strongly flowing water. IVO experts are now starting to look beyond the borders of Finland.

  1. Robust automatic camera pointing for airborne surveillance

    Science.gov (United States)

    Dwyer, David; Wren, Lee; Thornton, John; Bonsor, Nigel

    2002-08-01

    Airborne electro-optic surveillance from a moving platform currently requires regular interaction from a trained operator. Even simple tasks such as fixating on a static point on the ground can demand constant adjustment of the camera orientation to compensate for platform motion. In order to free up operator time for other tasks such as navigation and communication with ground assets, an automatic gaze control system is needed. This paper describes such a system, based purely on tracking points within the video image. A number of scene points are automatically selected and their inter-frame motion tracked. The scene motion is then estimated using a model of a planar projective transform. For reliable and accurate camera pointing, the modeling of the scene motion must be robust to common problems such as scene point obscuration, objects moving independently within the scene, and image noise. This paper details a COTS-based system for automatic camera fixation and describes ways of preventing objects moving in the scene or poor motion estimates from corrupting the scene motion model.

  2. The modular integrated video system (MIVS): A new generation of video surveillance equipment

    International Nuclear Information System (INIS)

    Gaertner, K.J.; Dawes, E.W.

    1990-01-01

    Over the years, one of the ''workhorses'' of the IAEA's safeguards system has been an 8-mm film camera used for surveillance purposes at many safeguarded nuclear facilities around the world. Recently, however, the Agency has been moving away from these units in favour of the advanced video systems that today have taken over the market. Production of 8-mm film and cameras has been virtually discontinued worldwide. The Agency's transition to modern video systems, and the replacement of aging 8-mm cameras in some 290 nuclear facilities, has proven to be a challenging effort in terms of technology, quality assurance, cost effectiveness, and scheduling. This article describes the development of three alternative video systems to replace the 8-mm film camera, developed through IAEA safeguards support programmes with Japan, the Federal Republic of Germany, and the United States. It reviews the progress made in various areas, and describes the features and advantages of one system - the modular integrated video system (MIVS) - which is going to be deployed as a primary safeguards tool through the 1990s.

  3. KeproVt : underwater robotic system for visual inspection of nuclear reactor internals

    International Nuclear Information System (INIS)

    Cho, Byung-Hak; Byun, Seung-Hyun; Shin, Chang-Hoon; Yang, Jang-Bum; Song, Sung-Il; Oh, Jung-Mook

    2004-01-01

    An underwater robotic system for visual inspection of reactor vessel internals has been developed. The Korea Electric Power Robot for Visual Test (KeproVt) consists of an underwater robot, a vision-processor-based measuring unit, a master control station and a servo control station. The vision-processor-based measuring unit employs a first-of-a-kind engineering technology in nuclear robotics. The vision processor uses a camera located at the top of the water level, referenced to the reactor center line, to acquire an image of the robot and compute its location and orientation. The robot, guided by the control station with the measuring unit, can be controlled to execute any motion at any position in the reactor vessel with ±1 cm positioning and ±2 deg. heading accuracies - precise enough to inspect reactor internals. A simple and fast installation process is emphasized in the developed system: the installation consists of hooking a vision camera on the guide rail of the refueling machine and putting a small robot (14.5 kg in weight) in the reactor cavity pool. The easy installation and automatic operation meet the demand of shortening the reactor outage and reducing the number of inspection personnel. The developed robotic system was successfully deployed at Yonggwang Nuclear Unit 1 for the visual inspection of reactor internals.

  4. ORASIS- a coastal video monitoring platform

    Science.gov (United States)

    Vousdoukas, Michalis

    2013-04-01

    A coastal video monitoring system typically consists of one or more video cameras connected to a computer, acquiring coastal imagery for 10 min every hour during daylight at an acquisition frequency of 1-4 Hz. Images are processed to generate the system's 'basic products', i.e. time-averaged, variance, snapshot and timestack images, which are all projected into geographic coordinates using standard photogrammetric techniques. A set of post-processing tools then allows daily monitoring of the intertidal topography, nearshore bar and shoreline position, as well as swash motions. ORASIS is a platform which has been deployed at 4 sites so far: Faro Beach (Portugal), Cadiz (Spain), and Ammoudara and Koutsounari beaches (Crete, Greece), all unique and very challenging sites in terms of coastal morphodynamics (http://www.vousdoukas.fzk-nth.de/index_video.html). ORASIS is not hardware, but (i) software developed to acquire and process coastal imagery; and (ii) expertise related to the selection and installation of different camera models and lenses. The existing coastal monitoring systems have been based on different operating systems and computers, as well as different combinations of camera and lens models, depending on each project's budget and specific needs. Different open-source GUI applications are available to estimate intrinsic and extrinsic camera parameters, geo-rectify images, extract the shoreline, and generate swash time series from timestack images.
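For a (locally) planar beach surface, the projection of image pixels into geographic coordinates mentioned above reduces to applying an image-to-world homography obtained from the camera calibration. A minimal sketch, with a purely hypothetical calibration matrix:

```python
import numpy as np

def georectify(H, pixels):
    """Map image pixels (u, v) to planar world coordinates (x, y)
    using a 3x3 image-to-world homography H (homogeneous division)."""
    p = np.hstack([pixels, np.ones((len(pixels), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Hypothetical homography from a camera calibration (values for illustration).
H = np.array([[0.05, 0.00, -10.0],
              [0.00, 0.07,  -4.0],
              [0.00, 1e-4,   1.0]])
world = georectify(H, np.array([[320.0, 240.0]]))
```

In practice H is estimated from surveyed ground control points visible in the image; the GUI tools mentioned above automate that estimation.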

  5. Object Detection and Tracking-Based Camera Calibration for Normalized Human Height Estimation

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-01-01

    Full Text Available This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration. The proposed method consists of three steps: (i) moving human detection and tracking, (ii) automatic camera calibration, and (iii) human height estimation and error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The proposed method can be applied to object-based video surveillance systems and digital forensics.
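The abstract does not give the estimation formula, but for a calibrated camera with a level optical axis over a flat ground plane, a person's height follows from simple pinhole geometry. The toy function below illustrates that idea only; it is not the paper's algorithm, which calibrates automatically from tracked humans.

```python
def human_height(y_foot, y_head, cam_height):
    """Estimate the height of a person standing on a flat ground plane.

    y_foot, y_head: image rows of the foot and head, measured downward
    from the principal point (pixels). cam_height: camera height above
    the ground (meters). Assumes a level optical axis and pinhole optics:
    y = f * (H_cam - Y) / Z, hence h = H_cam * (y_foot - y_head) / y_foot.
    """
    return cam_height * (y_foot - y_head) / y_foot

# Camera 3 m above the ground; foot at row 300, head at row 120:
h = human_height(300.0, 120.0, 3.0)  # -> 1.8 m
```

Note how the focal length and distance cancel: only the ratio of image rows and the camera height matter under these assumptions.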

  6. A smart camera for High Dynamic Range imaging

    OpenAIRE

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2013-01-01

    A camera or a video camera is able to capture only part of the information in a high dynamic range scene. The same scene can be almost totally perceived by the human visual system. This is especially true for real scenes where the difference in light intensity between the dark areas and bright areas is high. The imaging technique which can overcome this problem is called HDR (High Dynamic Range) imaging. It produces images from a set of multiple LDR images (Low Dynamic Range), capt...
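The HDR merging idea can be sketched as a weighted average of exposure-normalized LDR frames. This is a simplified stand-in (it skips recovery of the camera response curve), not the method of the paper; the weighting function is an illustrative choice.

```python
import numpy as np

def fuse_hdr(ldr_images, exposure_times):
    """Merge LDR exposures into a relative radiance map.
    Each pixel is divided by its exposure time and averaged with a weight
    favouring mid-range (well-exposed) values."""
    acc = np.zeros_like(ldr_images[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(ldr_images, exposure_times):
        img = img.astype(float)
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0  # hat-shaped weight
        acc += w * img / t
        wsum += w
    return acc / np.maximum(wsum, 1e-9)

# Two exposures of the same scene: halving the exposure time should
# halve the pixel values, and both should fuse to the same radiance.
a = np.full((2, 2), 128, dtype=np.uint8)   # exposure 1.0 s
b = np.full((2, 2), 64, dtype=np.uint8)    # exposure 0.5 s
radiance = fuse_hdr([a, b], [1.0, 0.5])
```

Because saturated and underexposed pixels receive near-zero weight, each region of the fused map is dominated by whichever exposure captured it best.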

  7. SHIP CLASSIFICATION FROM MULTISPECTRAL VIDEOS

    Directory of Open Access Journals (Sweden)

    Frederique Robert-Inacio

    2012-05-01

    Full Text Available Surveillance of a seaport can be achieved by different means: radar, sonar, cameras, radio communications and so on. Such surveillance aims, on the one hand, to manage cargo and tanker traffic and, on the other hand, to prevent terrorist attacks in sensitive areas. In this paper an application of video surveillance to a seaport entrance is presented and, more particularly, the different steps enabling the classification of mobile shapes. This classification is based on a parameter measuring the degree of similarity between the shape under study and a set of reference shapes. The classification result describes the considered mobile in terms of shape and speed.
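The similarity parameter itself is not specified in the abstract. As an illustration of the nearest-reference scheme it describes, a Jaccard (intersection-over-union) score between binary silhouettes can play that role:

```python
import numpy as np

def similarity(a, b):
    """Similarity degree between two binary shape masks (Jaccard index).
    A stand-in for the unspecified similarity parameter in the paper."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def classify(shape, references):
    """Assign the shape to the class of the most similar reference mask."""
    scores = {name: similarity(shape, ref) for name, ref in references.items()}
    return max(scores, key=scores.get)

# Toy silhouettes on an 8x8 grid (hypothetical class names).
ra = np.zeros((8, 8), bool); ra[:, :4] = True   # long hull profile
rb = np.zeros((8, 8), bool); rb[:2, :] = True   # low, wide profile
s = np.zeros((8, 8), bool); s[:, :3] = True     # observed mobile shape
label = classify(s, {"tanker": ra, "patrol": rb})
```

Any scale- and rotation-normalization would be applied to the masks before scoring; it is omitted here for brevity.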

  8. Underwater Advanced Time-Domain Electromagnetic System

    Science.gov (United States)

    2017-03-03

    Report excerpts: Objective: System is sufficiently waterproofed - the array remained underwater up to... Objective: Calibration method can be used both topside... Additional background variability is observed at early times, as illustrated in Figure 15 (the layout of this figure is the same as Figure 14). The objectives are discussed in the following sections and summarized in Table 5.

  9. Underwater Adhesives Retrofit Pipelines with Advanced Sensors

    Science.gov (United States)

    2015-01-01

    Houston-based Astro Technology Inc. used a partnership with Johnson Space Center to pioneer an advanced fiber-optic monitoring system for offshore oil pipelines. The company's underwater adhesives allow it to retrofit older deepwater systems in order to measure pressure, temperature, strain, and flow properties, giving energy companies crucial data in real time and significantly decreasing the risk of a catastrophe.

  10. Detection of Underwater UXOs in Mud

    Science.gov (United States)

    2013-04-01

    Report excerpts: [10] P.T. Gough and D.W. Hawkins, "Imaging algorithms...", 2nd International Conference on Underwater Acoustic Measurements, Crete, Greece, 2007. Runs 275 and 325 follow the same track and run 322 follows a track on the opposite side of the swath. The LF SAS image of run 325 is shown

  11. Adaptive turbo equalization for underwater acoustic communication

    NARCIS (Netherlands)

    Cannelli, L; Leus, G.; Dol, H.S.; Walree, P.A. van

    2013-01-01

    In this paper a multiband transceiver designed for underwater channels is presented. Multi-branch filtering at the receiver is used to leverage the diversity offered by a multi-scale multi-lag scenario. The multi-branch bank of filters is constructed by estimating scale and delay coefficients

  12. Underwater noise generated by offshore pile driving

    NARCIS (Netherlands)

    Tsouvalas, A.

    2015-01-01

    Anthropogenic noise emission in the marine environment has always been an environmental issue of serious concern. In particular, the noise generated during the installation of foundation piles is considered to be one of the most significant sources of underwater noise pollution. This is mainly

  13. Evolution: Fossil Ears and Underwater Sonar.

    Science.gov (United States)

    Lambert, Olivier

    2016-08-22

    A key innovation in the history of whales was the evolution of a sonar system together with high-frequency hearing. Fossils of an archaic toothed whale's inner ear bones provide clues for a stepwise emergence of underwater echolocation ability. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Impacts of underwater noise on marine vertebrates

    NARCIS (Netherlands)

    Liebschner, Alexander; Seibel, Henrike; Teilmann, Jonas; Wittekind, Dietrich; Parmentier, Eric; Dähne, Michael; Dietz, Rune; Driver, Jörg; Elk, van Cornelis; Everaarts, Eligius; Findeisen, Henning; Kristensen, Jacob; Lehnert, Kristina; Lucke, Klaus; Merck, Thomas; Müller, Sabine; Pawliczka, Iwona; Ronnenberg, Katrin; Rosenberger, Tanja; Ruser, Andreas; Tougaard, Jakob; Schuster, Max; Sundermeyer, Janne; Sveegaard, Signe; Siebert, Ursula

    2016-01-01

    The project conducts application-oriented research on impacts of underwater noise on marine vertebrates in the North and Baltic Seas. In distinct subprojects, the hearing sensitivity of harbor porpoises and gray seals as well as the acoustic tolerance limit of harbor porpoises to impulsive noise

  15. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each ring contains a plurality of scintillation detectors which are positioned around an inner circumference, with a septum ring extending inwardly from the inner circumference along each outer edge of each ring. An additional septum ring is positioned in the middle of each ring of detectors and parallel to the other septa rings, whereby the inward extent of all the septa rings may be reduced by one-half and the number of detectors required in each ring is reduced. The additional septa reduce the cost of the positron camera and improve its performance.

  16. The NEAT Camera Project

    Science.gov (United States)

    Newburn, Ray L., Jr.

    1995-01-01

    The NEAT (Near Earth Asteroid Tracking) camera system consists of a camera head with a 6.3 cm square 4096 x 4096 pixel CCD, fast electronics, and a Sun Sparc 20 data and control computer with dual CPUs, 256 Mbytes of memory, and 36 Gbytes of hard disk. The system was designed for optimum use with an Air Force GEODSS (Ground-based Electro-Optical Deep Space Surveillance) telescope. The GEODSS telescopes have 1 m f/2.15 objectives of the Ritchey-Chrétien type, designed originally for satellite tracking. Installation of NEAT began July 25 at the Air Force facility on Haleakala, a 3000 m peak on Maui in Hawaii.

  17. Gamma camera display system

    International Nuclear Information System (INIS)

    Stout, K.J.

    1976-01-01

    A gamma camera having an array of photomultipliers coupled via pulse shaping circuitry and a resistor weighting circuit to a display for forming an image of a radioactive subject is described. A linearizing circuit is coupled to the weighting circuit, the linearizing circuit including a nonlinear feedback circuit with diode coupling to the weighting circuit for linearizing the correspondence between points of the display and points of the subject. 4 Claims, 5 Drawing Figures

  18. Scanning gamma camera

    International Nuclear Information System (INIS)

    Engdahl, L.W.; Batter, J.F. Jr.; Stout, K.J.

    1977-01-01

    A scanning system for a gamma camera providing for the overlapping of adjacent scan paths is described. A collimator mask having tapered edges provides for a graduated reduction in intensity of radiation received by a detector thereof, the reduction in intensity being graduated in a direction normal to the scanning path to provide a blending of images of adjacent scan paths. 31 claims, 15 figures

  19. Methods and Algorithms for Detecting Objects in Video Files

    Directory of Open Access Journals (Sweden)

    Nguyen The Cuong

    2018-01-01

    Full Text Available Video files store motion pictures and sound, as in real life. In today's world, the need for automated processing of information in video files is increasing. Automated processing has a wide range of applications, including office/home surveillance cameras, traffic control, sports applications, remote object detection, and others. In particular, the detection and tracking of object movement in video files plays an important role. This article describes methods for detecting objects in video files. Today, this problem in the field of computer vision is being studied worldwide.
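Among the detection methods typically surveyed, the simplest is frame differencing: flag pixels whose intensity changes markedly between consecutive frames. A minimal sketch with a synthetic moving object (threshold and sizes are illustrative):

```python
import numpy as np

def detect_motion(prev_frame, frame, thresh=25):
    """Flag pixels whose intensity changed by more than `thresh`
    between consecutive frames; a minimal frame-differencing detector."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return diff > thresh

# Synthetic example: a bright 3x3 "object" appears on a dark background.
f0 = np.zeros((32, 32), dtype=np.uint8)
f1 = f0.copy()
f1[10:13, 10:13] = 200          # object present only in the second frame
mask = detect_motion(f0, f1)
```

Real systems build on this with background modeling (e.g. running averages or mixture models) to tolerate lighting changes and camera noise.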

  20. Physics Girl: Where Education meets Cat Videos

    Science.gov (United States)

    Cowern, Dianna

    YouTube is usually considered an entertainment medium for watching cat, gaming, and music videos. But educational channels have been gaining momentum on the platform, some garnering millions of subscribers and billions of views. The Physics Girl YouTube channel is an educational series with PBS Digital Studios created by Dianna Cowern. Using Physics Girl as an example, this talk will examine what it takes to start a short-form educational video series, including logistics and resources. One benefit of video is that every failure is documented on camera and can, and will, be used in this talk as a learning tool. We will look at the channel's demographic reach, discuss best practices for effective physics outreach, and survey how online media and technology can facilitate good and bad learning. The aim of this talk is to show how videos are a unique way to share science and enrich the learning experience, in and out of a classroom.

  1. Temporal compressive imaging for video

    Science.gov (United States)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, such as gunpowder blasting analysis and the observation of high-speed biological phenomena, imagers are required to have higher imaging speed. However, measuring high-speed video is a challenge to camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
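The TCI measurement model itself is compact: T high-speed frames are modulated by per-frame coded masks and summed into one snapshot, which the reconstruction algorithms then invert. A sketch of the forward model (random binary masks and a small frame size, for illustration):

```python
import numpy as np

def tci_measure(frames, masks):
    """Temporal compressive imaging forward model: T high-speed frames,
    each modulated element-wise by a coded mask, summed into one snapshot."""
    return np.sum(frames * masks, axis=0)

T, H, W = 8, 16, 16               # compression ratio T = 8, as in the abstract
rng = np.random.default_rng(0)
frames = rng.random((T, H, W))    # the high-speed video to be recovered
masks = rng.integers(0, 2, (T, H, W)).astype(float)
y = tci_measure(frames, masks)    # a single coded measurement
```

Recovering `frames` from `y` is an underdetermined inverse problem; TwIST and GMM-based priors are two ways of regularizing it.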

  2. High-rate wireless data communications: An underwater acoustic communications framework at the physical layer

    Directory of Open Access Journals (Sweden)

    Bessios Anthony G.

    1996-01-01

    Full Text Available A variety of signal processing functions are performed by underwater acoustic systems. These include: 1) detection, to determine the presence or absence of information signals in the presence of noise, or to decide which of a predetermined finite set of possible messages {m_i, i = 1, ..., M} the signal represents; 2) estimation of some parameter θ̂ associated with the received signal (i.e. range, depth, bearing angle, etc.); 3) classification and source identification; 4) dynamics tracking; 5) navigation (collision avoidance and terminal guidance); 6) countermeasures; and 7) communications. The focus of this paper is acoustic communications. There is a current global need to develop reliable wireless digital communications for the underwater environment, with sufficient performance and efficiency to substitute for costly wired systems. One possible goal is a wireless system implementation that ensures underwater terminal mobility. There is also a vital need to improve the performance of existing systems in terms of data rate, noise immunity, operational range, and power consumption, since, in practice, portable, high-speed, long-range, compact, low-power systems are desired. We concede the difficulties associated with acoustic systems and concentrate on the development of robust data transmission methods, anticipating the eventual need for real-time or near-real-time video transmission. An overview of the various detection techniques and the general statistical digital communication problem is given within a statistical decision theory framework. The theoretical formulation of the underwater acoustic data communications problem includes modeling of the stochastic channel to incorporate a variety of impairments and environmental uncertainties, and the proposal of new compensation strategies for an efficient and robust receiver design.

  3. Direct observation of microcavitation in underwater adhesion of mushroom-shaped adhesive microstructure

    Directory of Open Access Journals (Sweden)

    Lars Heepe

    2014-06-01

    Full Text Available In this work we report on experiments aimed at testing the cavitation hypothesis [Varenberg, M.; Gorb, S. J. R. Soc., Interface 2008, 5, 383–385] proposed to explain the strong underwater adhesion of mushroom-shaped adhesive microstructures (MSAMSs). For this purpose, we measured the pull-off forces of individual MSAMSs by detaching them from a glass substrate under different wetting conditions while simultaneously video recording the detachment behavior at very high temporal resolution (54,000–100,000 fps). Although microcavitation was observed during the detachment of individual MSAMSs, a consequence of water inclusions present at the glass–MSAMS contact interface subjected to negative pressure (tension), the pull-off forces were consistently lower, around 50% of those measured under ambient conditions. This result supports the assumption that the recently observed strong underwater adhesion of MSAMSs is due to an air layer between individual MSAMSs [Kizilkan, E.; Heepe, L.; Gorb, S. N. Underwater adhesion of mushroom-shaped adhesive microstructure: An air-entrapment effect. In Biological and biomimetic adhesives: Challenges and opportunities; Santos, R.; Aldred, N.; Gorb, S. N.; Flammang, P., Eds.; The Royal Society of Chemistry: Cambridge, U.K., 2013; pp 65–71] rather than cavitation. These results, obtained through high-speed visualisation of the contact behavior at nanoscale-confined interfaces, allow for a microscopic understanding of the underwater adhesion of MSAMSs and may aid in the further development of artificial adhesive microstructures for applications in predominantly liquid environments.

  4. Neutral-beam performance analysis using a CCD camera

    International Nuclear Information System (INIS)

    Hill, D.N.; Allen, S.L.; Pincosy, P.A.

    1986-01-01

    We have developed an optical diagnostic system suitable for characterizing the performance of energetic neutral beams. An absolutely calibrated CCD video camera is used to view the neutral beam as it passes through a relatively high-pressure (10⁻⁵ Torr) region outside the neutralizer: collisional excitation of the fast deuterium atoms produces Hα emission (λ = 6561 Å) that is proportional to the local atomic current density, independent of the species mix of accelerated ions over the energy range 5 to 20 keV. Digital processing of the video signal provides profile and aiming information for beam optimization. 6 refs., 3 figs

  5. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Design to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  6. Bring your own camera to the trap: An inexpensive, versatile, and portable triggering system tested on wild hummingbirds.

    Science.gov (United States)

    Rico-Guevara, Alejandro; Mickley, James

    2017-07-01

    The study of animals in the wild offers opportunities to collect relevant information on their natural behavior and abilities to perform ecologically relevant tasks. However, it also poses challenges such as accounting for observer effects, human sensory limitations, and the time intensiveness of this type of research. To meet these challenges, field biologists have deployed camera traps to remotely record animal behavior in the wild. Despite their ubiquity in research, many commercial camera traps have limitations, and the species and behavior of interest may present unique challenges. For example, no camera traps support high-speed video recording. We present a new and inexpensive camera trap system that increases versatility by separating the camera from the triggering mechanism. Our system design can pair with virtually any camera and allows for independent positioning of a variety of sensors, all while being low-cost, lightweight, weatherproof, and energy efficient. By using our specialized trigger and customized sensor configurations, many limitations of commercial camera traps can be overcome. We use this system to study hummingbird feeding behavior using high-speed video cameras to capture fast movements and multiple sensors placed away from the camera to detect small body sizes. While designed for hummingbirds, our application can be extended to any system where specialized camera or sensor features are required, or commercial camera traps are cost-prohibitive, allowing camera trap use in more research avenues and by more researchers.

  7. Computationally efficient approach to three-dimensional point cloud reconstruction from video image sequences

    Science.gov (United States)

    Chang, Chih-Hsiang; Kehtarnavaz, Nasser

    2014-05-01

    This paper presents a computationally efficient solution to three-dimensional point cloud reconstruction from video image sequences that are captured by a hand-held camera. Our solution starts with a frame selection step to remove frames that cause physically nonrealizable reconstruction outcomes. Then, a computationally efficient approach for obtaining the absolute camera pose is introduced based on pairwise relative camera poses. This is followed by a computationally efficient rotation registration to update the absolute camera pose. The reconstruction results obtained based on actual video sequences indicate lower computation times and lower reprojection errors of the introduced approach compared to the conventional approach.
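The step of turning pairwise relative camera poses into absolute poses amounts to composing rigid transforms along the frame sequence. The snippet below is a minimal sketch of that composition, not the paper's scheme, assuming each relative pose (R, t) maps coordinates of frame i-1 into frame i:

```python
import numpy as np

def accumulate_poses(relative_poses):
    """Chain pairwise relative poses (R_rel, t_rel), each mapping frame
    i-1 coordinates to frame i, into absolute poses w.r.t. frame 0:
    x_i = R_rel @ x_{i-1} + t_rel  =>  R_abs' = R_rel @ R_abs,
                                       t_abs' = R_rel @ t_abs + t_rel."""
    R_abs, t_abs = np.eye(3), np.zeros(3)
    poses = [(R_abs.copy(), t_abs.copy())]
    for R_rel, t_rel in relative_poses:
        R_abs = R_rel @ R_abs
        t_abs = R_rel @ t_abs + t_rel
        poses.append((R_abs.copy(), t_abs.copy()))
    return poses
```

In practice the accumulated rotation drifts with each composition, which is why the paper follows this step with a rotation registration that updates the absolute camera pose.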

  8. Personal video retrieval and browsing for mobile users

    Science.gov (United States)

    Sachinopoulou, Anna; Makela, Satu-Marja; Jarvinen, Sari; Westermann, Utz; Peltola, Johannes; Pietarila, Paavo

    2005-03-01

    The latest camera-equipped mobile phones and faster cellular networks have increased the interest in mobile multimedia services. But for content consumption, delivery and creation, the limited capabilities of mobile terminals require special attention. This paper introduces the Candela platform, an infrastructure that allows the creation, storage and retrieval of home videos with special consideration of mobile terminals. Candela features a J2ME-based video recording and annotation tool which permits the creation and annotation of home videos on mobile phones. It offers an MPEG-7-based home video database which can be queried in an intelligent and user-oriented manner exploiting users' personal domain ontologies. The platform employs terminal profiling techniques to deliver video retrieval user interfaces that personalize the search results according to the user's preferences and terminal capabilities, facilitating effective retrieval of home videos via various mobile and fixed terminals. For video playout, Candela features a meta player, a video player augmented by an interactive metadata display which can be used for fast content-based in-video browsing, helping to avoid the consumption and streaming of uninteresting video parts, thus reducing network load. Thereby, Candela forms a comprehensive video management platform for mobile phones, fully covering mobile home video management from acquisition to delivery.

  9. Improving the Quality of Color Colonoscopy Videos

    Directory of Open Access Journals (Sweden)

    Dahyot Rozenn

    2008-01-01

    Full Text Available Abstract Colonoscopy is currently one of the best methods to detect colorectal cancer. Nowadays, one of the widely used colonoscopes has a monochrome chipset that records the color components successively at 60 Hz, and these components are merged into one color video stream. Misalignments of the channels occur each time the camera moves, and this artefact impedes both online visual inspection by doctors and offline computer analysis of the image data. We propose to correct this artefact by first equalizing the color channels and then performing robust camera motion estimation and compensation.
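One standard way to realign color channels after camera motion is FFT-based phase correlation. This is a simplified stand-in for the robust motion estimation the authors describe, valid only for pure integer translation:

```python
import numpy as np

def estimate_shift(ref, moved):
    """Return the integer (dy, dx) roll that realigns `moved` with `ref`,
    found as the peak of the phase correlation surface."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(moved))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def align_channel(ref, moved):
    """Undo a translational misalignment of one color channel."""
    dy, dx = estimate_shift(ref, moved)
    return np.roll(moved, shift=(dy, dx), axis=(0, 1))
```

Real colonoscopy frames need sub-pixel, rotation- and deformation-aware registration; this sketch only illustrates the translational core of channel realignment.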

  10. REAL TIME SPEED ESTIMATION FROM MONOCULAR VIDEO

    Directory of Open Access Journals (Sweden)

    M. S. Temiz

    2012-07-01

    Full Text Available In this paper, detailed studies toward a real-time system for traffic-flow surveillance, which uses monocular video cameras to estimate vehicle speeds for safe travelling, are presented. We assume that the studied road segment is planar and straight, that the camera is tilted downward from a bridge, and that the length of one line segment in the image is known. In order to estimate the speed of a moving vehicle from a video camera, rectification of the video images is performed to eliminate perspective effects, and then the region of interest (ROI) is determined for tracking the vehicles. Velocity vectors of a sufficient number of reference points are identified on the image of the vehicle in each video frame. For this purpose, a sufficient number of points on the vehicle is selected, and these points must be accurately tracked across at least two successive video frames. In the second step, using the displacement vectors of the tracked points and the elapsed time, the velocity vectors of those points are computed. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space are then transformed to object space to find their absolute values. The accuracy of the estimated speed is approximately ±1–2 km/h. To solve the real-time speed estimation problem, the authors have written a software system in the C++ programming language. This software system has been used for all of the computations and test applications.
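After rectification, the pixel-to-speed conversion described above reduces to scaling a pixel displacement by a known metres-per-pixel factor and the elapsed time. A minimal sketch with illustrative parameter names, assuming a fixed ground-plane scale obtained from the known line segment:

```python
def speed_kmh(track_px, fps, metres_per_pixel):
    """Speed estimate from pixel positions of one tracked point on
    successive rectified frames (planar road, constant scale assumed)."""
    if len(track_px) < 2:
        raise ValueError("need at least two tracked positions")
    dt = (len(track_px) - 1) / fps                 # elapsed time in seconds
    dx = track_px[-1][0] - track_px[0][0]
    dy = track_px[-1][1] - track_px[0][1]
    dist_m = (dx ** 2 + dy ** 2) ** 0.5 * metres_per_pixel
    return dist_m / dt * 3.6                       # m/s -> km/h
```

For example, a point moving 20 pixels over three 25 fps frames at 0.05 m/pixel corresponds to 45 km/h.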

  11. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba long ago began manufacturing black-and-white radiation-resistant camera tubes employing non-browning faceplate glass for ITV cameras used in nuclear power plants. Now, in response to the increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented herein are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  12. Stability analysis of hybrid-driven underwater glider

    Science.gov (United States)

    Niu, Wen-dong; Wang, Shu-xin; Wang, Yan-hui; Song, Yang; Zhu, Ya-qiang

    2017-10-01

    Hybrid-driven underwater gliders are a new type of unmanned underwater vehicle, combining the advantages of autonomous underwater vehicles and traditional underwater gliders. Autonomous underwater vehicles have good maneuverability and can travel at high speed, while traditional underwater gliders offer low power consumption, long voyages, long endurance, and good stealth characteristics. Hybrid-driven underwater gliders can realize variable motion profiles using their own buoyancy-driven and propeller propulsion systems. The stability of the mechanical system determines the performance of the system. In this paper, the Petrel-II hybrid-driven underwater glider developed by Tianjin University is selected as the research object, and the stability of a hybrid-driven underwater glider jointly controlled by buoyancy and propeller is analyzed and verified. The dimensionless equations of the hybrid-driven underwater glider are obtained for the case when the propeller is working. The steady speed and steady glide path angle under steady-state motion are then derived. The steady-state operating conditions can be calculated when the hybrid-driven underwater glider reaches the desired steady-state motion, and these conditions are relatively conservative at the lower bound of the velocity range compared with the velocity range derived from the composite Lyapunov function method. Simulation analysis has been conducted using the calculated hydrodynamic coefficients of the Petrel-II hybrid-driven underwater glider. In addition, results of the field trials conducted in the South China Sea and the Danjiangkou Reservoir of China are presented to illustrate the validity of the analysis and simulation, and to show the feasibility of the composite Lyapunov function method, which verifies the stability of the Petrel-II hybrid-driven underwater glider.

  13. The Effect of Nano-Aluminum Powder on the Characteristics of RDX-Based Aluminized Explosives in Underwater Close-Field Explosion

    Directory of Open Access Journals (Sweden)

    Junting Yin

    2017-01-01

    Full Text Available In order to investigate the effect of nano-aluminum powder on the characteristics of RDX-based aluminized explosives in underwater close-field explosions, scanning photographs along the radial direction of the charges were obtained with a high-speed scanning camera. Photographs of the underwater explosions of two different aluminized explosives were analyzed; the shock wave curves and the expansion curves of the detonation products were obtained. Furthermore, the variation of shock wave propagation velocity, shock front pressure, and detonation-product expansion for the two aluminized explosives was investigated, and the parameters of the two explosives were compared. The results show that the aluminized explosive containing nano-aluminum has a lower initial shock wave propagation velocity and shock front pressure than the aluminized explosive without nano-aluminum, and its energy attenuates at a lower rate.

  14. Multi-spectral CCD camera system for ocean water color and seacoast observation

    Science.gov (United States)

    Zhu, Min; Chen, Shiping; Wu, Yanlin; Huang, Qiaolin; Jin, Weiqi

    2001-10-01

    One of the earth observing instruments on the HY-1 satellite, which will be launched in 2001, the multi-spectral CCD camera system, is developed by the Beijing Institute of Space Mechanics & Electricity (BISME), Chinese Academy of Space Technology (CAST). In a 798 km orbit, the system can provide images with 250 m ground resolution and a swath of 500 km. It is mainly used for coastal zone dynamic mapping and ocean water color monitoring, which include the pollution of offshore and coastal zones, plant cover, water color, ice, underwater terrain, suspended sediment, mudflat, soil and vapor gross. The multi-spectral camera system is composed of four monocolor CCD cameras, which are line-array-based, 'push-broom' scanning cameras, each responding to one of four spectral bands. The camera system adopts view-field registration; that is, each camera scans the same region at the same moment. Each camera contains optics, a focal plane assembly, electrical circuits, an installation structure, a calibration system, thermal control and so on. The primary features of the camera system are: (1) offset of the central wavelength is better than 5 nm; (2) degree of polarization is less than 0.5%; (3) signal-to-noise ratio is about 1000; (4) dynamic range is better than 2000:1; (5) registration precision is better than 0.3 pixel; (6) quantization is 12 bit.

  15. Thrust producing mechanisms in ray-inspired underwater vehicle propulsion

    Directory of Open Access Journals (Sweden)

    Geng Liu

    2015-01-01

    Full Text Available This paper describes a computational study of the hydrodynamics of a ray-inspired underwater vehicle conducted concurrently with experimental measurements. High-resolution stereo-videos of the vehicle’s fin motions during steady swimming are obtained and used as a foundation for developing a high fidelity geometrical model of the oscillatory fin. A Cartesian grid based immersed boundary solver is used to examine the flow fields produced due to these complex artificial pectoral fin kinematics. Simulations are carried out at a smaller Reynolds number in order to examine the hydrodynamic performance and understand the resultant wake topology. Results show that the vehicle’s fins experience large spanwise inflexion of the distal part as well as moderate chordwise pitching during the oscillatory motion. Most thrust force is generated by the distal part of the fin, and it is highly correlated with the spanwise inflexion. Two sets of inter-connected vortex rings are observed in the wake right behind each fin. Those vortex rings induce strong backward flow jets which are mainly responsible for the fin thrust generation.

  16. Mechanical model of the ultrafast underwater trap of Utricularia

    Science.gov (United States)

    Joyeux, Marc; Vincent, Olivier; Marmottant, Philippe

    2011-02-01

    The underwater traps of the carnivorous plants of the Utricularia species catch their prey through the repetition of an “active slow deflation followed by passive fast suction” sequence. In this paper, we propose a mechanical model that describes both phases and strongly supports the hypothesis that the trap door acts as a flexible valve that buckles under the combined effects of pressure forces and the mechanical stimulation of trigger hairs, and not as a panel articulated on hinges. This model combines two different approaches, namely (i) the description of thin membranes as triangle meshes with strain and curvature energy, and (ii) the molecular dynamics approach, which consists of computing the time evolution of the position of each vertex of the mesh according to Langevin equations. The only free parameter in the expression of the elastic energy is the Young's modulus E of the membranes. The values for this parameter are unequivocally obtained by requiring that the trap model fires, like real traps, when the pressure difference between the outside and the inside of the trap reaches about 15 kPa. Among other results, our simulations show that, for a pressure difference slightly larger than the critical one, the door buckles, slides on the threshold, and finally swings wide open, in excellent agreement with the sequence observed in high-speed videos.
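The time-evolution scheme described above, with vertex positions updated according to Langevin equations, can be illustrated by a minimal overdamped Langevin integrator. This is a generic sketch, not the authors' implementation; `grad_E` stands for the gradient of the strain-plus-curvature energy of the triangle mesh:

```python
import numpy as np

def langevin_step(x, grad_E, dt=1e-4, gamma=1.0, kT=0.0, rng=None):
    """One overdamped Langevin update for an (n_vertices, 3) position array:
    x <- x - (dt/gamma) * dE/dx + sqrt(2*kT*dt/gamma) * Gaussian noise."""
    rng = rng or np.random.default_rng()
    noise = rng.standard_normal(x.shape)
    return x - (dt / gamma) * grad_E(x) + np.sqrt(2.0 * kT * dt / gamma) * noise
```

With kT = 0 this reduces to gradient descent on the elastic energy; the thermal term lets the simulated door explore buckled configurations.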

  17. Gaze-enabled Egocentric Video Summarization via Constrained Submodular Maximization.

    Science.gov (United States)

    Xu, Jia; Mukherjee, Lopamudra; Li, Yin; Warner, Jamieson; Rehg, James M; Singh, Vikas

    2015-06-01

    With the proliferation of wearable cameras, the number of videos of users documenting their personal lives using such devices is rapidly increasing. Since such videos may span hours, there is an important need for mechanisms that represent the information content in a compact form (i.e., shorter videos which are more easily browsable/sharable). Motivated by these applications, this paper focuses on the problem of egocentric video summarization. Such videos are usually continuous with significant camera shake and other quality issues. Because of these reasons, there is growing consensus that direct application of standard video summarization tools to such data yields unsatisfactory performance. In this paper, we demonstrate that using gaze tracking information (such as fixation and saccade) significantly helps the summarization task. It allows meaningful comparison of different image frames and enables deriving personalized summaries (gaze provides a sense of the camera wearer's intent). We formulate a summarization model which captures common-sense properties of a good summary, and show that it can be solved as a submodular function maximization with partition matroid constraints, opening the door to a rich body of work from combinatorial optimization. We evaluate our approach on a new gaze-enabled egocentric video dataset (over 15 hours), which will be a valuable standalone resource.
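Submodular maximization under a partition matroid admits a simple greedy algorithm with a well-known approximation guarantee. The sketch below is a generic greedy, not the authors' solver: `gain(S, i)` is any monotone submodular marginal-gain oracle (e.g., how much new visual content frame i adds to summary S), and the matroid caps how many items each part may contribute:

```python
def greedy_partition_matroid(items, parts, capacity, gain):
    """Greedy maximization of a monotone submodular function under a
    partition matroid: at most capacity[p] items from each part p.
    parts[i] gives the part of item i; gain(S, i) is the marginal
    value of adding item i to the current selection S."""
    selected, used = [], {}
    remaining = set(items)
    while remaining:
        best, best_gain = None, 0.0
        for i in remaining:
            if used.get(parts[i], 0) >= capacity[parts[i]]:
                continue  # this part's quota is exhausted
            g = gain(selected, i)
            if g > best_gain:
                best, best_gain = i, g
        if best is None:
            break  # no feasible item adds value
        selected.append(best)
        used[parts[best]] = used.get(parts[best], 0) + 1
        remaining.discard(best)
    return selected
```

For video summarization, the items would be candidate frames, the parts temporal segments, and the capacities keep the summary from over-sampling one stretch of the recording.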

  18. Human recognition at a distance in video

    CERN Document Server

    Bhanu, Bir

    2010-01-01

    Most biometric systems employed for human recognition require physical contact with, or close proximity to, a cooperative subject. Far more challenging is the ability to reliably recognize individuals at a distance, when viewed from an arbitrary angle under real-world environmental conditions. Gait and face data are the two biometrics that can be most easily captured from a distance using a video camera. This comprehensive and logically organized text/reference addresses the fundamental problems associated with gait and face-based human recognition, from color and infrared video data that are

  19. Astronaut Susan J. Helms Mounts a Video Camera in Zarya

    Science.gov (United States)

    2001-01-01

    Astronaut Susan J. Helms, Expedition Two flight engineer, mounts a video camera onto a bracket in the Russian Zarya or Functional Cargo Block (FGB) of the International Space Station (ISS). Launched by a Russian Proton rocket from the Baikonur Cosmodrome on November 20, 1998, the United States-funded and Russian-built Zarya was the first element of the ISS, followed by the U.S. Unity Node.

  20. Interactive Augmentation of Live Images using a HDR Stereo Camera

    OpenAIRE

    Korn, Matthias; Stange, Maik; von Arb, Andreas; Blum, Lisa; Kreil, Michael; Kunze, Kathrin-Jennifer; Anhenn, Jens; Wallrath, Timo; Grosch, Thorsten

    2007-01-01

    Adding virtual objects to real environments plays an important role in todays computer graphics: Typical examples are virtual furniture in a real room and virtual characters in real movies. For a believable appearance, consistent lighting of the virtual objects is required. We present an augmented reality system that displays virtual objects with consistent illumination and shadows in the image of a simple webcam. We use two high dynamic range video cameras with fisheye lenses permanently rec...

  1. 13 point video tape quality guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Gaunt, R.

    1997-05-01

    Until high-definition television (ATV) arrives in the U.S., we must still contend with the National Television Systems Committee (NTSC) video standard (or PAL or SECAM, depending on your country). NTSC, a 40-year-old standard designed for transmission of color video camera images over a small bandwidth, is not well suited for the sharp, full-color images that today's computers are capable of producing. PAL and SECAM also suffer from many of NTSC's problems, but to varying degrees. Video professionals, when working with computer graphic (CG) images, use two monitors: a computer monitor for producing CGs and an NTSC monitor to see how a CG will look on video. More often than not, the NTSC image will differ significantly from the CG image, and outputting to NTSC as the artist works enables him or her to see the image as others will see it. Below are thirteen guidelines designed to increase the quality of computer graphics recorded onto video tape. Viewing your work in NTSC and attempting to follow the tips below will enable you to create higher quality videos. No video is perfect, so don't expect to abide by every guideline every time.
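Several classic guidelines of this kind boil down to keeping pixel levels inside the broadcast-safe range, since full-swing computer RGB (0-255) exceeds what NTSC encodes cleanly. As a hedged illustration (the document's thirteen guidelines are not reproduced here, and 16-235 is the conventional studio-swing range rather than a figure taken from this text):

```python
def to_broadcast_safe(rgb, lo=16, hi=235):
    """Clamp 8-bit RGB values into the nominal studio-swing video range
    to avoid clipped whites and unstable saturated colors on NTSC."""
    return [min(max(int(c), lo), hi) for c in rgb]
```

A CG artist would apply such a clamp (or the equivalent legalizer filter) before laying a frame off to tape.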

  2. Distributed embedded smart cameras architectures, design and applications

    CERN Document Server

    Velipasalar, Senem

    2014-01-01

    This publication addresses distributed embedded smart cameras: cameras that perform onboard analysis and collaborate with other cameras. This book provides the material required to better understand the architectural design challenges of embedded smart camera systems, the hardware/software ecosystem, the design approach for, and applications of distributed smart cameras together with the state-of-the-art algorithms. The authors concentrate on the architecture, hardware/software design, and realization of smart camera networks from applications to architectures, in particular in the embedded and mobile domains.
    • Examines energy issues related to wireless communication, such as decreasing energy consumption to increase battery life
    • Discusses processing large volumes of video data in an embedded environment in real time
    • Covers design of realistic applications of distributed and embedded smart...

  3. Underwater videography and photography in Gulf of Kachchh. Sponsored by Gujarat Ecological Society, Vadodara, Gujarat

    Digital Repository Service at National Institute of Oceanography (India)

    Marine Archaeology Centre (MAC) has been carrying out underwater explorations and excavations of ancient ports and sunken shipwrecks to preserve underwater cultural heritage. MAC has the infrastructure facility to carry out underwater investigations...

  4. Hydrodynamic Coefficients Identification and Experimental Investigation for an Underwater Vehicle

    Directory of Open Access Journals (Sweden)

    Shaorong XIE

    2014-02-01

    Full Text Available Hydrodynamic coefficients are the foundation of unmanned underwater vehicle modeling and controller design. In order to reduce identification complexity and acquire the hydrodynamic coefficients necessary for controller design, the motion of the unmanned underwater vehicle was separated into vertical-motion and horizontal-motion models. Hydrodynamic coefficients were regarded as parameters mapping input forces and moments to output velocities and accelerations of the unmanned underwater vehicle. The motion models of the unmanned underwater vehicle are nonlinear, and a Genetic Algorithm was adopted to identify the hydrodynamic coefficients. To verify the identification quality, the velocities and accelerations of the unmanned underwater vehicle were measured using an inertial sensor under the same conditions as the Genetic Algorithm identification. The similarity between the measured velocity and acceleration curves and those identified by the Genetic Algorithm was used as the optimization criterion. It is found that the curve similarity was high and the identified hydrodynamic coefficients of the unmanned underwater vehicle matched the measured motion states well.
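The identification idea (evolve candidate coefficients until the simulated motion matches the measured curves) can be sketched with a toy one-degree-of-freedom surge model. Everything here is an assumption for illustration, not the paper's setup: the model dv/dt = a*u - b*v*|v|, the population size, and the mutation scale are all invented.

```python
import numpy as np

def simulate(coeffs, u, dt=0.1):
    """Forward-simulate a toy 1-DOF surge model
    dv/dt = a*u - b*v*|v|   (a: thrust gain, b: quadratic drag)."""
    a, b = coeffs
    v, out = 0.0, []
    for ui in u:
        v += dt * (a * ui - b * v * abs(v))
        out.append(v)
    return np.array(out)

def identify_ga(v_meas, u, pop=40, gens=60, rng=None):
    """Tiny elitist GA: evolve (a, b) to minimize the mean squared error
    between the simulated and the measured velocity curve."""
    rng = rng or np.random.default_rng(0)
    P = rng.uniform(0.0, 2.0, size=(pop, 2))              # initial population
    for _ in range(gens):
        err = np.array([np.mean((simulate(c, u) - v_meas) ** 2) for c in P])
        elite = P[np.argsort(err)[: pop // 4]]            # keep best quarter
        kids = elite[rng.integers(0, len(elite), pop - len(elite))]
        kids = kids + rng.normal(0.0, 0.05, kids.shape)   # Gaussian mutation
        P = np.vstack([elite, kids])
    err = np.array([np.mean((simulate(c, u) - v_meas) ** 2) for c in P])
    return P[np.argmin(err)]
```

The fitness is the mean squared error between simulated and measured velocity curves, a crude form of the paper's curve-similarity criterion, using elitist selection and Gaussian mutation (no crossover, for brevity).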

  5. Delay Tolerance in Underwater Wireless Communications: A Routing Perspective

    Directory of Open Access Journals (Sweden)

    Safdar Hussain Bouk

    2016-01-01

    Full Text Available Similar to terrestrial networks, underwater wireless networks (UWNs) also aid several critical tasks, including coastal surveillance, underwater pollution detection, and other maritime applications. Currently, once underwater sensor nodes are deployed at different depths of the sea, it is nearly impossible or very expensive to reconfigure their hardware, for example, to replace batteries. Taking this issue into account, a considerable amount of research has been carried out to ensure minimum energy costs and reliable communication between underwater nodes and base stations. As a result, several different network protocols have been proposed for UWNs, covering the MAC, PHY, transport, and routing layers. Recently, a new paradigm was introduced, claiming that the intermittent nature of the acoustic channel and signal calls for delay-tolerant routing schemes for UWNs, known as underwater delay tolerant networks. In this paper, we provide a comprehensive survey of underwater routing protocols, with emphasis on the limitations, challenges, and future open issues in the context of delay-tolerant network routing.

  6. Holografisk Video

    OpenAIRE

    Waldemarsson, Lars-Åke

    2006-01-01

    This thesis project is based on an article describing a method for creating holographic video. The aim of the work is to reproduce this method. The method builds on projecting holograms using parts from a projector, a laser, and a few lenses. First, a literature study is carried out to gain an understanding of how the method works. It covers how the eye perceives depth and the different types of displays that exist for rendering three-dimensional holographic images. It then describes the difference...

  7. Video Meteor Fluxes

    Science.gov (United States)

    Campbell-Brown, M. D.; Braid, D.

    2011-01-01

    estimate the flux (Love & Brownlee, 1993); here the physical area of the detector is well known, but the masses depend strongly on the unknown velocity distribution. In the same size range, Thomas & Netherway (1989) used the narrow-beam radar at Jindalee to calculate the flux of sporadics. In between these very large and very small sizes, a number of video and photographic observations were reduced by Ceplecha (2001). These fluxes were calculated (details are given in Ceplecha, 1988) taking the Halliday et al. (1984) MORP fireball fluxes, slightly corrected in mass, as a calibration, and adjusting the flux of small cameras to overlap with the number/mass relation from that work.

  8. Impact of New Camera Technologies on Discoveries in Cell Biology.

    Science.gov (United States)

    Stuurman, Nico; Vale, Ronald D

    2016-08-01

    New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, when he figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day by a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts in light microscopy. © 2016 Marine Biological Laboratory.

  9. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    Science.gov (United States)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have attracted increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted their development and led to several commercially available, ready-to-use cameras. Beyond the popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery is very beneficial for certain applications. The paper first presents some fundamentals on the design and history of plenoptic cameras and describes depth determination from plenoptic camera image data. It then presents an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these large errors, a plenoptic camera may nevertheless be considered a valid option for real-time robotics applications like autonomous driving or unmanned aerial and underwater vehicles, where accuracy requirements decrease with distance.
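The deterioration of depth accuracy with range follows from first-order triangulation error propagation, which also applies to the narrow effective baseline of a plenoptic camera. The formula and the numbers below are a generic illustration, not the paper's calibration values:

```python
def depth_error_m(z_m, baseline_m, focal_px, disparity_err_px):
    """First-order triangulation depth error: dZ ~ Z^2 * d_disp / (f * B),
    showing why range error grows quadratically with distance Z."""
    return z_m ** 2 * disparity_err_px / (focal_px * baseline_m)
```

With an assumed f = 1000 px, B = 0.1 m and a 0.2 px disparity error, the error is 5 m at 50 m range but 20 m at 100 m, i.e., four times larger at twice the distance.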

  10. Virtual displays for 360-degree video

    Science.gov (United States)

    Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.

    2012-03-01

    In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.

  11. An Evaluation of Potential Operating Systems for Autonomous Underwater Vehicles

    Science.gov (United States)

    2013-02-01

    remote control of such vehicles requires the use of a tether, limiting the vehicle’s range; however operating underwater vehicles autonomously requires... underwater environment, where many platforms are still reliant upon an umbilical tether for power and high bandwidth communications.

  12. On the Performance of the Underwater Acoustic Sensor Networks

    Science.gov (United States)

    2015-05-01

    waves for Underwater Wireless Communication (UWC); radio waves, optical waves, and acoustic waves are a few to name. Radio waves are good for extra low... Keywords: underwater communication, wireless sensors, mutual information. Cotae, “On the Performance of the Underwater Wireless Communication Sensor Networks: Work in Progress,” ASEE Mid-Atlantic Fall 2014 Conference

  13. Underwater laser beam welding of Alloy 690

    International Nuclear Information System (INIS)

    Hino, Takehisa; Tamura, Masataka; Kono, Wataru; Kawano, Shohei; Yoda, Masaki

    2009-01-01

    Stress Corrosion Cracking (SCC) has been reported at Alloy 600 welds between nozzles and safe-ends in Pressurized Water Reactor (PWR) plants. Alloy 690, which has a higher chromium content than Alloy 600, has been applied as cladding on Alloy 600 welds to repair SCC-damaged areas. Toshiba has developed an Underwater Laser Beam Welding technique. This method can be conducted without draining, so that the repair period and the radiation exposure during the repair can be dramatically decreased. In some old PWRs, high-sulfur stainless steel is used as the material for this section. It has a high susceptibility to weld cracks. Therefore, the optimum welding condition for Alloy 690 on high-sulfur stainless steel was investigated with our Underwater Laser Beam Welding unit. A good cladding layer, without any cracks, porosity or lack of fusion, could be obtained. (author)

  14. Cymbal and BB underwater transducers and arrays

    Energy Technology Data Exchange (ETDEWEB)

    Newnham, R.E.; Zhang, J.; Alkoy, S.; Meyer, R.; Hughes, W.J.; Hladky-Hennion, A.C.; Cochran, J.; Markley, D. [Materials Research Laboratory, Penn State University, University Park, PA 16802 (United States)

    2002-09-01

    The cymbal is a miniaturized class V flextensional transducer that was developed for use as a shallow water sound projector and receiver. Single elements are characterized by high Q, low efficiency, and medium power output capability. Its low cost and thin profile allow the transducer to be assembled into large flexible arrays. Efforts were made to model both single elements and arrays using the ATILA code and the integral equation formulation (EQI). Millimeter-size microprobe hydrophones (BBs) have been designed and fabricated from miniature piezoelectric hollow ceramic spheres for underwater applications such as mapping acoustic fields of projectors, and flow noise sensors for complex underwater structures. Green spheres are prepared from soft lead zirconate titanate powders using a coaxial nozzle slurry process. A compact hydrophone with a radially-poled sphere is investigated using inside and outside electrodes. Characterization of these hydrophones is done through measurement of hydrostatic piezoelectric charge coefficients, free field voltage sensitivities and directivity beam patterns. (orig.)

  15. Underwater noise modelling for environmental impact assessment

    Energy Technology Data Exchange (ETDEWEB)

    Farcas, Adrian [Centre for Environment, Fisheries and Aquaculture Science (Cefas), Pakefield Road, Lowestoft, NR33 0HT (United Kingdom); Thompson, Paul M. [Lighthouse Field Station, Institute of Biological and Environmental Sciences, University of Aberdeen, Cromarty IV11 8YL (United Kingdom); Merchant, Nathan D., E-mail: nathan.merchant@cefas.co.uk [Centre for Environment, Fisheries and Aquaculture Science (Cefas), Pakefield Road, Lowestoft, NR33 0HT (United Kingdom)

    2016-02-15

    Assessment of underwater noise is increasingly required by regulators of development projects in marine and freshwater habitats, and noise pollution can be a constraining factor in the consenting process. Noise levels arising from the proposed activity are modelled and the potential impact on species of interest within the affected area is then evaluated. Although there is considerable uncertainty in the relationship between noise levels and impacts on aquatic species, the science underlying noise modelling is well understood. Nevertheless, many environmental impact assessments (EIAs) do not reflect best practice, and stakeholders and decision makers in the EIA process are often unfamiliar with the concepts and terminology that are integral to interpreting noise exposure predictions. In this paper, we review the process of underwater noise modelling and explore the factors affecting predictions of noise exposure. Finally, we illustrate the consequences of errors and uncertainties in noise modelling, and discuss future research needs to reduce uncertainty in noise assessments.
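    The noise-exposure predictions discussed here rest on combining a geometric spreading term with frequency-dependent absorption. A minimal sketch, using spherical spreading and Thorp's empirical absorption formula (a common textbook choice, not necessarily the specific models assessed in the paper):

    ```python
    import math

    def thorp_absorption_db_per_km(f_khz: float) -> float:
        """Thorp's empirical seawater absorption coefficient (dB/km), f in kHz."""
        f2 = f_khz ** 2
        return (0.11 * f2 / (1.0 + f2)
                + 44.0 * f2 / (4100.0 + f2)
                + 2.75e-4 * f2
                + 0.003)

    def received_level(source_db: float, range_m: float, f_khz: float) -> float:
        """Received level under spherical spreading plus Thorp absorption."""
        spreading = 20.0 * math.log10(range_m)        # spherical spreading loss
        absorption = thorp_absorption_db_per_km(f_khz) * range_m / 1000.0
        return source_db - spreading - absorption

    # Example: a 180 dB re 1 uPa @ 1 m source at 1 kHz, received 1 km away.
    print(round(received_level(180.0, 1000.0, 1.0), 1))   # ~119.9 dB re 1 uPa
    ```

    At short ranges the spreading term dominates; absorption matters mainly at high frequencies and long ranges, which is one reason the choice of propagation model affects the predicted impact area.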

  16. Ocean Research Enabled by Underwater Gliders

    Science.gov (United States)

    Rudnick, Daniel L.

    2016-01-01

    Underwater gliders are autonomous underwater vehicles that profile vertically by changing their buoyancy and use wings to move horizontally. Gliders are useful for sustained observation at relatively fine horizontal scales, especially to connect the coastal and open ocean. In this review, research topics are grouped by time and length scales. Large-scale topics addressed include the eastern and western boundary currents and the regional effects of climate variability. The accessibility of horizontal length scales of order 1 km allows investigation of mesoscale and submesoscale features such as fronts and eddies. Because the submesoscales dominate vertical fluxes in the ocean, gliders have found application in studies of biogeochemical processes. At the finest scales, gliders have been used to measure internal waves and turbulent dissipation. The review summarizes gliders' achievements to date and assesses their future in ocean observation.
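    The buoyancy-driven profiling described above can be illustrated with a toy kinematic sketch (the vertical speed and glide angle are illustrative values, not figures from the review):

    ```python
    import math

    # A glider moving at vertical speed w along a glide path inclined at
    # `glide_angle_deg` from horizontal covers ground at u = w / tan(angle).
    def horizontal_speed(w_m_per_s: float, glide_angle_deg: float) -> float:
        return w_m_per_s / math.tan(math.radians(glide_angle_deg))

    # A dive to 1000 m and back at 0.1 m/s vertical speed takes ~5.6 h and,
    # at a 20 degree glide angle, advances the glider several kilometres,
    # setting the ~km horizontal resolution mentioned in the abstract.
    u = horizontal_speed(0.1, 20.0)
    dive_time_s = 2 * 1000.0 / 0.1
    along_track_km = u * dive_time_s / 1000.0
    print(round(u, 3), round(along_track_km, 1))
    ```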

  17. CARVE: In-flight Videos from the CARVE Aircraft, Alaska, 2012-2015

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset contains videos captured by a camera mounted on the CARVE aircraft during airborne campaigns over the Alaskan and Canadian Arctic for the Carbon in...

  18. An Automatic Video Meteor Observation Using UFO Capture at the Showa Station

    Science.gov (United States)

    Fujiwara, Y.; Nakamura, T.; Ejiri, M.; Suzuki, H.

    2012-05-01

    The goal of our study is to clarify meteor activity in the southern hemisphere by continuous optical observation with video cameras, with automatic meteor detection and recording, at Syowa Station, Antarctica.

  19. GPM GROUND VALIDATION TWO-DIMENSIONAL VIDEO DISDROMETER (2DVD) NSSTC V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The Two-dimensional Video Disdrometer (2DVD) uses two high speed line scan cameras which provide continuous measurements of size distribution, shape and fall...

  20. A MAC protocol for underwater sensors networks

    OpenAIRE

    Santos, Rodrigo; Orozco, Javier; Ochoa, Sergio; Meseguer Pallarès, Roc; Eggly, Gabriel

    2015-01-01

    “The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-26401-1_37.” Underwater sensor networks are becoming an important field of research because of their ever-increasing application scope. Examples of their application areas are environmental and pollution monitoring (mainly oil spills), oceanographic data collection, support for submarine geo-localization, ocean sampling and early tsunami alerts. It is well known the challenge that represents to perfo...
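    A central difficulty for underwater MAC design, implicit in the abstract, is that acoustic links propagate at only ~1500 m/s, so propagation delay rather than transmission time dominates channel access. A toy TDMA slot-sizing sketch (the frame size, bitrate and range are illustrative; this is not the protocol proposed in the paper):

    ```python
    SOUND_SPEED_M_S = 1500.0  # approximate speed of sound in seawater

    def slot_length_s(payload_bits: int, bitrate_bps: float, max_range_m: float) -> float:
        tx_time = payload_bits / bitrate_bps
        guard = max_range_m / SOUND_SPEED_M_S   # worst-case propagation delay
        return tx_time + guard

    # A 512-bit frame at 1 kbit/s over a 3 km cell: 0.512 s of airtime needs
    # a 2 s guard interval, so propagation delay dominates the slot.
    print(round(slot_length_s(512, 1000.0, 3000.0), 3))
    ```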