WorldWideScience

Sample records for single-sensor stereo camera

  1. Prism-based single-camera system for stereo display

    Science.gov (United States)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, from the principles of geometrical optics we derive the relationship between the prism single-camera system and a dual-camera system, and from the principles of binocular vision we derive the relationship between binocular vision and a dual-camera system. We can thus relate the prism single-camera system to binocular vision and obtain the positional arrangement of prism, camera, and object that gives the best stereo display. Finally, using NVIDIA active shutter stereo glasses, we realize a three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various ways the eyes observe a scene. A stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.
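
    As a rough illustration of the geometry involved (our own thin-prism approximation, not the paper's derivation), the deviation angle of a thin biprism and the resulting virtual-camera baseline can be sketched as follows; the apex angle alpha, refractive index n, and camera-to-prism distance d are assumed example values:

        import math

        # Thin-prism approximation: each half of a biprism deviates rays by
        # delta ~ (n - 1) * alpha, so a single camera behind it sees two
        # virtual viewpoints separated by roughly 2 * d * tan(delta).
        def virtual_baseline(alpha_deg, n=1.5, d=0.1):
            delta = (n - 1.0) * math.radians(alpha_deg)  # deviation per half-prism
            return 2.0 * d * math.tan(delta)             # virtual baseline in meters

        print(virtual_baseline(10.0))  # ~0.0175 m for alpha=10 deg, n=1.5, d=0.1 m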

  2. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera.

    Science.gov (United States)

    Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa

    2016-08-08

    We developed multiocular 1/3-inch, 2.75-μm-pixel-size, 2.1M-pixel image sensors through co-design of an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field: the on-chip beam-splitter divides rays horizontally according to incident angle, and the inner meta-micro-lens collects the divided rays into pixels with small optical loss. Crosstalk between adjacent light field images is as low as 6% for a fabricated binocular image sensor and 7% for a quad-ocular image sensor. By selecting two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.

  3. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    Science.gov (United States)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and a wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal plates with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  4. Low-cost, portable, robust and high-resolution single-camera stereo-DIC system and its application in high-temperature deformation measurements

    Science.gov (United States)

    Chi, Yuxi; Yu, Liping; Pan, Bing

    2018-05-01

    A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.
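
    The key practical step of such a four-mirror pseudo-stereo setup is that both views share one sensor, so a single frame can be split into a left/right pair and fed to ordinary stereo processing. A minimal sketch with OpenCV follows; the file name is hypothetical, and block matching stands in for the paper's stereo-DIC correlation:

        import cv2

        frame = cv2.imread("slr_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
        h, w = frame.shape
        left, right = frame[:, :w // 2], frame[:, w // 2:]  # the two mirrored half-views

        # Semi-global matching as a stand-in for stereo-DIC correlation.
        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=9)
        disparity = matcher.compute(left, right).astype(float) / 16.0  # SGBM is fixed-point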

  5. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    Science.gov (United States)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the keys to enhancing motorcycle safety, motorcycle operations, and motorcycle travel estimation. Average motorcycle fatalities per vehicle mile traveled (VMT) are currently estimated at 30 times those of automobiles. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera, and a unidirectional microphone array. The thermal IR camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes, which often show up as bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist, who can easily be windowed out in the stereo disparity map; if the motorcyclist is detected through his or her 3D body recognition, the motorcycle is detected. Microphones are used to detect motorcycles, which often produce low-frequency acoustic signals. All three microphones in the microphone array are placed at strategic locations on the sensor platform to minimize interference from background noise sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has excellent performance.

  6. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    Science.gov (United States)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo-imaging apparatus, color images of a test object's surface, composed of blue- and red-channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red- and blue-channel sub-images using a simple but effective color crosstalk correction method. These separated blue- and red-channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation of the test object's surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrate the effectiveness and accuracy of the proposed technique.
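
    The color crosstalk correction can be pictured as a per-pixel linear unmixing; the sketch below is a generic formulation under assumed 2x2 mixing coefficients, not the authors' exact method:

        import numpy as np

        # Assumed mixing matrix: row i gives how much each true view leaks into
        # observed channel i (in practice it would come from calibration shots
        # of each optical path alone).
        M = np.array([[0.95, 0.08],
                      [0.06, 0.93]])
        M_inv = np.linalg.inv(M)

        def unmix(red_obs, blue_obs):
            obs = np.stack([red_obs, blue_obs], axis=-1)  # H x W x 2 observed channels
            true = obs @ M_inv.T                          # invert the crosstalk per pixel
            return true[..., 0], true[..., 1]             # separated view sub-images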

  7. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Gagliano, L.; Bryan, T.; MacLeod, T.

    On-orbit small debris tracking and characterization is a technical gap in current national Space Situational Awareness, which is necessary to safeguard orbital assets and crew; small debris poses a major risk of MOD damage to the ISS and exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be known in order to design the proper level of MOD impact shielding and set appropriate mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in the proximity of the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects versus ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and by using twin cameras we can provide stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  8. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized, stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory, covering the region from northeastern, northwestern, and southern views. Images from the two cameras of the same stereo setup can be paired to obtain a 3D reconstruction by triangulation, and the 3D reconstructions from the ring of three stereo pairs can be combined to generate a 3D mask from the surrounding views. This handbook provides all the stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.

  9. Stereo Pinhole Camera: Assembly and experimental activities

    Directory of Open Access Journals (Sweden)

    Gilmário Barbosa Santos

    2015-05-01

    This work describes the assembly of a stereo pinhole camera for capturing stereo pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as desired, or so simple that it can be handcrafted from practically recyclable materials. This paper describes the practical use of the pinhole camera throughout history and at present. Aspects of the optics and geometry involved in building the stereo pinhole camera are presented with illustrations. Furthermore, experiments are proposed using the images obtained by the camera for 3D visualization through a pair of anaglyph glasses, and the estimation of relative depth by triangulation is discussed.

  10. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

    This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision community. However, little or no stereo calibration has been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, a general stereo calibration algorithm cannot be used directly. In this paper, we develop a hybrid stereo system equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured by the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visible light and gamma sources. The experimental results show that the measurement error is about 3%.
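
    The homography step can be sketched with OpenCV as below; the point coordinates are made-up examples of the same planar calibration marks located in the vision image and, via the known rig geometry, in radiation-camera coordinates:

        import cv2
        import numpy as np

        pts_vision = np.array([[120, 80], [400, 95], [390, 310], [110, 300]], np.float32)
        pts_radiation = np.array([[30, 22], [98, 25], [95, 78], [28, 75]], np.float32)

        # Plane-induced homography mapping vision-camera pixels into the
        # radiation camera's view (at least 4 point correspondences needed).
        H, _ = cv2.findHomography(pts_vision, pts_radiation)
        mapped = cv2.perspectiveTransform(pts_vision.reshape(-1, 1, 2), H)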

  11. Indoor and Outdoor Depth Imaging of Leaves With Time-of-Flight and Stereo Vision Sensors

    DEFF Research Database (Denmark)

    Kazmi, Wajahat; Foix, Sergi; Alenya, Guilliem

    2014-01-01

    In this article we analyze the response of Time-of-Flight (ToF) cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. ToF cameras are sensitive to ambient light and have low resolution but deliver...... poorly under sunlight. Stereo vision is comparatively more robust to ambient illumination and provides high resolution depth data but is constrained by texture of the object along with computational efficiency. Graph cut based stereo correspondence algorithm can better retrieve the shape of the leaves...

  12. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.

    Science.gov (United States)

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-03-05

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons generated by the photodiode of a single pixel among different exposure taps, and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
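
    The classical core that the multi-tap sensor feeds is the standard least-squares formulation: with k >= 3 images under known lighting directions, the per-pixel normal (scaled by albedo) solves a small linear system. A minimal sketch, assuming a k x 3 light-direction matrix and a stack of grayscale images:

        import numpy as np

        def photometric_stereo(images, lights):
            # images: k x H x W array; lights: k x 3 array of unit light directions.
            k, h, w = images.shape
            I = images.reshape(k, -1)                        # k x (H*W) intensities
            g, *_ = np.linalg.lstsq(lights, I, rcond=None)   # solve L g = I, g = albedo*n
            albedo = np.linalg.norm(g, axis=0)
            normals = g / np.maximum(albedo, 1e-8)           # unit surface normals
            return normals.reshape(3, h, w), albedo.reshape(h, w)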

  13. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    Directory of Open Access Journals (Sweden)

    Heegwang Kim

    2017-12-01

    Recently, stereo imaging-based image enhancement approaches have attracted increasing attention in the field of video analysis. This paper presents a dual-camera stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for outdoor video analysis systems and high-end smartphones with dual camera systems.
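
    Underlying any such pipeline is the standard haze image-formation model. Below is a minimal sketch of the final recovery step, assuming the transmission is derived from stereo depth as t = exp(-beta*d); beta and the floor t_min are assumed example values, not taken from the paper:

        import numpy as np

        def defog(I, depth, A, beta=0.8, t_min=0.1):
            # Haze model: I = J * t + A * (1 - t); invert it for the radiance J.
            t = np.exp(-beta * depth)[..., None]   # transmission map from depth, H x W x 1
            t = np.maximum(t, t_min)               # floor avoids amplifying sensor noise
            return (I - A) / t + A                 # recovered fog-free image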

  14. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    Science.gov (United States)

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, stereo imaging-based image enhancement approaches have attracted increasing attention in the field of video analysis. This paper presents a dual-camera stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for outdoor video analysis systems and high-end smartphones with dual camera systems.

  15. Person and gesture tracking with smart stereo cameras

    Science.gov (United States)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting, and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap, and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform and describe the person tracking and gesture tracking systems

  16. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor

    Science.gov (United States)

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-01-01

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes. PMID:29510599

  17. Effect of camera temperature variations on stereo-digital image correlation measurements

    KAUST Repository

    Pan, Bing

    2015-11-25

    In laboratory and especially non-laboratory stereo-digital image correlation (stereo-DIC) applications, the extrinsic and intrinsic parameters of the cameras used in the system may change slightly due to the camera warm-up effect and possible variations in ambient temperature. Because these camera parameters are generally calibrated once prior to measurements and considered to be unaltered during the whole measurement period, the changes in these parameters unavoidably induce displacement/strain errors. In this study, the effect of temperature variations on stereo-DIC measurements is investigated experimentally. To quantify the errors associated with camera or ambient temperature changes, surface displacements and strains of a stationary optical quartz glass plate with near-zero thermal expansion were continuously measured using a regular stereo-DIC system. The results confirm that (1) temperature variations in the cameras and ambient environment have a considerable influence on the displacements and strains measured by stereo-DIC due to the slightly altered extrinsic and intrinsic camera parameters; and (2) the corresponding displacement and strain errors correlate with temperature changes. For the specific stereo-DIC configuration used in this work, the temperature-induced strain errors were estimated to be approximately 30–50 με/°C. To minimize the adverse effect of camera temperature variations on stereo-DIC measurements, two simple but effective solutions are suggested.

  18. Effect of camera temperature variations on stereo-digital image correlation measurements

    KAUST Repository

    Pan, Bing; Shi, Wentao; Lubineau, Gilles

    2015-01-01

    In laboratory and especially non-laboratory stereo-digital image correlation (stereo-DIC) applications, the extrinsic and intrinsic parameters of the cameras used in the system may change slightly due to the camera warm-up effect and possible variations in ambient temperature. Because these camera parameters are generally calibrated once prior to measurements and considered to be unaltered during the whole measurement period, the changes in these parameters unavoidably induce displacement/strain errors. In this study, the effect of temperature variations on stereo-DIC measurements is investigated experimentally. To quantify the errors associated with camera or ambient temperature changes, surface displacements and strains of a stationary optical quartz glass plate with near-zero thermal expansion were continuously measured using a regular stereo-DIC system. The results confirm that (1) temperature variations in the cameras and ambient environment have a considerable influence on the displacements and strains measured by stereo-DIC due to the slightly altered extrinsic and intrinsic camera parameters; and (2) the corresponding displacement and strain errors correlate with temperature changes. For the specific stereo-DIC configuration used in this work, the temperature-induced strain errors were estimated to be approximately 30–50 με/°C. To minimize the adverse effect of camera temperature variations on stereo-DIC measurements, two simple but effective solutions are suggested.

  19. Three-Dimensional Stereo Reconstruction and Sensor Registration With Application to the Development of a Multi-Sensor Database

    National Research Council Canada - National Science Library

    Oberle, William

    2002-01-01

    ... and the transformations between the camera system and other sensor, vehicle, and world coordinate systems. Results indicate that the measured stereo and ladar data are susceptible to large errors that affect the accuracy of the calculated transformations.

  20. A UAV-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (Turkey)

    Science.gov (United States)

    Haubeck, K.; Prinz, T.

    2013-08-01

    The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency, and flexibility. One possible use is the documentation and visualization of historic geo-structures and geo-objects using UAV-mounted digital small-frame cameras. These monoscopic cameras offer the possibility of obtaining close-range aerial photographs, but, when an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions, two single aerial images do not always have the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from slightly different angles at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g., the creation of digital terrain models (DTMs) and orthophotos, or the 3D extraction of single geo-objects. Because of the limited geometric photo base of the applied stereo camera and the resulting base-height ratio, the accuracy of the DTM directly depends on the UAV flight altitude.
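
    The dependence on the base-height ratio follows from the standard stereo depth-error relation (our illustration, not taken from the paper): the error grows quadratically with altitude and inversely with the baseline. The numbers below are assumed example values:

        # sigma_Z ~ Z**2 / (B * f) * sigma_d for baseline B (m), focal length
        # f (pixels), altitude Z (m) and disparity uncertainty sigma_d (pixels).
        def depth_sigma(Z, B, f_px, sigma_d=0.5):
            return (Z ** 2) / (B * f_px) * sigma_d

        print(depth_sigma(Z=30.0, B=0.2, f_px=2000.0))  # ~1.1 m at 30 m flight altitude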

  1. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo

    Science.gov (United States)

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-01-01

    Multi-spectral photometric stereo can recover pixel-wise surface normals from a single RGB image. The difficulty lies in the fact that the intensity in each channel is an entanglement of illumination, albedo, and camera response; thus, an initial estimate of the normals is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using a deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data are expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normals of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods. PMID:29498703

  2. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo

    Directory of Open Access Journals (Sweden)

    Liang Lu

    2018-03-01

    Multi-spectral photometric stereo can recover pixel-wise surface normals from a single RGB image. The difficulty lies in the fact that the intensity in each channel is an entanglement of illumination, albedo, and camera response; thus, an initial estimate of the normals is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using a deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data are expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normals of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.

  3. First bulk and surface results for the ATLAS ITk stereo annulus sensors

    CERN Document Server

    Abidi, Syed Haider; The ATLAS collaboration; Bohm, Jan; Botte, James Michael; Ciungu, Bianca; Dette, Karola; Dolezal, Zdenek; Escobar, Carlos; Fadeyev, Vitaliy; Fernandez-Tejero, Xavi; Garcia-Argos, Carlos; Gillberg, Dag; Hara, Kazuhiko; Hunter, Robert Francis Holub

    2018-01-01

    A novel microstrip sensor geometry, the “stereo annulus”, has been developed for use in the end-cap of the ATLAS experiment's strip tracker upgrade at the High-Luminosity Large Hadron Collider (HL-LHC). The radiation-hard, single-sided, AC-coupled, n+-in-p microstrip sensors are designed by the ITk Strip Sensor Collaboration and produced by Hamamatsu Photonics. The stereo annulus design has the potential to revolutionize the layout of end-cap microstrip trackers, promising better tracking performance and more complete coverage than contemporary configurations. These advantages are achieved by the union of equal-length, radially oriented strips with a small stereo angle implemented directly into the sensor surface. The first-ever results for the stereo annulus geometry have been collected across several sites worldwide and are presented here. A number of full-size, unirradiated sensors were evaluated for their mechanical, bulk, and surface properties. The new device, the ATLAS12EC, is compared ag...

  4. Design and Implementation of a Novel Portable 360° Stereo Camera System with Low-Cost Action Cameras

    Science.gov (United States)

    Holdener, D.; Nebiker, S.; Blaser, S.

    2017-11-01

    The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow rapid data acquisition and can be processed into georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multiview stereo camera system presented in this paper, 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D-printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data delivers maximum deviations of 3 cm over typical indoor distances of 2-8 m. The automatic computation of coloured point clouds from the stereo pairs is also demonstrated.

  5. Augmented reality glass-free three-dimensional display with the stereo camera

    Science.gov (United States)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display based on a stereo camera, which presents parallax contents from different angles with a lenticular lens array, is proposed. Compared with previous implementations of AR techniques based on two-dimensional (2D) panel displays with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and real scenes with 32 virtual viewpoints. Accordingly, viewers can get abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on a stereo camera can realize AR glass-free 3D display, and both the virtual objects and the real scene show realistic and obvious stereo performance.

  6. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most or all of them are time-consuming and labor-intensive. This research seeks to automate the most labor-intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses LEDs embedded in the checkerboard pattern to act as active fiducials. Images are captured of the checkerboard with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the locations of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation of camera calibration procedures may finally penetrate the barriers to the use of calibration in practice.

  7. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, yielding higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
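
    A minimal version of such an OpenCV calibration loop is sketched below; the 48-corner board is assumed to be an 8x6 inner-corner grid, and the image file names are hypothetical:

        import cv2
        import numpy as np

        pattern = (8, 6)                                       # 48 inner corners, assumed 8x6
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_pts, img_pts = [], []
        for fname in ["view0.png", "view1.png", "view2.png"]:  # hypothetical views
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            ok, corners = cv2.findChessboardCorners(gray, pattern)
            if ok:
                obj_pts.append(objp)
                img_pts.append(corners)

        # Returns RMS reprojection error, intrinsics K, and distortion coefficients
        # (k1, k2, p1, p2, k3), i.e. the radial and decentering terms discussed above.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, gray.shape[::-1], None, None)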

  8. Stereo matching based on SIFT descriptor with illumination and camera invariance

    Science.gov (United States)

    Niu, Haitao; Zhao, Xunjie; Li, Chengjin; Peng, Xiang

    2010-10-01

    Stereo matching is the process of finding corresponding points in two or more images. The description of interest points is a critical aspect of point correspondence, which is vital in stereo matching. The SIFT descriptor has been proven to be more distinctive and robust than other local descriptors. However, the SIFT descriptor does not involve the color information of a feature point, which provides a powerfully distinguishable feature in matching tasks. Furthermore, in a real scene, image colors are affected by various geometric and radiometric factors, such as gamma correction and exposure. These situations are very common in stereo images. For this reason, the color recorded by a camera is not a reliable cue, and the color consistency assumption is no longer valid between stereo images in real scenes. Hence the performance of other SIFT-based stereo matching algorithms can be severely degraded under radiometric variations. In this paper, we present a new, improved SIFT stereo matching algorithm that is invariant to various radiometric variations between left and right images. Unlike other improved SIFT stereo matching algorithms, we explicitly employ a color formation model with parameters for lighting geometry, illuminant color, and camera gamma in the SIFT descriptor. First, we transform the input color images to a log-chromaticity color space, in which a linear relationship can be established. Then, we use a log-polar histogram to build three color invariance components for the SIFT descriptor, so that our improved SIFT descriptor is invariant to lighting geometry, illuminant color, and camera gamma changes between the left and right images. We can then match feature points between the two images and use the SIFT descriptor Euclidean distance as a geometric measure in our data sets to make matching more accurate and robust. Experimental results show that our method is superior to other SIFT-based algorithms, including conventional stereo matching algorithms, under various...
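
    The log-chromaticity transform mentioned above can be sketched as follows (our reading of the standard formulation, not the authors' code): channel ratios cancel the lighting-geometry factor, and taking logs turns gamma and illuminant scalings into additive offsets:

        import numpy as np

        def log_chromaticity(img, eps=1e-6):
            # img: H x W x 3 float RGB; returns the 2-channel log-ratio representation.
            r = np.log(img[..., 0] + eps) - np.log(img[..., 1] + eps)  # log(R/G)
            b = np.log(img[..., 2] + eps) - np.log(img[..., 1] + eps)  # log(B/G)
            return np.stack([r, b], axis=-1)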

  9. Low-cost small action cameras in stereo generates accurate underwater measurements of fish

    OpenAIRE

    Letessier, T. B.; Juhel, Jean-Baptiste; Vigliola, Laurent; Meeuwig, J. J.

    2015-01-01

    Small action cameras have received interest for use in underwater videography because of their low cost, standardised housing, widespread availability and small size. Here, we assess the capacity of GoPro action cameras to provide accurate stereo-measurements of fish in comparison to the Sony handheld cameras that have traditionally been used for this purpose. Standardised stereo-GoPro and Sony systems were employed to capture measurements of known-length targets in a pool to explore the infl...

  10. APPLYING CCD CAMERAS IN STEREO PANORAMA SYSTEMS FOR 3D ENVIRONMENT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    A. Sh. Amini

    2012-07-01

    Proper reconstruction of 3D environments is nowadays needed by many organizations and applications. In addition to conventional methods, the use of stereo panoramas is an appropriate technique due to its simplicity, low cost, and the ability to view an environment the way it is in reality. This paper investigates the ability of stereo CCD cameras to support 3D reconstruction and presentation of the environment and geometric measurement within it. For this purpose, a rotating stereo panorama was established using two CCDs with a base length of 350 mm and a DVR (digital video recorder) box. The stereo system was first calibrated using a 3D test field and then used to perform accurate measurements. The results of investigating the system in a real environment showed that although this kind of camera produces noisy images and does not have appropriate geometric stability, the cameras can be easily synchronized and well controlled, and reasonable accuracy (about 40 mm for objects at 12 meters distance from the camera) can be achieved.

  11. Single chip camera active pixel sensor

    Science.gov (United States)

    Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)

    2003-01-01

    A totally digital single chip camera includes communications to operate most of its structure in serial communication mode. The digital single chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.

  12. Infrared stereo calibration for unmanned ground vehicle navigation

    Science.gov (United States)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as the Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new, and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.

  13. A Self-Assessment Stereo Capture Model Applicable to the Internet of Things

    Science.gov (United States)

    Lin, Yancong; Yang, Jiachen; Lv, Zhihan; Wei, Wei; Song, Houbing

    2015-01-01

    The realization of the Internet of Things greatly depends on the information communication among physical terminal devices and informationalized platforms, such as smart sensors, embedded systems and intelligent networks. Playing an important role in information acquisition, sensors for stereo capture have gained extensive attention in various fields. In this paper, we concentrate on promoting such sensors in an intelligent system with self-assessment capability to deal with the distortion and impairment in long-distance shooting applications. The core design is the establishment of objective evaluation criteria that can reliably predict shooting quality with different camera configurations. Two types of stereo capture systems, the toed-in camera configuration and the parallel camera configuration, are taken into consideration. The experimental results show that the proposed evaluation criteria can effectively predict the visual perception of stereo capture quality for long-distance shooting. PMID:26308004

  14. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to the current demand for high-quality digital images. For example, digital still cameras have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing video with enhanced resolution and frame rate. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors with different spatio-temporal resolutions in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  15. Towards the Influence of a Car Windshield on Depth Calculation with a Stereo Camera System

    Science.gov (United States)

    Hanel, A.; Hoegner, L.; Stilla, U.

    2016-06-01

    Stereo camera systems in cars are often used to estimate the distance of other road users from the car. This information is important for improving road safety. Such camera systems are typically mounted behind the windshield of the car. In this contribution, the influence of the windshield on the estimated distance values is analyzed. An offline stereo camera calibration is performed with a moving planar calibration target, and the relative orientation of the cameras is estimated in a standard bundle adjustment procedure. The calibration is performed for the identical stereo camera system with and without a windshield in between. The base lengths are derived from the relative orientation in both cases and compared, and distance values are calculated and analyzed. It can be shown that the difference between the base length values in the two cases is highly significant. Resulting effects on the calculated distances of up to half a meter occur.
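
    A back-of-the-envelope check (our numbers, not the paper's) shows how a baseline error propagates: with Z = f*B/d, a 1% error in the calibrated base length shifts a 24 m distance estimate by about 0.24 m, the same order as the reported effect:

        f_px, d_px = 1200.0, 15.0      # assumed focal length and disparity in pixels
        B_true, B_cal = 0.300, 0.297   # assumed 1% baseline error from the windshield
        Z_true = f_px * B_true / d_px  # 24.00 m
        Z_cal = f_px * B_cal / d_px    # 23.76 m
        print(Z_true - Z_cal)          # 0.24 m distance error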

  16. First bulk and surface results for the ATLAS ITk Strip stereo annulus sensors

    CERN Document Server

    Hunter, Robert Francis Holub; The ATLAS collaboration; Affolder, Tony; Bohm, Jan; Botte, James Michael; Ciungu, Bianca; Dette, Karola; Dolezal, Zdenek; Escobar, Carlos; Fadeyev, Vitaliy

    2018-01-01

    A novel microstrip sensor geometry, the stereo annulus, has been developed for use in the end-cap of the ATLAS experiment's strip tracker upgrade at the HL-LHC. Its first implementation is in the ATLAS12EC sensors, a large-area, radiation-hard, single-sided, AC-coupled, ...

  17. Motorcycles that See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles

    Directory of Open Access Journals (Sweden)

    Gustavo Gil

    2018-01-01

    Advanced driver assistance systems (ADAS) have shown the potential to anticipate crashes and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set up two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensor for advanced motorcycle safety applications.

  18. Motorcycles that See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles

    Science.gov (United States)

    2018-01-01

    Advanced driver assistance systems (ADAS) have shown the potential to anticipate crashes and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set up two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensor for advanced motorcycle safety applications. PMID:29351267

  19. Motorcycles that See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles.

    Science.gov (United States)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-01-19

    Advanced driver assistance systems (ADAS) have shown the potential to anticipate crashes and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set up two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensor for advanced motorcycle safety applications.

  20. People counting with stereo cameras : two template-based solutions

    NARCIS (Netherlands)

    Englebienne, Gwenn; van Oosterhout, Tim; Kröse, B.J.A.

    2012-01-01

    People counting is a challenging task with many applications. We propose a method with a fixed stereo camera that is based on projecting a template onto the depth image. The method was tested on a challenging outdoor dataset with good results and runs in real time.

  1. Full-parallax 3D display from stereo-hybrid 3D camera system

    Science.gov (United States)

    Hong, Seokmin; Ansari, Amir; Saavedra, Genaro; Martinez-Corral, Manuel

    2018-04-01

    In this paper, we propose an innovative approach for producing the microimages ready to display on an integral-imaging monitor. Our main contribution is using a stereo-hybrid 3D camera system to pick up a 3D data pair and compose a denser point cloud. An intrinsic difficulty is that hybrid sensors have dissimilarities and therefore should be equalized. The processed data facilitate the generation of an integral image after computationally projecting the information through a virtual pinhole array. We illustrate this procedure with imaging experiments that provide microimages with enhanced quality. After projection of such microimages onto the integral-imaging monitor, 3D images are produced with large parallax and viewing angle.

  2. Polarizing aperture stereoscopic cinema camera

    Science.gov (United States)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back by the lack of a convenient way to reduce the stereo camera lenses' interaxial separation to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows the interaxial separation to be varied down to small values, using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.

  3. Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras

    DEFF Research Database (Denmark)

    Kristoffersen, Miklas Strøm; Dueholm, Jacob Velling; Gade, Rikke

    2016-01-01

    The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm...

  4. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on Visual Studio 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  5. Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs)

    Directory of Open Access Journals (Sweden)

    Carlos Jaramillo

    2016-02-01

    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from the triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision requirements under different circumstances.

  6. Stereo pair design for cameras with a fovea

    Science.gov (United States)

    Chettri, Samir R.; Keefe, Michael; Zimmerman, John R.

    1992-01-01

    We describe the methodology for the design and selection of a stereo pair when the cameras have a greater concentration of sensing elements in the center of the image plane (a fovea). Binocular vision is important for depth estimation, which in turn is important in a variety of applications such as gaging and autonomous vehicle guidance. We assume that one camera has square pixels of size dv and the other has pixels of size r·dv, where r is between 0 and 1. We then derive results for the average error, the maximum error, and the error distribution in the depth determination of a point. These results can be shown to be a general form of the results for the case when the cameras have equal-sized pixels. We discuss the behavior of the depth estimation error as we vary r and the tradeoffs between extra processing time and increased accuracy. Knowing these results makes it possible to study the case when we have a pair of cameras with a fovea.

  7. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    Science.gov (United States)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging. It was the first large-format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B and NIR. For the first time, a 391-megapixel CMOS sensor has been used as the panchromatic sensor, which is an industry record. A range of technical benefits comes with CMOS technology: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.

  8. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras

    OpenAIRE

    Mur-Artal, Raul; Tardos, Juan D.

    2016-01-01

    We present ORB-SLAM2, a complete SLAM system for monocular, stereo and RGB-D cameras, including map reuse, loop closing and relocalization capabilities. The system works in real-time on standard CPUs in a wide variety of environments, from small hand-held indoor sequences, to drones flying in industrial environments and cars driving around a city. Our back-end based on bundle adjustment with monocular and stereo observations allows for accurate trajectory estimation with metric scale. Our syst...

  9. Analysis of Camera Arrays Applicable to the Internet of Things.

    Science.gov (United States)

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-03-22

    The Internet of Things is built based on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and helps with viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used in various applications and analyzed in research work, there are few comparative discussions of them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used as either a parallel or a converged camera array, and take images and videos with it to verify the threshold.
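
    The comparison can be reproduced qualitatively with a small numeric sketch (our own construction, with illustrative parameters): project a 3D point into a parallel rig and into a converged (toed-in) rig, then compare the resulting horizontal and vertical parallax between the left and right images.

      # Parallel vs. converged two-camera parallax (illustrative sketch).
      import numpy as np

      def project(P, R, t, f):
          """Pinhole projection of world point P for a camera at center t with rotation R."""
          Pc = R @ (P - t)
          return f * Pc[:2] / Pc[2]

      def parallax(P, B=0.06, f=0.005, C=None):
          tL, tR = np.array([-B / 2, 0, 0]), np.array([B / 2, 0, 0])
          if C is None:                       # parallel rig
              RL = RR = np.eye(3)
          else:                               # converged rig, axes meet at (0, 0, C)
              a = np.arctan2(B / 2, C)
              cs, sn = np.cos(a), np.sin(a)
              RL = np.array([[cs, 0, -sn], [0, 1, 0], [sn, 0, cs]])
              RR = RL.T
          pL, pR = project(P, RL, tL, f), project(P, RR, tR, f)
          return pL - pR                      # (horizontal, vertical) parallax

      P = np.array([0.3, 0.2, 5.0])
      print(parallax(P), parallax(P, C=7.0))  # toed-in rigs add vertical parallax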

  10. Camera Calibration of Stereo Photogrammetric System with One-Dimensional Optical Reference Bar

    International Nuclear Information System (INIS)

    Xu, Q Y; Ye, D; Che, R S; Qi, X; Huang, Y

    2006-01-01

    To carry out precise measurement of large-scale complex workpieces, accurate calibration of the stereo photogrammetric system is becoming more and more important. This paper proposes a flexible and reliable camera calibration of the stereo photogrammetric system based on quaternions, using a one-dimensional optical reference bar that carries three small collinear infrared LED marks whose separations have been precisely calibrated. By moving the optical reference bar to a number of locations/orientations over the measurement volume, we calibrate the stereo photogrammetric system using the geometric constraint of the optical reference bar. The extrinsic parameter calibration consists of linear parameter estimation based on quaternions and nonlinear refinement based on the maximum likelihood criterion. First, we linearly estimate the extrinsic parameters of the stereo photogrammetric system based on quaternions. Then, with the quaternion results as initial values, we refine the extrinsic parameters under the maximum likelihood criterion with the Levenberg-Marquardt algorithm. During calibration, we automatically control the light intensity and optimize the exposure time to get a uniform intensity profile of the image points at different distances and obtain a higher S/N ratio. The experimental results prove that the proposed calibration method is flexible, valid and obtains good results in application.
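
    A simplified sketch of the two-stage estimation is shown below; the helper variables (q0, t0, bar_points and the image observations) are hypothetical stand-ins for the outputs of the linear quaternion stage, and SciPy's Levenberg-Marquardt solver replaces the authors' implementation.

      # Nonlinear refinement of stereo extrinsics by reprojection error (sketch).
      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      def residuals(x, uv_left, uv_right, K, bar_points):
          # x = [qx, qy, qz, qw, tx, ty, tz]: right-camera pose w.r.t. the left one
          R = Rotation.from_quat(x[:4] / np.linalg.norm(x[:4])).as_matrix()
          t = x[4:]
          res = []
          for P, ul, ur in zip(bar_points, uv_left, uv_right):
              pl = K @ P                      # left camera frame is the world frame
              pr = K @ (R @ P + t)
              res.extend(pl[:2] / pl[2] - ul)
              res.extend(pr[:2] / pr[2] - ur)
          return np.asarray(res)

      # q0, t0 would come from the linear quaternion stage; bar_points are the
      # triangulated LED marks, constrained by the calibrated inter-mark lengths.
      # sol = least_squares(residuals, np.r_[q0, t0], method="lm",
      #                     args=(uv_left, uv_right, K, bar_points))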

  11. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    Science.gov (United States)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide a reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the augmented reality marker is obtained through a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for the implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.

  12. Multi-camera synchronization core implemented on USB3 based FPGA platform

    Science.gov (United States)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and a FPGA platform with USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency being controlled directly through a PC based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter in medical endoscopic contexts, such as endoscopic surgical robots or minimally invasive surgery.
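
    The regulation idea can be sketched behaviorally as follows (illustrative constants, not the FPGA implementation): a proportional controller nudges a Slave camera's supply voltage until its measured line period converges on the Master's.

      # One iteration of the supply-voltage control loop (behavioral sketch).
      def sync_step(v_now, t_line_measured, t_line_target, kp=0.002,
                    v_min=1.6, v_max=2.0):
          """Return the next supply voltage in volts."""
          error = t_line_measured - t_line_target       # >0 means camera runs slow
          v_next = v_now + kp * error / t_line_target   # raise V to speed the sensor up
          return min(max(v_next, v_min), v_max)         # stay within a safe range

      # Iterating sync_step for every Slave locks the line frequency; phase lock is
      # then maintained by the Master-Slave frame-period comparison described above.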

  13. A Quality Evaluation of Single and Multiple Camera Calibration Approaches for an Indoor Multi Camera Tracking System

    Directory of Open Access Journals (Sweden)

    M. Adduci

    2014-06-01

    Full Text Available Human detection and tracking has been a prominent research area for several scientists around the globe. State of the art algorithms have been implemented, refined and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking is still a largely unexplored research field. In both 2D/3D cases, introducing a multi camera system could vastly improve the accuracy and confidence of the tracking process. Within this work, a quality evaluation is performed on a multi RGB-D camera indoor tracking system, examining how camera calibration and pose can affect the quality of human tracks in the scene, independently from the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state of the art single camera pose estimators were evaluated to check how well poses are estimated using planar objects such as an ordinary chessboard. With this information, a bundle block adjustment and ICP were performed to verify the accuracy of the single pose estimators in a multi camera configuration system. Results have shown that single camera estimators provide high accuracy results of less than half a pixel, forcing the bundle to converge after very few iterations. In relation to ICP, relative information between cloud pairs is more or less preserved, giving a low fitting score between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the generated point clouds, and therefore in the accuracy of the produced 3D trajectories from each sensor.

  14. Markerless Augmented Reality via Stereo Video See-Through Head-Mounted Display Device

    Directory of Open Access Journals (Sweden)

    Chung-Hung Hsieh

    2015-01-01

    Full Text Available Conventionally, camera localization for augmented reality (AR) relies on detecting a known pattern within the captured images. In this study, a markerless AR scheme has been designed based on a Stereo Video See-Through Head-Mounted Display (HMD) device. The proposed markerless AR scheme can be utilized for medical applications such as training, telementoring, or preoperative explanation. Firstly, a virtual model for AR visualization is aligned to the target in physical space by an improved Iterative Closest Point (ICP) based surface registration algorithm, with the target surface structure reconstructed by a stereo camera pair; then, a markerless AR camera localization method is designed based on the Kanade-Lucas-Tomasi (KLT) feature tracking algorithm and the Random Sample Consensus (RANSAC) correction algorithm. Our AR camera localization method is shown to perform better than traditional marker-based and sensor-based AR approaches. The demonstration system was evaluated with a plastic dummy head and the display result is satisfactory for multiple-view observation.
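
    A minimal OpenCV sketch of the two named ingredients, KLT tracking and RANSAC pose correction, is given below; the variable names and the assumption that the tracked 2D features correspond to registered 3D model points are ours.

      # KLT feature tracking + RANSAC PnP pose estimation (illustrative sketch).
      import cv2

      def track_and_localize(prev_gray, gray, prev_pts, model_pts_3d, K):
          # KLT: track 2D feature locations from the previous frame into this one
          pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
          ok = status.ravel() == 1
          pts3d, pts2d = model_pts_3d[ok], pts[ok]
          # RANSAC PnP: camera pose from the surviving 3D-2D correspondences
          found, rvec, tvec, inliers = cv2.solvePnPRansac(
              pts3d, pts2d, K, distCoeffs=None, reprojectionError=3.0)
          return (rvec, tvec, pts[ok]) if found else None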

  15. Operational modal analysis on a VAWT in a large wind tunnel using stereo vision technique

    International Nuclear Information System (INIS)

    Najafi, Nadia; Paulsen, Uwe Schmidt

    2017-01-01

    This paper is about the development and use of a research-based stereo vision system for vibration and operational modal analysis on a parked, 1-kW, 3-bladed vertical axis wind turbine (VAWT), tested in a wind tunnel at high wind. Vibrations were explored experimentally by tracking small deflections of markers on the structure with two cameras, and also numerically, to study structural vibrations with the overall objective of investigating challenges and proving the capability of using stereo vision. Two high speed cameras provided displacement measurements with no wind speed interference. The displacement time series were obtained using a robust image processing algorithm and analyzed with the data-driven stochastic subspace identification (DD-SSI) method. In addition to exploring structural behaviour, the VAWT testing gave us the possibility to study aerodynamic effects at a Reynolds number of approximately 2 × 10^5. VAWT dynamics were simulated using HAWC2. The stereo vision results and HAWC2 simulations agree within 4% except for modes 3 and 4. The high aerodynamic damping of one of the blades, in flatwise motion, would explain the gap between those two modes from simulation and stereo vision. A set of conventional sensors, such as accelerometers and strain gauges, also measured rotor vibration during the experiment. The spectral analysis of the output signals of the conventional sensors agrees with the stereo vision results within 4%, except for mode 4, which is due to the inaccuracy of spectral analysis in resolving very closely spaced modes. Finally, the uncertainty of the 3D displacement measurement was evaluated by applying a generalized method based on the law of error propagation, for a linear camera model of the stereo vision system. - Highlights: • The stereo vision technique is used to track deflections on a VAWT in the wind tunnel. • OMA is applied on displacement time series to study the dynamic behaviour of the VAWT. • Stereo vision results enabled us to

  16. Quantifying geological processes on Mars - Results of the high resolution stereo camera (HRSC) on Mars express

    NARCIS (Netherlands)

    Jaumann, R.; Tirsch, D.; Hauber, E.; Ansan, V.; Di Achille, G.; Erkeling, G.; Fueten, F.; Head, J.; Kleinhans, M. G.; Mangold, N.; Michael, G. G.; Neukum, G.; Pacifici, A.; Platz, T.; Pondrelli, M.; Raack, J.; Reiss, D.; Williams, D. A.; Adeli, S.; Baratoux, D.; De Villiers, G.; Foing, B.; Gupta, S.; Gwinner, K.; Hiesinger, H.; Hoffmann, H.; Deit, L. Le; Marinangeli, L.; Matz, K. D.; Mertens, V.; Muller, J. P.; Pasckert, J. H.; Roatsch, T.; Rossi, A. P.; Scholten, F.; Sowe, M.; Voigt, J.; Warner, N.

    2015-01-01

    This review summarizes the use of High Resolution Stereo Camera (HRSC) data as an instrumental tool and its application in the analysis of geological processes and landforms on Mars during the last 10 years of operation. High-resolution digital elevations models on a local to regional scale

  17. Construction of a frameless camera-based stereotactic neuronavigator.

    Science.gov (United States)

    Cornejo, A; Algorri, M E

    2004-01-01

    We built an infrared vision system to be used as the real time 3D motion sensor in a prototype low cost, high precision, frameless neuronavigator. The objective of the prototype is to develop accessible technology for increased availability of neuronavigation systems in research labs and small clinics and hospitals. We present our choice of technology including camera and IR emitter characteristics. We describe the methodology for setting up the 3D motion sensor, from the arrangement of the cameras and the IR emitters on surgical instruments, to triangulation equations from stereo camera pairs, high bandwidth computer communication with the cameras and real time image processing algorithms. We briefly cover the issues of camera calibration and characterization. Although our performance results do not yet fully meet the high precision, real time requirements of neuronavigation systems we describe the current improvements being made to the 3D motion sensor that will make it suitable for surgical applications.

  18. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors.

    Science.gov (United States)

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-12-22

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must deal with aggressive 6 DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) (even GPS-denied) situations; (3) it should work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The new estimation system has two main parts: a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry), which is derived and discussed in detail, and a long-range stereo visual odometry proposed for high-altitude MAV odometry calculation using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimation. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight.
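
    For reference, the loosely-coupled correction at the heart of such a filter follows the textbook EKF measurement update; this generic sketch is not the paper's stochastic-cloning derivation.

      # Generic EKF measurement update for a loosely-coupled fusion filter (sketch).
      import numpy as np

      def ekf_update(x, P, z, H, R):
          """One EKF update for a linearized measurement z = H @ x + noise (cov R)."""
          y = z - H @ x                              # innovation
          S = H @ P @ H.T + R                        # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
          x = x + K @ y
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P

      # e.g. a GPS position fix observing the first three components of the state:
      # H = np.hstack([np.eye(3), np.zeros((3, state_dim - 3))])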

  19. Multiview photometric stereo.

    Science.gov (United States)

    Hernández Esteban, Carlos; Vogiatzis, George; Cipolla, Roberto

    2008-03-01

    This paper addresses the problem of obtaining complete, detailed reconstructions of textureless shiny objects. We present an algorithm which uses silhouettes of the object, as well as images obtained under changing illumination conditions. In contrast with previous photometric stereo techniques, ours is not limited to a single viewpoint but produces accurate reconstructions in full 3D. A number of images of the object are obtained from multiple viewpoints, under varying lighting conditions. Starting from the silhouettes, the algorithm recovers camera motion and constructs the object's visual hull. This is then used to recover the illumination and initialise a multi-view photometric stereo scheme to obtain a closed surface reconstruction. There are two main contributions in this paper: Firstly we describe a robust technique to estimate light directions and intensities and secondly, we introduce a novel formulation of photometric stereo which combines multiple viewpoints and hence allows closed surface reconstructions. The algorithm has been implemented as a practical model acquisition system. Here, a quantitative evaluation of the algorithm on synthetic data is presented together with complete reconstructions of challenging real objects. Finally, we show experimentally how even in the case of highly textured objects, this technique can greatly improve on correspondence-based multi-view stereo results.
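
    For context, the classical Lambertian computation that multi-view photometric stereo schemes build on can be written in a few lines (a minimal version of our own, assuming known distant light directions L): since I = L @ (albedo * n), albedo and normals follow from a per-pixel least-squares solve.

      # Classical Lambertian photometric stereo (illustrative sketch).
      import numpy as np

      def photometric_stereo(images, L):
          """images: (k, h, w) grayscale stack; L: (k, 3) light directions."""
          k, h, w = images.shape
          I = images.reshape(k, -1)                   # (k, h*w) intensity matrix
          G, *_ = np.linalg.lstsq(L, I, rcond=None)   # (3, h*w), G = albedo * n
          albedo = np.linalg.norm(G, axis=0)
          normals = G / np.maximum(albedo, 1e-8)      # unit normal per pixel
          return albedo.reshape(h, w), normals.T.reshape(h, w, 3)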

  20. CRED Fish Observations from Stereo Video Cameras on a SeaBED AUV collected around Tutuila, American Samoa in 2012

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Black and white imagery were collected using a stereo pair of underwater video cameras mounted on a SeaBED autonomous underwater vehicle (AUV) and deployed around...

  1. Multiple Moving Obstacles Avoidance of Service Robot using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Achmad Jazidie

    2011-12-01

    Full Text Available In this paper, we propose a multiple moving obstacle avoidance method using stereo vision for service robots in indoor environments. We assume that this model of service robot is used to deliver a cup to a recognized customer from the starting point to the destination. The contribution of this research is a new method for multiple moving obstacle avoidance with a Bayesian approach using a stereo camera. We have developed and introduced 3 main modules: to recognize faces, to identify multiple moving obstacles and to maneuver the robot. A group of people who are walking is tracked as a multiple moving obstacle, and the speed, direction, and distance of the moving obstacles are estimated by a stereo camera so that the robot can maneuver to avoid collision. To overcome the inaccuracies of the vision sensor, a Bayesian approach is used to estimate the absence and direction of obstacles. We present the results of experiments with the service robot called Srikandi III, which uses our proposed method, and we also evaluate its performance. Experiments showed that our proposed method works well, and the Bayesian approach proved to increase the estimation performance for the absence and direction of moving obstacles.

  2. Integrated multi sensors and camera video sequence application for performance monitoring in archery

    Science.gov (United States)

    Taha, Zahari; Arif Mat-Jizat, Jessnor; Amirul Abdullah, Muhammad; Muazu Musa, Rabiu; Razali Abdullah, Mohamad; Fauzi Ibrahim, Mohamad; Hanafiah Shaharudin, Mohd Ali

    2018-03-01

    This paper explains the development of a comprehensive archery performance monitoring software which consists of three camera views and five body sensors. The five body sensors evaluate biomechanics-related variables of flexor and extensor muscle activity, heart rate, postural sway and bow movement during archery performance. The three camera views and the five body sensors are integrated into a single computer application which enables the user to view all the data in a single user interface. The five body sensors' data are displayed in numerical and graphical form in real time. The information transmitted by the body sensors is processed with an embedded algorithm that automatically computes a summary of the athlete's biomechanical performance and displays it in the application interface. This performance is later compared to the psycho-fitness performance pre-computed from data pre-filled into the application. All the data (camera views, body sensors, performance computations) are recorded for further analysis by a sports scientist. Our developed application serves as a powerful tool for assisting the coach and athletes to observe and identify any wrong technique employed during training, which gives room for correction and re-evaluation to improve overall performance in the sport of archery.

  3. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-01-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 µm) or long-wave infrared (LWIR) radiation (8-12 µm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

  4. Note: A disposable x-ray camera based on mass produced complementary metal-oxide-semiconductor sensors and single-board computers

    Energy Technology Data Exchange (ETDEWEB)

    Hoidn, Oliver R.; Seidler, Gerald T., E-mail: seidler@uw.edu [Physics Department, University of Washington, Seattle, Washington 98195 (United States)

    2015-08-15

    We have integrated mass-produced commercial complementary metal-oxide-semiconductor (CMOS) image sensors and off-the-shelf single-board computers into an x-ray camera platform optimized for acquisition of x-ray spectra and radiographs at energies of 2–6 keV. The CMOS sensor and single-board computer are complemented by custom mounting and interface hardware that can be easily acquired from rapid prototyping services. For single-pixel detection events, i.e., events where the deposited energy from one photon is substantially localized in a single pixel, we establish ∼20% quantum efficiency at 2.6 keV with ∼190 eV resolution and a 100 kHz maximum detection rate. The detector platform’s useful intrinsic energy resolution, 5-μm pixel size, ease of use, and obvious potential for parallelization make it a promising candidate for many applications at synchrotron facilities, in laser-heating plasma physics studies, and in laboratory-based x-ray spectrometry.

  5. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    Science.gov (United States)

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising processing. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially-adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
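
    The principle can be illustrated with a bare-bones sketch of PCA-domain shrinkage on vectorized patches; the paper's spatially-adaptive, CFA-aware scheme is considerably more refined than this.

      # Wiener-like shrinkage of patch coefficients in the PCA domain (sketch).
      import numpy as np

      def pca_denoise_patches(patches, sigma):
          """patches: (n, d) matrix of vectorized local windows; sigma: noise std."""
          mean = patches.mean(axis=0)
          X = patches - mean
          C = X.T @ X / len(X)                     # empirical covariance (d x d)
          w, V = np.linalg.eigh(C)                 # eigenvalues and principal axes
          Y = X @ V                                # PCA-domain coefficients
          # keep estimated signal energy, suppress the noise floor per component
          gain = np.maximum(w - sigma**2, 0) / np.maximum(w, 1e-12)
          return Y * gain @ V.T + mean             # denoised patches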

  6. Satellite markers: a simple method for ground truth car pose on stereo video

    Science.gov (United States)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-04-01

    Artificial prediction of the future location of other cars is a must in the context of advanced safety systems. The remote estimation of car pose, and particularly its heading angle, is key to predicting its future location. Stereo vision systems allow the retrieval of the 3D information of a scene. Ground truth in this specific context is associated with referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task involving the combination of different kinds of sensors. The novelty of this paper is a method to generate ground truth car pose from video data only. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system while it is moving, because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for the 3D ground truth generation. In our case study, we focus on accurate heading angle estimation of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, as well as the instantaneous spatial orientation of each camera at frame level.

  7. Railway clearance intrusion detection method with binocular stereo vision

    Science.gov (United States)

    Zhou, Xingfang; Guo, Baoqing; Wei, Wei

    2018-03-01

    In the stages of railway construction and operation, objects intruding into the railway clearance greatly threaten the safety of railway operation, so real-time intrusion detection is of great importance. To overcome the shortcomings of single-image methods, namely depth insensitivity and shadow interference, an intrusion detection method with binocular stereo vision is proposed that reconstructs the 3D scene for locating objects and judging clearance intrusion. The binocular cameras are calibrated with Zhang Zhengyou's method. In order to improve the 3D reconstruction speed, a suspicious region is first determined by a background-difference method on a single camera's image sequence. The image rectification, stereo matching and 3D reconstruction steps are only executed when a suspicious region exists. A transformation matrix from the Camera Coordinate System (CCS) to the Track Coordinate System (TCS) is computed using the gauge constant and used to transfer the 3D point clouds into the TCS; the 3D point clouds are then used to calculate object position and intrusion in the TCS. The experiments in a railway scene show that the position precision is better than 10 mm. It is an effective way to detect clearance intrusion and can satisfy the requirements of railway applications.
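
    The coordinate hand-off and clearance test can be sketched as follows; the transform and the envelope dimensions are placeholders, not the paper's calibration values.

      # CCS-to-TCS transfer of reconstructed points and a clearance test (sketch).
      import numpy as np

      def points_in_clearance(pts_ccs, T_ccs_to_tcs, half_width=1.75, height=5.0):
          """pts_ccs: (n, 3) reconstructed points; returns a boolean intrusion mask."""
          n = len(pts_ccs)
          homog = np.hstack([pts_ccs, np.ones((n, 1))])       # homogeneous coords
          pts_tcs = (T_ccs_to_tcs @ homog.T).T[:, :3]         # transfer into TCS
          x, y = pts_tcs[:, 0], pts_tcs[:, 1]                 # lateral offset, height
          return (np.abs(x) < half_width) & (0.0 < y) & (y < height)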

  8. Image synchronization for 3D application using the NanEye sensor

    Science.gov (United States)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Based on Awaiba's NanEye CMOS image sensor family and a FPGA platform with USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal form factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency being controlled directly through a PC based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of 3D stereo vision equipment smaller than 3 mm in diameter in medical endoscopic contexts, such as endoscopic surgical robots or minimally invasive surgery.

  9. Establishing imaging sensor specifications for digital still cameras

    Science.gov (United States)

    Kriss, Michael A.

    2007-02-01

    Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a Full Frame CCD, an Interline CCD, a CMOS sensor or the newer Foveon buried photodiode sensors. There is a strong tendency for consumers to consider only the number of megapixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper provides a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics, sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, prosumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.

  10. Stereo Vision Inside Tire

    Science.gov (United States)

    2015-08-21

    Els, P.S.; Becker, C.M. (University of Pretoria). Final report for contract W911NF-14-1-0590. The report covers the development of a stereo vision system that can be mounted inside a rolling tire, known as T2-CAM for Tire-Terrain CAMera. The T2-CAM system

  11. Stereo camera based virtual cane system with identifiable distance tactile feedback for the blind.

    Science.gov (United States)

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-06-13

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and then the 3D pointing direction of the finger is estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. For the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and the tactile distance feedback is perfectly identifiable to the blind.

  12. The KCLBOT: Exploiting RGB-D Sensor Inputs for Navigation Environment Building and Mobile Robot Localization

    Directory of Open Access Journals (Sweden)

    Evangelos Georgiou

    2011-09-01

    Full Text Available This paper presents an alternative approach to implementing a stereo camera configuration for SLAM. The suggested approach implements a simplified method using a single RGB-D camera sensor mounted on a maneuverable non-holonomic mobile robot, the KCLBOT, to extract image feature depth information while maneuvering. Using a defined quadratic equation, based on the calibration of the camera, a depth computation model is derived based on the HSV color space map. Using this methodology it is possible to build navigation environment maps and carry out autonomous mobile robot path following and obstacle avoidance. This paper presents a calculation model which enables distance estimation using the RGB-D sensor from a Microsoft .NET Micro Framework device. Experimental results are presented to validate the distance estimation methodology.
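
    Fitting such a quadratic depth model can be sketched in a few lines; the raw readings and ground-truth distances below are invented purely for illustration.

      # Quadratic calibration from raw sensor readings to metric depth (sketch).
      import numpy as np

      raw = np.array([420, 510, 633, 741, 858])      # hypothetical raw sensor values
      dist_m = np.array([0.8, 1.2, 1.8, 2.4, 3.2])   # measured reference distances

      coeffs = np.polyfit(raw, dist_m, deg=2)        # degree-2 calibration curve

      def depth_from_raw(r):
          return np.polyval(coeffs, r)               # metric depth estimate

      print(depth_from_raw(700))                     # depth for a raw reading of 700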

  13. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

    A novel visual navigation method based on grating projection stereo vision for mobile robots in dark environments is proposed. This method combines the grating projection profilometry of plane structured light with stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping) and visual odometry for mobile robot navigation in dark environments, without the image matching of stereo vision technology and without the phase unwrapping of grating projection profilometry. First, we study the new vision sensor theoretically and build a geometric and mathematical model of the grating projection stereo vision system. Second, the computational method for the 3D coordinates of space obstacles in the robot's visual field is studied, so that the obstacles in the field are located accurately. The results of simulation experiments and analysis show that this research helps address the current autonomous navigation problem of mobile robots in dark environments, and provides a theoretical basis and exploration direction for further study on the navigation of space-exploring robots in dark and GPS-denied environments.

  14. Stereo and IMU-Assisted Visual Odometry for Small Robots

    Science.gov (United States)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320×240) resolution, or 8 fps at VGA (Video Graphics Array, 640×480) resolution, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.

  15. Using Stereo Vision to Support the Automated Analysis of Surveillance Videos

    Science.gov (United States)

    Menze, M.; Muhle, D.

    2012-07-01

    Video surveillance systems are no longer a collection of independent cameras, manually controlled by human operators. Instead, smart sensor networks are developed, able to fulfil certain tasks on their own and thus supporting security personnel by automated analyses. One well-known task is the derivation of people's positions on a given ground plane from monocular video footage. An improved accuracy for the ground position as well as a more detailed representation of single salient people can be expected from a stereoscopic processing of overlapping views. Related work mostly relies on dedicated stereo devices or camera pairs with a small baseline. While this set-up is helpful for the essential step of image matching, the high accuracy potential of a wide baseline and the according good intersection geometry is not utilised. In this paper we present a stereoscopic approach, working on overlapping views of standard pan-tilt-zoom cameras which can easily be generated for arbitrary points of interest by an appropriate reconfiguration of parts of a sensor network. Experiments are conducted on realistic surveillance footage to show the potential of the suggested approach and to investigate the influence of different baselines on the quality of the derived surface model. Promising estimations of people's position and height are retrieved. Although standard matching approaches show helpful results, future work will incorporate temporal dependencies available from image sequences in order to reduce computational effort and improve the derived level of detail.

  16. USING STEREO VISION TO SUPPORT THE AUTOMATED ANALYSIS OF SURVEILLANCE VIDEOS

    Directory of Open Access Journals (Sweden)

    M. Menze

    2012-07-01

    Full Text Available Video surveillance systems are no longer a collection of independent cameras, manually controlled by human operators. Instead, smart sensor networks are developed, able to fulfil certain tasks on their own and thus supporting security personnel by automated analyses. One well-known task is the derivation of people’s positions on a given ground plane from monocular video footage. An improved accuracy for the ground position as well as a more detailed representation of single salient people can be expected from a stereoscopic processing of overlapping views. Related work mostly relies on dedicated stereo devices or camera pairs with a small baseline. While this set-up is helpful for the essential step of image matching, the high accuracy potential of a wide baseline and the according good intersection geometry is not utilised. In this paper we present a stereoscopic approach, working on overlapping views of standard pan-tilt-zoom cameras which can easily be generated for arbitrary points of interest by an appropriate reconfiguration of parts of a sensor network. Experiments are conducted on realistic surveillance footage to show the potential of the suggested approach and to investigate the influence of different baselines on the quality of the derived surface model. Promising estimations of people’s position and height are retrieved. Although standard matching approaches show helpful results, future work will incorporate temporal dependencies available from image sequences in order to reduce computational effort and improve the derived level of detail.

  17. An autonomous sensor module based on a legacy CCTV camera

    Science.gov (United States)

    Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.

    2016-10-01

    A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. The paper reports on the development of a SAPIENT-compliant sensor module using a legacy Closed-Circuit Television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with zoom level automatically optimized for human detection at the appropriate range. Open source algorithms (using OpenCV) are used to automatically detect pedestrians; their real world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation a "follow" mode is implemented where the camera maintains the detected person within the camera field-of-view without requiring an end-user to directly control the camera with a joystick.

  18. Single photon detection and localization accuracy with an ebCMOS camera

    Energy Technology Data Exchange (ETDEWEB)

    Cajgfinger, T. [CNRS/IN2P3, Institut de Physique Nucléaire de Lyon, Villeurbanne F-69622 (France); Dominjon, A., E-mail: agnes.dominjon@nao.ac.jp [Université de Lyon, Université de Lyon 1, Lyon 69003 France. (France); Barbier, R. [CNRS/IN2P3, Institut de Physique Nucléaire de Lyon, Villeurbanne F-69622 (France); Université de Lyon, Université de Lyon 1, Lyon 69003 France. (France)

    2015-07-01

    CMOS sensor technologies evolve very fast and today offer very promising solutions to existing issues faced by imaging camera systems. CMOS sensors are very attractive for fast and sensitive imaging thanks to their low pixel noise (1e-) and their possibility of backside illumination. The ebCMOS group of IPNL has produced a camera system dedicated to Low Light Level detection, based on a 640 kPixel ebCMOS with its acquisition system. After reviewing the detection principle of an ebCMOS and the characteristics of our prototype, we compare our camera to other imaging systems. We compare the identification efficiency and the localization accuracy of a point source by four different photo-detection devices: the scientific CMOS (sCMOS), the Charge Coupled Device (CCD), the Electron Multiplying CCD (emCCD) and the Electron Bombarded CMOS (ebCMOS). Our ebCMOS camera is able to identify a single photon source in less than 10 ms with a localization accuracy better than 1 µm. We also report efficiency measurements and the false positive identification of the ebCMOS camera by identifying several hundred single photon sources in parallel. About 700 spots are identified with a detection efficiency higher than 90% and a false positive percentage lower than 5%. With these measurements, we show that our target tracking algorithm can be implemented in real time at 500 frames per second under a photon flux of the order of 8000 photons per frame. These results demonstrate that the ebCMOS camera concept, with its single photon detection and target tracking algorithm, is one of the best devices for low light and fast applications such as bioluminescence imaging, quantum dot tracking or adaptive optics.

  19. The High Resolution Stereo Camera (HRSC) of Mars Express and its approach to science analysis and mapping for Mars and its satellites

    Science.gov (United States)

    Gwinner, K.; Jaumann, R.; Hauber, E.; Hoffmann, H.; Heipke, C.; Oberst, J.; Neukum, G.; Ansan, V.; Bostelmann, J.; Dumke, A.; Elgner, S.; Erkeling, G.; Fueten, F.; Hiesinger, H.; Hoekzema, N. M.; Kersten, E.; Loizeau, D.; Matz, K.-D.; McGuire, P. C.; Mertens, V.; Michael, G.; Pasewaldt, A.; Pinet, P.; Preusker, F.; Reiss, D.; Roatsch, T.; Schmidt, R.; Scholten, F.; Spiegel, M.; Stesky, R.; Tirsch, D.; van Gasselt, S.; Walter, S.; Wählisch, M.; Willner, K.

    2016-07-01

    The High Resolution Stereo Camera (HRSC) of ESA's Mars Express is designed to map and investigate the topography of Mars. The camera, in particular its Super Resolution Channel (SRC), also obtains images of Phobos and Deimos on a regular basis. As HRSC is a push broom scanning instrument with nine CCD line detectors mounted in parallel, its unique feature is the ability to obtain along-track stereo images and four colors during a single orbital pass. The sub-pixel accuracy of 3D points derived from stereo analysis allows producing DTMs with grid size of up to 50 m and height accuracy on the order of one image ground pixel and better, as well as corresponding orthoimages. Such data products have been produced systematically for approximately 40% of the surface of Mars so far, while global shape models and a near-global orthoimage mosaic could be produced for Phobos. HRSC is also unique because it bridges between laser altimetry and topography data derived from other stereo imaging instruments, and provides geodetic reference data and geological context to a variety of non-stereo datasets. This paper, in addition to an overview of the status and evolution of the experiment, provides a review of relevant methods applied for 3D reconstruction and mapping, and respective achievements. We will also review the methodology of specific approaches to science analysis based on joint analysis of DTM and orthoimage information, or benefitting from high accuracy of co-registration between multiple datasets, such as studies using multi-temporal or multi-angular observations, from the fields of geomorphology, structural geology, compositional mapping, and atmospheric science. Related exemplary results from analysis of HRSC data will be discussed. After 10 years of operation, HRSC covered about 70% of the surface by panchromatic images at 10-20 m/pixel, and about 97% at better than 100 m/pixel. As the areas with contiguous coverage by stereo data are increasingly abundant, we also

  20. A multi-modal stereo microscope based on a spatial light modulator.

    Science.gov (United States)

    Lee, M P; Gibson, G M; Bowman, R; Bernet, S; Ritsch-Marte, M; Phillips, D B; Padgett, M J

    2013-07-15

    Spatial Light Modulators (SLMs) can emulate the classic microscopy techniques, including differential interference (DIC) contrast and (spiral) phase contrast. Their programmability entails the benefit of flexibility or the option to multiplex images, for single-shot quantitative imaging or for simultaneous multi-plane imaging (depth-of-field multiplexing). We report the development of a microscope sharing many of the previously demonstrated capabilities, within a holographic implementation of a stereo microscope. Furthermore, we use the SLM to combine stereo microscopy with a refocusing filter and with a darkfield filter. The instrument is built around a custom inverted microscope and equipped with an SLM which gives various imaging modes laterally displaced on the same camera chip. In addition, there is a wide angle camera for visualisation of a larger region of the sample.

  1. INTEGRATED GEOREFERENCING OF STEREO IMAGE SEQUENCES CAPTURED WITH A STEREOVISION MOBILE MAPPING SYSTEM – APPROACHES AND PRACTICAL RESULTS

    Directory of Open Access Journals (Sweden)

    H. Eugster

    2012-07-01

    Full Text Available Stereovision based mobile mapping systems enable the efficient capturing of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations – in our case of the imaging sensors – normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient, or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost efficient workflows are discussed which allow validating and updating the INS/GNSS based trajectory with independently estimated positions in cases of prolonged GNSS signal outages, in order to increase the georeferencing accuracy up to the project requirements.

  2. Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind

    Directory of Open Access Journals (Sweden)

    Donghun Kim

    2014-06-01

    Full Text Available In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and then the 3D pointing direction of the finger is estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. For the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and the tactile distance feedback is perfectly identifiable to the blind.

  3. Stereo Vision-Based High Dynamic Range Imaging Using Differently-Exposed Image Pair

    Directory of Open Access Journals (Sweden)

    Won-Jae Park

    2017-06-01

    Full Text Available In this paper, a high dynamic range (HDR) imaging method based on a stereo vision system is presented. The proposed method uses differently exposed low dynamic range (LDR) images captured from a stereo camera. The stereo LDR images are first converted to initial stereo HDR images using the inverse camera response function estimated from the LDR images. However, due to the limited dynamic range of the stereo LDR camera, radiance values in under/over-exposed regions of the initial main-view (MV) HDR image can be lost. To restore these radiance values, the proposed stereo matching and hole-filling algorithms are applied to the stereo HDR images. Specifically, the auxiliary-view (AV) HDR image is warped using the estimated disparity between the initial stereo HDR images, and then effective hole-filling is applied to the warped AV HDR image. To reconstruct the final MV HDR image, the warped and hole-filled AV HDR image is fused with the initial MV HDR image using a weight map. The experimental results demonstrate objectively and subjectively that the proposed stereo HDR imaging method provides better performance than the conventional method.
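
    The fusion step can be sketched compactly with a plausible hat-shaped weight map (our own choice of weighting; the paper's weight map may differ): the AV image is favored exactly where the MV exposure was unreliable, near the black and white ends of the LDR range.

      # Weight-map fusion of the MV HDR image with the warped AV HDR image (sketch).
      import numpy as np

      def fuse_hdr(mv_hdr, av_hdr, mv_ldr):
          """All inputs are (h, w) float arrays; mv_ldr is normalized to [0, 1]."""
          # hat-shaped confidence: high for mid-tones, low when under/over-exposed
          conf = 1.0 - np.abs(2.0 * mv_ldr - 1.0) ** 4
          return conf * mv_hdr + (1.0 - conf) * av_hdr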

  4. Digital stereoscopic photography using StereoData Maker

    Science.gov (United States)

    Toeppen, John; Sykes, David

    2009-02-01

    Stereoscopic digital photography has become much more practical with the use of USB wired connections between a pair of Canon cameras using StereoData Maker software for precise synchronization. StereoPhoto Maker software is now used to automatically combine and align right and left image files to produce a stereo pair. Side by side images are saved as pairs and may be viewed using software that converts the images into the preferred viewing format at the time of display. Stereo images may be shared on the internet, displayed on computer monitors, autostereo displays, viewed on high definition 3D TVs, or projected for a group. Stereo photographers are now free to control composition using point and shoot settings, or are able to control shutter speed, aperture, focus, ISO, and zoom. The quality of the output depends on the developed skills of the photographer as well as their understanding of the software, human vision and the geometry they choose for their cameras and subjects. Observers of digital stereo images can zoom in for greater detail and scroll across large panoramic fields with a few keystrokes. The art, science, and methods of taking, creating and viewing digital stereo photos are presented in a historic and developmental context in this paper.

  5. The Bubble Box: Towards an Automated Visual Sensor for 3D Analysis and Characterization of Marine Gas Release Sites

    Directory of Open Access Journals (Sweden)

    Anne Jordt

    2015-12-01

    Full Text Available Several acoustic and optical techniques have been used for characterizing natural and anthropogenic gas leaks (carbon dioxide, methane) from the ocean floor. Here, single-camera based methods for bubble stream observation have become an important tool, as they help estimate flux and bubble sizes under certain assumptions. However, they record only a projection of a bubble into the camera and therefore cannot capture the full 3D shape, which is particularly important for larger, non-spherical bubbles. The unknown distance of the bubble to the camera (making it appear larger or smaller than expected) as well as refraction at the camera interface introduce extra uncertainties. In this article, we introduce our wide baseline stereo-camera deep-sea sensor bubble box that overcomes these limitations, as it observes bubbles from two orthogonal directions using calibrated cameras. Besides the setup and the hardware of the system, we discuss appropriate calibration and the different automated processing steps (deblurring, detection, tracking, and 3D fitting) that are crucial to arrive at a 3D ellipsoidal shape and rise speed for each bubble. The obtained values for single bubbles can be aggregated into statistical bubble size distributions or fluxes for extrapolation based on diffusion and dissolution models and large scale acoustic surveys. We demonstrate and evaluate the wide baseline stereo measurement model using a controlled test setup with ground truth information.

  6. Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras

    Directory of Open Access Journals (Sweden)

    Hector Santoyo-Garcia

    2017-01-01

    Full Text Available In this paper we propose a visible watermarking algorithm, in which a visible watermark is embedded into the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw image representation for images captured by the single sensor digital cameras equipped in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored in the storage system. This method then enforces the rightful ownership of the watermarked image, since there is no version of the image other than the watermarked one. We also take into consideration the Human Visual System (HVS), so that the proposed technique provides the desired characteristics of a visible watermarking scheme, such that the embedded watermark is sufficiently perceptible and at the same time not obtrusive in colour and grey-scale images. Unlike other Bayer CFA domain visible watermarking algorithms, in which only binary watermark patterns are supported, the proposed watermarking algorithm allows grey-scale and colour images as watermark patterns. It is suitable for advertisement purposes, such as digital libraries and e-commerce, besides copyright protection.

  7. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellut, Paolo; Sherwin, Gary

    2011-01-01

    TIR cameras can be used for day/night Unmanned Ground Vehicle (UGV) autonomous navigation when stealth is required. The quality of uncooled TIR cameras has significantly improved over the last decade, making them a viable option at low speed. Limiting factors for stereo ranging with uncooled LWIR cameras are image blur and low-texture scenes. The TIR perception capabilities JPL has explored include: (1) single- and dual-band TIR terrain classification; (2) obstacle detection (pedestrians, vehicles, tree trunks, ditches, and water); and (3) perception through obscurants.

  8. A time-resolved image sensor for tubeless streak cameras

    Science.gov (United States)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, the device requires a high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel was designed and implemented using 0.11 um CMOS image sensor technology. The image array has 30 (vertical) x 128 (memory length) pixels with a pixel pitch of 22.4 um.

  9. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more of the nature of plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the standpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is re-described based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the difference between imaging analyses based on geometric optics and physical optics is also shown in the simulations.

  10. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Pablo Ramon Soria

    2017-01-01

    Full Text Available The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work focuses on the grasping of known objects based on feature models. The system runs on an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, which is then used online for detection of the targeted object and estimation of its position. This feature-based model proved robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was built using a rotary-wing UAV and a small manipulator as a final proof of concept. The robotic arm has three degrees of freedom and is lightweight due to the payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.
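
    The offline/online feature-model pipeline described above can be sketched with standard tools; the sketch below uses ORB features and a ratio test in OpenCV, with hypothetical image filenames, and stands in for whichever descriptor the authors actually used:

        import cv2

        orb = cv2.ORB_create(nfeatures=1000)

        # Offline stage: build the feature model from a reference image.
        model_img = cv2.imread("target_object.png", cv2.IMREAD_GRAYSCALE)
        kp_m, des_m = orb.detectAndCompute(model_img, None)

        # Online stage: match the model against the current frame; Lowe's ratio
        # test discards ambiguous matches (the paper additionally filters with
        # stereo 3D information before estimating the object position).
        frame = cv2.imread("left_frame.png", cv2.IMREAD_GRAYSCALE)
        kp_f, des_f = orb.detectAndCompute(frame, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        good = [m for m, n in matcher.knnMatch(des_m, des_f, k=2)
                if m.distance < 0.75 * n.distance]
        print(len(good), "model features found in the frame")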

  11. Development of Single Optical Sensor Method for the Measurement Droplet Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Ho; Ahn, Tae Hwan; Yun, Byong Jo [Pusan National University, Busan (Korea, Republic of); Bae, Byoung Uhn; Kim, Kyoung Doo [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In this study, we developed a single-tip optical fiber probe (S-TOP) sensor method to measure droplet parameters such as diameter, droplet fraction, and droplet velocity. To calibrate and validate the optical fiber sensor for those parameters, we conducted visualization experiments using a high-speed camera together with the optical sensor. To evaluate the performance of the S-TOP accurately, we repeated calibration experiments at a given droplet flow condition. Figure 3 shows the result of the calibration. In this graph, the x axis is the droplet velocity measured by visualization and the y axis is grd, D, which is obtained from the S-TOP. In summary, we have developed the single-tip optical probe sensor to measure droplet parameters. From the calibration experiments with the high-speed camera, we obtained the calibration curve for the droplet velocity. Additionally, the chord length distribution of droplets is measured by the optical probe.

  12. Development of Single Optical Sensor Method for the Measurement Droplet Parameters

    International Nuclear Information System (INIS)

    Kim, Tae Ho; Ahn, Tae Hwan; Yun, Byong Jo; Bae, Byoung Uhn; Kim, Kyoung Doo

    2016-01-01

    In this study, we developed a single-tip optical fiber probe (S-TOP) sensor method to measure droplet parameters such as diameter, droplet fraction, and droplet velocity. To calibrate and validate the optical fiber sensor for those parameters, we conducted visualization experiments using a high-speed camera together with the optical sensor. To evaluate the performance of the S-TOP accurately, we repeated calibration experiments at a given droplet flow condition. Figure 3 shows the result of the calibration. In this graph, the x axis is the droplet velocity measured by visualization and the y axis is grd, D, which is obtained from the S-TOP. In summary, we have developed the single-tip optical probe sensor to measure droplet parameters. From the calibration experiments with the high-speed camera, we obtained the calibration curve for the droplet velocity. Additionally, the chord length distribution of droplets is measured by the optical probe.

  13. Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.

    Science.gov (United States)

    Rumei Zhang; Hao Liu; Jianda Han

    2017-07-01

    Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape, with cumulative integration error. The proposed fusion is intended to compensate for their shortcomings and improve the tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphological operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into the distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy, estimated by averaging the absolute positioning errors between shape sensing and stereo vision, is 0.67±0.65 mm, 0.41±0.25 mm, and 0.72±0.43 mm for x, y, and z, respectively. Results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.

  14. CMOS Image Sensors: Electronic Camera On A Chip

    Science.gov (United States)

    Fossum, E. R.

    1995-01-01

    Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On-chip analog-to-digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low-cost applications.

  15. Stereo-panoramic Data

    KAUST Repository

    Cutchin, Steve

    2013-01-01

    Systems and methods for automatically generating three-dimensional panoramic images for use in various virtual reality settings are disclosed. One embodiment of the system includes a stereo camera capture device (SCD), a programmable camera controller (PCC) that rotates, orients, and controls the SCD, a robotic maneuvering platform (RMP), and a path and adaptation controller (PAC). In that embodiment, the PAC determines the movement of the system based on an original desired path and input gathered from the SCD during an image capture process.

  16. Stereo-panoramic Data

    KAUST Repository

    Cutchin, Steve

    2013-03-07

    Systems and methods for automatically generating three-dimensional panoramic images for use in various virtual reality settings are disclosed. One embodiment of the system includes a stereo camera capture device (SCD), a programmable camera controller (PCC) that rotates, orients, and controls the SCD, a robotic maneuvering platform (RMP), and a path and adaptation controller (PAC). In that embodiment, the PAC determines the movement of the system based on an original desired path and input gathered from the SCD during an image capture process.

  17. A novel single-step procedure for the calibration of the mounting parameters of a multi-camera terrestrial mobile mapping system

    Science.gov (United States)

    Habib, A.; Kersting, P.; Bang, K.; Rau, J.

    2011-12-01

    Mobile Mapping Systems (MMS) can be defined as moving platforms which integrate a set of imaging sensors and a position and orientation system (POS) for the collection of geo-spatial information. In order to fully exploit the potential accuracy of such systems and guarantee accurate multi-sensor integration, a careful system calibration must be carried out. System calibration involves individual sensor calibration as well as the estimation of the inter-sensor geometric relationship. This paper tackles a specific component of the system calibration process of a multi-camera MMS - the estimation of the relative orientation parameters among the cameras, i.e., the inter-camera geometric relationship (lever-arm offsets and boresight angles among the cameras). For that purpose, a novel single-step procedure, which is easy to implement and not computationally intensive, is introduced. The proposed method is implemented in such a way that it can also be used for the estimation of the mounting parameters between the cameras and the IMU body frame, in the case of directly georeferenced systems. The performance of the proposed method is evaluated through experimental results using simulated data. A comparative analysis between the proposed single-step procedure and the two-step procedure, which makes use of traditional bundle adjustment, is also presented.

  18. Distance Measurement Based on Stereo Vision (Pengukuran Jarak Berbasiskan Stereo Vision)

    Directory of Open Access Journals (Sweden)

    Iman Herwidiana Kartowisastro

    2010-12-01

    Full Text Available Measuring the distance to an object can be done in a variety of ways, including the use of distance-measuring sensors such as ultrasonic sensors, or a vision-based approach. The latter has the advantage of flexibility: the monitored object has essentially no restrictions on its material characteristics; at the same time, this approach has its own difficulties associated with object orientation and the state of the room where the object is located. To address this problem, this study examines the possibility of using stereo vision to measure the distance to an object. The system was developed in stages: image extraction, extraction of the characteristic features of the objects contained in the image, and the visual distance measurement process, with two separate cameras placed 70 cm apart. The measured object can be in the range of 50 cm - 130 cm with a percentage error of 5.53%. Lighting conditions (homogeneity and intensity) have a great influence on the accuracy of the measurement results.
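
    For a rectified pair such as this 70 cm rig, the distance follows from the standard triangulation relation Z = fB/d; a minimal sketch, where the focal length in pixels is an assumed placeholder and only the baseline comes from the record:

        def depth_from_disparity(disparity_px, focal_px, baseline_m=0.70):
            """Rectified-pair triangulation: Z = f * B / d.

            disparity_px -- horizontal pixel shift of the object between views
            focal_px     -- focal length in pixels (camera-specific placeholder)
            baseline_m   -- camera separation; 0.70 m as in this record's setup
            """
            if disparity_px <= 0:
                raise ValueError("disparity must be positive")
            return focal_px * baseline_m / disparity_px

        # With an assumed 600 px focal length, 420 px of disparity -> 1.0 m:
        print(depth_from_disparity(420.0, 600.0))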

  19. Person detection, tracking and following using stereo camera

    Science.gov (United States)

    Wang, Xiaofeng; Zhang, Lilian; Wang, Duo; Hu, Xiaoping

    2018-04-01

    Person detection, tracking and following is a key enabling technology for mobile robots in many human-robot interaction applications. In this article, we present a system composed of visual human detection, video tracking and following. The detection is based on YOLO (You Only Look Once), which applies a single convolutional neural network (CNN) to the full image and can thus predict bounding boxes and class probabilities directly in one evaluation. The bounding box then provides the initial person position in the image to initialize and train the KCF (Kernelized Correlation Filter), a video tracker based on a discriminative classifier. Finally, a stereo 3D sparse reconstruction algorithm not only determines the position of the person in the scene but also elegantly solves the problem of scale ambiguity in the video tracker. Extensive experiments are conducted to demonstrate the effectiveness and robustness of our human detection and tracking system.
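
    The detect-then-track handoff can be sketched with OpenCV's KCF implementation (opencv-contrib); the detector below is a stand-in stub for the YOLO stage and the video filename is hypothetical:

        import cv2

        def detect_person(frame):
            # Stand-in for the YOLO stage: a real system would run one CNN
            # forward pass and return the best person box as (x, y, w, h).
            h, w = frame.shape[:2]
            return (w // 3, h // 4, w // 4, h // 2)

        cap = cv2.VideoCapture("walk.mp4")          # hypothetical input clip
        ok, frame = cap.read()
        tracker = cv2.TrackerKCF_create()           # needs opencv-contrib
        tracker.init(frame, detect_person(frame))   # detection bootstraps tracking

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            found, box = tracker.update(frame)      # KCF correlation-filter update
            if not found:                           # lost -> re-detect and re-init
                tracker = cv2.TrackerKCF_create()
                tracker.init(frame, detect_person(frame))
        cap.release()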

  20. Meteor Film Recording with Digital Film Cameras with large CMOS Sensors

    Science.gov (United States)

    Slansky, P. C.

    2016-12-01

    In this article the author combines his professional know-how about cameras for film and television production with his amateur astronomy activities. Professional digital film cameras with high sensitivity are still quite rare in astronomy. One reason for this may be their costs of 20 000 EUR and more (camera body only). In the interim, however, consumer photo cameras with a film mode and very high sensitivity have come to the market for about 2 000 EUR. In addition, ultra-high-sensitivity professional film cameras, which are very interesting for meteor observation, have been introduced to the market. The particular benefits of digital film cameras with large CMOS sensors, including photo cameras with a film recording function, for meteor recording are presented through three examples: a 2014 Camelopardalid, shot with a Canon EOS C 300; an exploding 2014 Aurigid, shot with a Sony alpha7S; and the 2016 Perseids, shot with a Canon ME20F-SH. All three cameras use large CMOS sensors; "large" meaning Super-35 mm, the classic 35 mm film format (24x13.5 mm, similar to APS-C size), or full format (36x24 mm), the classic 135 photo camera format. Comparisons are made to the widely used cameras with small CCD sensors, such as Mintron or Watec; "small" meaning 1/2" (6.4x4.8 mm) or less. Additionally, special photographic image processing of meteor film recordings is discussed.

  1. Virtual-stereo fringe reflection technique for specular free-form surface testing

    Science.gov (United States)

    Ma, Suodong; Li, Bo

    2016-11-01

    Due to their excellent ability to improve the performance of optical systems, free-form optics have attracted extensive interest in many fields, e.g. the optical design of astronomical telescopes, laser beam expanders, spectral imagers, etc. However, compared with traditional simple surfaces, testing such optics is usually more complex and difficult, which has been a major barrier to the manufacture and application of these optics. Fortunately, owing to the rapid development of electronic devices and computer vision technology, the fringe reflection technique (FRT), with its advantages of simple system structure, high measurement accuracy and large dynamic range, is becoming a powerful tool for specular free-form surface testing. In order to obtain absolute surface shape distributions of test objects, two or more cameras are often required in the conventional FRT, which makes the system structure more complex and the measurement cost much higher. Furthermore, high-precision synchronization between the cameras is also a troublesome issue. To overcome these drawbacks, a virtual-stereo FRT for specular free-form surface testing is put forward in this paper. It achieves absolute profiles with the help of only a single biprism and one camera, while avoiding the problems of stereo FRT based on binocular or multi-ocular cameras. Preliminary experimental results demonstrate the feasibility of the proposed technique.

  2. Opportunity's Surroundings on Sol 1798 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11850
    [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11850

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo 180-degree view of the rover's surroundings during the 1,798th Martian day, or sol, of Opportunity's surface mission (Feb. 13, 2009). North is on top. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 111 meters (364 feet) southward on the preceding sol. Tracks from that drive recede northward in this view. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  3. BRDF invariant stereo using light transport constancy.

    Science.gov (United States)

    Wang, Liang; Yang, Ruigang; Davis, James E

    2007-09-01

    Nearly all existing methods for stereo reconstruction assume that scene reflectance is Lambertian and make use of brightness constancy as a matching invariant. We introduce a new invariant for stereo reconstruction called light transport constancy (LTC), which allows completely arbitrary scene reflectance (bidirectional reflectance distribution functions (BRDFs)). This invariant can be used to formulate a rank constraint on multiview stereo matching when the scene is observed by several lighting configurations in which only the lighting intensity varies. In addition, we show that this multiview constraint can be used with as few as two cameras and two lighting configurations. Unlike previous methods for BRDF invariant stereo, LTC does not require precisely configured or calibrated light sources or calibration objects in the scene. Importantly, the new constraint can be used to provide BRDF invariance to any existing stereo method whenever appropriate lighting variation is available.

  4. Motion camera based on a custom vision sensor and an FPGA architecture

    Science.gov (United States)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC, which is used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing such as spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.

  5. GPU-based real-time trinocular stereo vision

    Science.gov (United States)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which could benefit applications such as distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse the disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
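
    The fusion step reduces to a per-pixel winner-take-all over the combined cost volumes of the two stereo pairs; a minimal CPU sketch of that idea (the paper's GPU kernels are not reproduced here), assuming both volumes cover the same disparity hypotheses:

        import numpy as np

        def fuse_wta(cost_h, cost_v):
            """Winner-take-all fusion for a trinocular rig: cost_h and cost_v are
            (D, H, W) matching-cost volumes from the horizontal and vertical
            camera pairs, computed over the same D disparity hypotheses."""
            return np.argmin(cost_h + cost_v, axis=0)   # per-pixel winner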

  6. Family Of Calibrated Stereometric Cameras For Direct Intraoral Use

    Science.gov (United States)

    Curry, Sean; Moffitt, Francis; Symes, Douglas; Baumrind, Sheldon

    1983-07-01

    In order to study empirically the relative efficiencies of different types of orthodontic appliances in repositioning teeth in vivo, we have designed and constructed a pair of fixed-focus, normal case, fully-calibrated stereometric cameras. One is used to obtain stereo photography of single teeth, at a scale of approximately 2:1, and the other is designed for stereo imaging of the entire dentition, study casts, facial structures, and other related objects at a scale of approximately 1:8. Twin lenses simultaneously expose adjacent frames on a single roll of 70 mm film. Physical flatness of the film is ensured by the use of a spring-loaded metal pressure plate. The film is forced against a 3/16" optical glass plate upon which is etched an array of 16 fiducial marks which divide the film format into 9 rectangular regions. Using this approach, it has been possible to produce photographs which are undistorted for qualitative viewing and from which quantitative data can be acquired by direct digitization of conventional photographic enlargements. We are in the process of designing additional members of this family of cameras. All calibration and data acquisition and analysis techniques previously developed will be directly applicable to these new cameras.

  7. Experiments on mobile robot stereo vision system calibration under hardware imperfection

    Directory of Open Access Journals (Sweden)

    Safin Ramil

    2018-01-01

    Full Text Available Calibration is essential for any robot vision system to achieve high accuracy in deriving objects' metric information. One typical requirement for a stereo vision system to obtain better calibration results is to guarantee that both cameras keep the same vertical level. However, cameras may be displaced due to the severe conditions of robot operation or other circumstances. This paper presents our experimental approach to the problem of mobile robot stereo vision system calibration under a hardware imperfection. In our experiments, we used the crawler-type mobile robot «Servosila Engineer». The stereo system cameras of the robot were displaced relative to each other, causing a loss of surrounding environment information. We implemented and verified checkerboard- and circle-grid-based calibration methods. A comparison of the two methods demonstrated that circle-grid-based calibration should be preferred over the classical checkerboard calibration approach.
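
    A checkerboard calibration of such a stereo pair can be sketched with OpenCV; the board geometry and filenames below are assumptions, and the recovered translation T directly exposes any vertical displacement between the cameras:

        import cv2
        import numpy as np

        PATTERN, SQUARE = (9, 6), 0.025            # board geometry (assumed)
        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

        obj_pts, pts_l, pts_r = [], [], []
        for i in range(15):                        # hypothetical synchronized pairs
            gl = cv2.imread(f"left_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
            gr = cv2.imread(f"right_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
            ok_l, c_l = cv2.findChessboardCorners(gl, PATTERN)
            ok_r, c_r = cv2.findChessboardCorners(gr, PATTERN)
            if ok_l and ok_r:
                obj_pts.append(objp); pts_l.append(c_l); pts_r.append(c_r)

        # Intrinsics per camera, then the relative pose R, T between the heads.
        _, K_l, d_l, _, _ = cv2.calibrateCamera(obj_pts, pts_l, gl.shape[::-1], None, None)
        _, K_r, d_r, _, _ = cv2.calibrateCamera(obj_pts, pts_r, gr.shape[::-1], None, None)
        rms, K_l, d_l, K_r, d_r, R, T, E, F = cv2.stereoCalibrate(
            obj_pts, pts_l, pts_r, K_l, d_l, K_r, d_r, gl.shape[::-1],
            flags=cv2.CALIB_FIX_INTRINSIC)
        # A nonzero vertical component of T quantifies the cameras' displacement.
        print("stereo RMS (px):", rms, " T (m):", T.ravel())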

  8. Adaptive Multi-Sensor Perception for Driving Automation in Outdoor Contexts

    Directory of Open Access Journals (Sweden)

    Annalisa Milella

    2014-08-01

    Full Text Available In this research, adaptive perception for driving automation is discussed so as to enable a vehicle to automatically detect driveable areas and obstacles in the scene. It is especially designed for outdoor contexts where conventional perception systems that rely on a priori knowledge of the terrain's geometric properties, appearance properties, or both are prone to fail, due to the variability in the terrain properties and environmental conditions. In contrast, the proposed framework uses a self-learning approach to build a model of the ground class that is continuously adjusted online to reflect the latest ground appearance. The system also features high flexibility, as it can work using a single sensor modality or a multi-sensor combination. In the context of this research, different embodiments have been demonstrated using range data coming from either a radar or a stereo camera, and adopting self-supervised strategies where monocular vision is automatically trained by radar or stereo vision. A comprehensive set of experimental results, obtained with different ground vehicles operating in the field, is presented to validate and assess the performance of the system.

  9. Joint depth map and color consistency estimation for stereo images with different illuminations and cameras.

    Science.gov (United States)

    Heo, Yong Seok; Lee, Kyoung Mu; Lee, Sang Uk

    2013-05-01

    In this paper, we propose a method that infers both accurate depth maps and color-consistent stereo images for radiometrically varying stereo images. In general, stereo matching and enforcing color consistency between stereo images are a chicken-and-egg problem, since it is not a trivial task to achieve both goals simultaneously. Hence, we have developed an iterative framework in which these two processes can boost each other. First, we transform the input color images to a log-chromaticity color space, in which a linear relationship can be established while constructing a joint pdf of the transformed left and right color images. From this joint pdf, we can estimate a linear function that relates the corresponding pixels in the stereo images. Based on this linear property, we present a new stereo matching cost combining Mutual Information (MI), the SIFT descriptor, and segment-based plane-fitting to robustly find correspondences for stereo image pairs which undergo radiometric variations. Meanwhile, we devise a Stereo Color Histogram Equalization (SCHE) method to produce color-consistent stereo image pairs, which conversely boosts the disparity map estimation. Experimental results show that our method produces both accurate depth maps and color-consistent stereo images, even for stereo images with severe radiometric differences.
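
    The log-chromaticity transform at the heart of the method can be sketched as follows; this is one common definition (normalizing by the geometric mean of the channels), and the paper's exact normalization may differ:

        import numpy as np

        def log_chromaticity(rgb, eps=1e-6):
            """Map an HxWx3 RGB image to log-chromaticity coordinates by
            normalizing each pixel by the geometric mean of its channels.
            The log turns multiplicative radiometric differences between the
            two cameras into affine ones, which is what lets a linear function
            be fitted to the joint pdf of the transformed pair."""
            rgb = rgb.astype(np.float64) + eps
            gm = np.cbrt(rgb[..., 0] * rgb[..., 1] * rgb[..., 2])
            return np.log(rgb / gm[..., None])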

  10. Spirit Beside 'Home Plate,' Sol 1809 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11803
    [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11803

    NASA's Mars Exploration Rover Spirit used its navigation camera to take the images assembled into this stereo, 120-degree view southward after a short drive during the 1,809th Martian day, or sol, of Spirit's mission on the surface of Mars (February 3, 2009). By combining images from the left-eye and right-eye sides of the navigation camera, the view appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Spirit had driven about 2.6 meters (8.5 feet) that sol, continuing a clockwise route around a low plateau called 'Home Plate.' In this image, the rocks visible above the rover's solar panels are on the slope at the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  11. New Record Five-Wheel Drive, Spirit's Sol 1856 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11962
    [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11962

    NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,856th Martian day, or sol, of Spirit's surface mission (March 23, 2009). The center of the view is toward the west-southwest. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 25.82 meters (84.7 feet) west-northwestward earlier on Sol 1856. This is the longest drive on Mars so far by a rover using only five wheels. Spirit lost the use of its right-front wheel in March 2006. Before Sol 1856, the farthest Spirit had covered in a single sol's five-wheel drive was 24.83 meters (81.5 feet), on Sol 1363 (Nov. 3, 2007). The Sol 1856 drive made progress on a route planned for taking Spirit around the western side of the low plateau called 'Home Plate.' A portion of the northwestern edge of Home Plate is prominent in the left quarter of this image, toward the south. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  12. Approximations to camera sensor noise

    Science.gov (United States)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. The question remains, however, of how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to real camera noise than SD-AWGN, and we suggest a further modification to the Poisson model that may improve it.
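
    The two candidate models are easy to compare empirically; the sketch below draws Poisson samples and an SD-AWGN surrogate matched in mean and variance, so any difference shows up only in higher-order statistics such as skewness (all numbers are illustrative, not the paper's data):

        import numpy as np

        rng = np.random.default_rng(0)
        signal = np.linspace(5.0, 200.0, 100_000)   # clean exposure, in electrons

        # Poisson model: photon shot noise, variance equal to the mean.
        poisson = rng.poisson(signal) - signal

        # SD-AWGN surrogate: Gaussian noise with the same signal-dependent variance.
        sd_awgn = rng.normal(0.0, np.sqrt(signal))

        # The first two moments agree by construction; the models part ways in
        # higher-order statistics such as skewness (Poisson is right-skewed).
        for name, r in [("Poisson", poisson), ("SD-AWGN", sd_awgn)]:
            skew = np.mean(r**3) / np.var(r) ** 1.5
            print(f"{name}: var={np.var(r):7.2f}  skew={skew:+.3f}")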

  13. SNAPSHOT SPECTRAL AND COLOR IMAGING USING A REGULAR DIGITAL CAMERA WITH A MONOCHROMATIC IMAGE SENSOR

    Directory of Open Access Journals (Sweden)

    J. Hauser

    2017-10-01

    Full Text Available Spectral imaging (SI) refers to the acquisition of the three-dimensional (3D) spectral cube of spatial and spectral data of a source object at a limited number of wavelengths in a given wavelength range. Snapshot spectral imaging (SSI) refers to the instantaneous acquisition (in a single shot) of the spectral cube, a process suitable for fast-changing objects. Known SSI devices exhibit large total track length (TTL), weight and production costs and relatively low optical throughput. We present a simple SSI camera based on a regular digital camera with (i) an added diffusing and dispersing phase-only static optical element at the entrance pupil (diffuser) and (ii) tailored compressed sensing (CS) methods for digital processing of the diffused and dispersed (DD) image recorded on the image sensor. The diffuser is designed to mix the spectral cube data spectrally and spatially and thus to enable convergence in its reconstruction by CS-based algorithms. In addition to performing SSI, this SSI camera is capable of performing color imaging using a monochromatic or grey-scale image sensor without color filter arrays.

  14. Approaches for Stereo Matching

    Directory of Open Access Journals (Sweden)

    Takouhi Ozanian

    1995-04-01

    Full Text Available This review focuses on the last decade's development of computational stereopsis for recovering three-dimensional information. The main components of stereo analysis are presented: image acquisition and camera modeling, feature selection, feature matching and disparity interpretation. A brief survey is given of the well-known feature selection approaches, and the estimation parameters for this selection are mentioned. The difficulties in identifying corresponding locations in the two images are explained. Methods for effectively constraining the search for the correct solution of the correspondence problem are discussed, as are strategies for the whole matching process. Reasons for the occurrence of matching errors are considered. Some recently proposed approaches, employing new ideas in the modeling of stereo matching in terms of energy minimization, are described. Acknowledging the importance of computation time for real-time applications, special attention is paid to parallelism as a way to achieve the required level of performance. The development of trinocular stereo analysis as an alternative to the conventional binocular one is described. Finally, a classification based on the test images used for verification of stereo matching algorithms is supplied.
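
    As a concrete instance of the matching stage the survey discusses, here is a minimal winner-take-all block matcher with a sum-of-absolute-differences cost (window size and disparity range are arbitrary choices, and SciPy is used for the window sum):

        import numpy as np
        from scipy.ndimage import convolve

        def sad_disparity(left, right, max_d=64, win=5):
            """Winner-take-all block matching with a sum-of-absolute-differences
            (SAD) cost over a win x win window, for a rectified grayscale pair."""
            h, w = left.shape
            L, R = left.astype(np.float32), right.astype(np.float32)
            kernel = np.ones((win, win), np.float32)
            cost = np.full((max_d, h, w), np.inf, np.float32)
            for d in range(max_d):
                diff = np.abs(L[:, d:] - R[:, :w - d])   # shift-and-compare
                cost[d, :, d:] = convolve(diff, kernel)  # window aggregation
            return cost.argmin(axis=0)                   # per-pixel winner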

  15. BIFOCAL STEREO FOR MULTIPATH PERSON RE-IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    G. Blott

    2017-11-01

    Full Text Available This work presents an approach to the task of person re-identification that exploits bifocal stereo cameras. Current monocular person re-identification approaches show a decreasing working distance when the image resolution is increased to obtain higher re-identification performance. We propose a novel 3D multipath bifocal approach, containing a rectilinear lens with a longer focal length for long-range distances and a fisheye lens with a shorter focal length for the near range. The person re-identification performance is at least on par with 2D approaches, but the working distance is increased, and on average 10% higher re-identification performance can be achieved in the overlapping field of view compared to a single camera. In addition, 3D information from the overlapping field of view is exploited to resolve potential 2D ambiguities.

  16. SU-F-J-140: Using Handheld Stereo Depth Cameras to Extend Medical Imaging for Radiation Therapy Planning

    Energy Technology Data Exchange (ETDEWEB)

    Jenkins, C; Xing, L; Yu, S [Stanford University, Stanford, CA (United States)

    2016-06-15

    Purpose: A correct body contour is essential for the accuracy of dose calculation in radiation therapy. While modern medical imaging technologies provide highly accurate representations of body contours, there are times when a patient's anatomy cannot be fully captured or there is a lack of easy access to CT/MRI scanning. Recently, handheld cameras have emerged that are capable of performing three-dimensional (3D) scans of patient surface anatomy. By combining 3D camera and medical imaging data, the patient's surface contour can be fully captured. Methods: A proof-of-concept system matches a patient surface model, created using a handheld stereo depth camera (DC), to the available areas of a body contour segmented from a CT scan. The matched surface contour is then converted to a DICOM structure and added to the CT dataset to provide additional contour information. In order to evaluate the system, a 3D model of a patient was created by segmenting the body contour with a treatment planning system (TPS) and fabricated with a 3D printer. A DC and associated software were used to create a 3D scan of the printed phantom. The surface created by the camera was then registered to a CT model that had been cropped to simulate missing scan data. The aligned surface was then imported into the TPS and compared with the originally segmented contour. Results: The RMS error for the alignment between the camera and cropped CT models was 2.26 mm. The mean distance between the aligned camera surface and the ground-truth model was −1.23 ± 2.47 mm. Maximum deviations were < 1 cm and occurred in areas of high concavity or where anatomy was close to the couch. Conclusion: The proof-of-concept study shows an accurate, easy and affordable method to extend medical imaging for radiation therapy planning using 3D cameras without additional radiation. Intel provided the camera hardware used in this study.

  17. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.

    Science.gov (United States)

    Ci, Wenyan; Huang, Yingping

    2016-10-17

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg-Marquardt method. One of the key points of visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching then removes the outliers caused by mismatches of the KLT algorithm. A spatial position constraint is imposed to filter out moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
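
    The KLT-plus-RANSAC front end of such a pipeline can be sketched with OpenCV; the intrinsics and frame filenames below are placeholders, and the circle-matching and stereo-depth stages the paper adds (which fix the translation scale) are omitted:

        import cv2
        import numpy as np

        K = np.array([[700., 0., 640.],            # assumed intrinsics
                      [0., 700., 360.],
                      [0., 0., 1.]])
        prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholders
        curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

        # KLT stage: detect corners and track them into the next frame.
        p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                     qualityLevel=0.01, minDistance=7)
        p1, st, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
        p0, p1 = p0[st == 1], p1[st == 1]

        # RANSAC stage: the essential matrix rejects mismatches and moving points;
        # recoverPose yields R and a unit-norm t (stereo depth fixes the scale).
        E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)
        mask = inliers.ravel() == 1
        _, R, t, _ = cv2.recoverPose(E, p0[mask], p1[mask], K)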

  18. Single-photon sensitive fast ebCMOS camera system for multiple-target tracking of single fluorophores: application to nano-biophotonics

    Science.gov (United States)

    Cajgfinger, Thomas; Chabanat, Eric; Dominjon, Agnes; Doan, Quang T.; Guerin, Cyrille; Houles, Julien; Barbier, Remi

    2011-03-01

    Nano-biophotonics applications will benefit from new fluorescence microscopy methods based essentially on super-resolution techniques (beyond the diffraction limit) on large biological structures (membranes) at fast frame rates (1000 Hz). This trend tends to push photon detectors to the single-photon counting regime and camera acquisition systems to real-time dynamic multiple-target tracing. The LUSIPHER prototype presented in this paper aims to offer a different approach from that of Electron-Multiplied CCD (EMCCD) technology and to answer the stringent demands of the new nano-biophotonics imaging techniques. The electron-bombarded CMOS (ebCMOS) device has the potential to respond to this challenge, thanks to the linear gain of the accelerating high voltage of the photo-cathode, the possible ultra-fast frame rate of CMOS sensors and the single-photon sensitivity. We produced a camera system based on a 640-kpixel ebCMOS sensor with its acquisition system. The proof of concept of single-photon-based tracking of multiple single emitters is the main result of this paper.

  19. Indoor calibration for stereoscopic camera STC: a new method

    Science.gov (United States)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2017-11-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of the 3D reconstruction of planetary surfaces from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of Mercury's surface. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo pairs: for this, a stereo validation setup providing an indoor reproduction of the instrument's flight observing conditions gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a unique sensor. Its optical model is based on a brand-new concept to minimize mass and volume and to allow push-frame imaging. This model required the definition of a new calibration pipeline to test the reconstruction method in a controlled environment. An ad-hoc indoor setup has been realized for validating the instrument designed to operate in deep space, i.e. in flight STC will have to deal with sources/targets essentially placed at infinity. This auxiliary indoor setup permits, on one side, rescaling the stereo reconstruction problem from the in-flight operating distance of 400 km to almost 1 meter in the lab; on the other side, it allows replicating different viewing angles for the considered targets. Neglecting, for the sake of simplicity, the curvature of Mercury, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir

  20. Stereo reconstruction from multiperspective panoramas.

    Science.gov (United States)

    Li, Yin; Shum, Heung-Yeung; Tang, Chi-Keung; Szeliski, Richard

    2004-01-01

    A new approach to computing a panoramic (360 degrees) depth map is presented in this paper. Our approach uses a large collection of images taken by a camera whose motion has been constrained to planar concentric circles. We resample regular perspective images to produce a set of multiperspective panoramas and then compute depth maps directly from these resampled panoramas. Our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. The use of multiperspective panoramas eliminates the limited overlap present in the original input images, and thus the problems of conventional multibaseline stereo can be avoided. Our approach differs from stereo matching of single-perspective panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to a first-order approximation, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas with little modification. In this paper, we describe two reconstruction algorithms. The first is a cylinder sweep algorithm that uses a small number of resampled multiperspective panoramas to obtain dense 3D reconstruction. The second algorithm, in contrast, uses a large number of multiperspective panoramas and takes advantage of the approximate horizontal epipolar geometry inherent in multiperspective panoramas. It comprises a novel and efficient 1D multibaseline matching technique, followed by tensor voting to extract the depth surface. Experiments show that our algorithms are capable of producing comparably high-quality depth maps which can be used for applications such as view interpolation.

  1. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as sensors co-registered to the nadir one; all camera images enter the AT process as individually pre-oriented data. This enables a better post-calibration, in order to detect variations in the individual camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on five Phase One cameras, where the nadir one has 80 MPix with a 50 mm lens, while the oblique ones capture 50 MPix images using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformation. The sensor head also hosts an IMU, which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms; these had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 exposures of all five cameras and registered GPS/IMU data. This specific mission was flown at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlap, but within the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfactory number for camera calibration. In a first

  2. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    Science.gov (United States)

    2010-03-01

    An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process. Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D

  3. Three-dimensional stereo by photometric ratios

    International Nuclear Information System (INIS)

    Wolff, L.B.; Angelopoulou, E.

    1994-01-01

    We present a methodology for corresponding a dense set of points on an object surface from photometric values for three-dimensional stereo computation of depth. The methodology utilizes multiple stereo pairs of images, with each stereo pair being taken of the identical scene but under different illumination. With just two stereo pairs of images taken under two different illumination conditions, a stereo pair of ratio images can be produced, one for the ratio of left-hand images and one for the ratio of right-hand images. We demonstrate how the photometric ratios composing these images can be used for accurate correspondence of object points. Object points having the same photometric ratio with respect to two different illumination conditions constitute a well-defined equivalence class of physical constraints defined by local surface orientation relative to illumination conditions. We formally show that for diffuse reflection the photometric ratio is invariant to varying camera characteristics, surface albedo, and viewpoint and that therefore the same photometric ratio in both images of a stereo pair implies the same equivalence class of physical constraints. The correspondence of photometric ratios along epipolar lines in a stereo pair of images under different illumination conditions is a correspondence of equivalent physical constraints, and the determination of depth from stereo can be performed. Whereas illumination planning is required, our photometric-based stereo methodology does not require knowledge of illumination conditions in the actual computation of three-dimensional depth and is applicable to perspective views. This technique extends the stereo determination of three-dimensional depth to smooth featureless surfaces without the use of precisely calibrated lighting. We demonstrate experimental depth maps from a dense set of points on smooth objects of known ground-truth shape, determined to within 1% depth accuracy
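
    The invariant itself is simple to compute; a minimal sketch, assuming two registered images of the same static scene under the two illumination conditions:

        import numpy as np

        def photometric_ratio(img_a, img_b, eps=1e-6):
            """Per-pixel ratio of two images of the same static scene taken under
            two different illuminations. For diffuse reflection the ratio cancels
            camera gain, albedo and viewpoint, leaving a value governed by local
            surface orientation relative to the two lights -- so equal ratios
            along an epipolar line mark candidate correspondences."""
            return img_a.astype(np.float64) / (img_b.astype(np.float64) + eps)

        # Matching then proceeds by comparing left/right ratio images:
        # ratio_left  = photometric_ratio(left_illum1, left_illum2)
        # ratio_right = photometric_ratio(right_illum1, right_illum2)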

  4. Inertial Sensor Self-Calibration in a Visually-Aided Navigation Approach for a Micro-AUV

    Directory of Open Access Journals (Sweden)

    Francisco Bonin-Font

    2015-01-01

    Full Text Available This paper presents a new solution for underwater observation, image recording, mapping and 3D reconstruction in shallow waters. The platform, designed as a research and testing tool, is based on a small underwater robot equipped with a MEMS-based IMU, two stereo cameras and a pressure sensor. The data given by the sensors are fused, adjusted and corrected in a multiplicative error state Kalman filter (MESKF, which returns a single vector with the pose and twist of the vehicle and the biases of the inertial sensors (the accelerometer and the gyroscope. The inclusion of these biases in the state vector permits their self-calibration and stabilization, improving the estimates of the robot orientation. Experiments in controlled underwater scenarios and in the sea have demonstrated a satisfactory performance and the capacity of the vehicle to operate in real environments and in real time.

  5. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

    Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when its parameters are known (i.e., principal distance, lens distortion, focal length, etc.). In this paper we deal with a single-camera calibration method and, with the help of this method, we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment Matlab.
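
    Once the intrinsics are known, the extrinsic pose follows from 2D-3D correspondences; a minimal sketch using OpenCV's solvePnP as a stand-in for the paper's Matlab implementation (all numeric values below are illustrative):

        import cv2
        import numpy as np

        K = np.array([[800., 0., 320.],
                      [0., 800., 240.],
                      [0., 0., 1.]])                        # illustrative intrinsics
        obj = np.array([[0, 0, 0], [0.1, 0, 0],
                        [0.1, 0.1, 0], [0, 0.1, 0]], np.float32)  # 10 cm square
        img = np.array([[320, 240], [400, 238],
                        [402, 318], [322, 320]], np.float32)      # observed corners

        # Extrinsics: rotation and translation of the camera w.r.t. the object.
        ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
        R, _ = cv2.Rodrigues(rvec)        # rotation matrix from the Rodrigues vector
        print(R, tvec.ravel())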

  6. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap

    Directory of Open Access Journals (Sweden)

    Khalil M. Ahmad Yousef

    2017-10-01

    Full Text Available Extrinsic calibration of camera and 2D laser range finder (lidar) sensors is crucial in sensor data fusion applications, for example, SLAM algorithms used in mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera-lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot-world hand-eye calibration (RWHE) problem, proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about the geometric structure in the calibration environment. The reliability and accuracy of the proposed approach is compared to a state-of-the-art method in extrinsic 2D lidar-to-camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively.

  7. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap.

    Science.gov (United States)

    Ahmad Yousef, Khalil M; Mohd, Bassam J; Al-Widyan, Khalid; Hayajneh, Thaier

    2017-10-14

    Extrinsic calibration of camera and 2D laser range finder (lidar) sensors is crucial in sensor data fusion applications, for example, SLAM algorithms used in mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera-lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot-world hand-eye calibration (RWHE) problem, proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about the geometric structure in the calibration environment. The reliability and accuracy of the proposed approach is compared to a state-of-the-art method in extrinsic 2D lidar-to-camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively.
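
    OpenCV (4.5 and later) ships a direct solver for the AX = ZB robot-world hand-eye formulation used here; the sketch below verifies it on synthetic poses generated to satisfy the model exactly (pose naming follows OpenCV's convention, not the paper's):

        import cv2
        import numpy as np

        rng = np.random.default_rng(1)

        def rand_pose(scale=0.5):
            """Random rotation (via Rodrigues) and translation."""
            R, _ = cv2.Rodrigues(rng.normal(size=3) * scale)
            return R, rng.normal(size=(3, 1))

        # Ground-truth unknowns of AX = ZB: X = base->world, Z = gripper->cam.
        Rx, tx = rand_pose()
        Rz, tz = rand_pose()

        # Synthesize consistent pose pairs A_i (world->cam) and B_i (base->gripper):
        # A_i = Z * B_i * X^-1 guarantees A_i X = Z B_i exactly.
        Ra, ta, Rb, tb = [], [], [], []
        for _ in range(10):
            Rbi, tbi = rand_pose()
            Rai = Rz @ Rbi @ Rx.T
            tai = Rz @ tbi + tz - Rai @ tx
            Ra.append(Rai); ta.append(tai); Rb.append(Rbi); tb.append(tbi)

        # OpenCV's robot-world hand-eye solver recovers X and Z from the pairs.
        Rx_e, tx_e, Rz_e, tz_e = cv2.calibrateRobotWorldHandEye(Ra, ta, Rb, tb)
        print("X error:", np.abs(Rx_e - Rx).max(), np.abs(tx_e - tx).max())
        print("Z error:", np.abs(Rz_e - Rz).max(), np.abs(tz_e - tz).max())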

  8. CMOS detectors: lessons learned during the STC stereo channel preflight calibration

    Science.gov (United States)

    Simioni, E.; De Sio, A.; Da Deppo, V.; Naletto, G.; Cremonese, G.

    2017-09-01

    The Stereo Camera (STC), mounted on board the BepiColombo spacecraft, will acquire the entire surface of Mercury in push-frame stereo mode. STC will provide the images for the global three-dimensional reconstruction of the surface of the innermost planet of the Solar System. The launch of BepiColombo is foreseen in 2018. STC has an innovative optical system configuration, which allows good optical performance with a factor-of-two reduction in mass and volume with respect to the classical stereo camera approach. In such a telescope, two different optical paths, inclined at ±20° with respect to the nadir direction, are merged together into a unique off-axis path and focused on a single detector. The focal plane is equipped with a 2k x 2k hybrid Si-PIN detector, based on CMOS technology, combining low read-out noise, high radiation hardness, compactness, lack of parasitic light, the capability of snapshot image acquisition, short exposure times (less than 1 ms) and a small pixel size (10 μm). During the preflight calibration campaign of STC, some spurious detector effects were noticed. Analyzing the images taken during the calibration phase, two different signals affecting the background level were measured. These signals can reduce the detector dynamic range to as little as a quarter, and they are not due to dark current, stray light or similar effects. In this work we describe all the features of these unwanted effects and the calibration procedures we developed to analyze them.

  9. Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor

    Science.gov (United States)

    Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system we used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo-signals and maintains high spatial frequency characteristics within the HDTV frequency band.

  10. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.

  11. Automatic multi-camera calibration for deployable positioning systems

    Science.gov (United States)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace manual calibration.
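
    The per-pair step of such a pipeline (estimating the essential matrix with the 5-point method and recovering relative pose) can be sketched with OpenCV. The point arrays and intrinsic matrix below are placeholders for real matched detections; this illustrates the general approach rather than the paper's implementation.

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Relative pose between an intrinsically calibrated camera pair.

    pts1, pts2: Nx2 float arrays of matched pixel coordinates (placeholders
    for real detections); K: shared 3x3 intrinsic matrix.
    """
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # recoverPose resolves the four-fold decomposition ambiguity of E
    # with a cheirality (points-in-front) check.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t   # t has unit norm; metric scale needs external information
```

    Note that t is recovered only up to scale, which is one reason evaluation in such systems is often reported as reprojection error.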

  12. A Novel Approach to Calibrating Multifunctional Binocular Stereovision Sensor

    International Nuclear Information System (INIS)

    Xue, T; Zhu, J G; Wu, B; Ye, S H

    2006-01-01

    We present a novel multifunctional binocular stereovision sensor for various three-dimensional (3D) inspection tasks. It not only avoids the so-called correspondence problem of passive stereo vision, but also possesses a uniform mathematical model. We also propose a novel approach to estimating all the sensor parameters with a freely positioned planar reference object. In this technique, the planar pattern can be moved freely by hand. All the camera intrinsic and extrinsic parameters, together with the coefficients of radial and tangential lens distortion, are estimated, and the sensor parameters are calibrated based on the 3D measurement model and optimized with a feature-point constraint algorithm using the same views as in the camera calibration stage. The proposed approach greatly reduces the cost of the calibration equipment, and it is flexible and practical for vision measurement. Experiments show that this method has high precision, with a measured relative error of spatial length better than 0.3%.

  13. LightDenseYOLO: A Fast and Accurate Marker Tracker for Autonomous UAV Landing by Visible Light Camera Sensor on Drone.

    Science.gov (United States)

    Nguyen, Phong Ha; Arsalan, Muhammad; Koo, Ja Hyung; Naqvi, Rizwan Ali; Truong, Noi Quang; Park, Kang Ryoung

    2018-05-24

    Autonomous landing of an unmanned aerial vehicle or a drone is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors such as global positioning system (GPS) receivers, inertial measurement units, and multiple camera systems. Although these approaches successfully estimate an unmanned aerial vehicle's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm based on a single visible-light camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network named lightDenseYOLO that extracts trained features from an input image to predict a marker's location from the drone's visible-light camera sensor. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without convolutional neural networks, in terms of both accuracy and processing time.

  14. Stereo vision with distance and gradient recognition

    Science.gov (United States)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition, and movement to a target spot. With sensors that use infrared rays and ultrasound, a robot can cope with urgent or dangerous situations, but stereo vision of three-dimensional space gives a robot far more powerful perception. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed for it to proceed without failure. This study developed an algorithm that recognizes the distance and gradient of the environment through a stereo matching process.
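
    As a concrete illustration of the distance step, a rectified stereo pair gives depth through the standard relation Z = f·B/d. The sketch below uses OpenCV's semi-global matcher with assumed calibration values; it illustrates the standard technique, not the paper's specific algorithm.

```python
import cv2
import numpy as np

def depth_and_gradient(left, right, f_px=700.0, baseline_m=0.12):
    """Depth map from a rectified grayscale stereo pair via Z = f*B/d.

    f_px (focal length in pixels) and baseline_m are illustrative
    placeholders for real calibration values.
    """
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                    blockSize=7)
    disp = matcher.compute(left, right).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan                 # mask invalid matches
    depth = f_px * baseline_m / disp         # meters
    slope = np.gradient(depth, axis=0)       # rough depth gradient along rows,
    return depth, slope                      # e.g. for slope/step detection
```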

  15. Secure Chaotic Map Based Block Cryptosystem with Application to Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Muhammad Khurram Khan

    2011-01-01

    Recently, Wang et al. presented an efficient logistic-map-based block encryption system. The encryption system employs ciphertext feedback to achieve plaintext dependence of the sub-keys. Unfortunately, we discovered that their scheme is unable to withstand a keystream attack. To improve its security, this paper proposes a novel chaotic-map-based block cryptosystem. At the same time, a secure architecture for a camera sensor network is constructed. The network comprises a set of inexpensive camera sensors to capture images, a sink node equipped with sufficient computation and storage capabilities, and a data processing server. Transmission security between the sink node and the server is obtained by utilizing the improved cipher. Both theoretical analysis and simulation results indicate that the improved algorithm can overcome the flaws and maintain all the merits of the original cryptosystem. In addition, the computational cost and efficiency of the proposed scheme are encouraging for practical implementation in real environments as well as in camera sensor networks.
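
    To illustrate the kind of chaotic keystream such ciphers build on, here is a toy logistic-map XOR cipher. This is deliberately the weak baseline construction that work like the above improves upon, not the paper's scheme, and the parameters are illustrative, not secure.

```python
def logistic_keystream(x0: float, r: float, n: int, burn_in: int = 100) -> bytes:
    """Byte keystream from the logistic map x <- r*x*(1-x)."""
    x, out = x0, bytearray()
    for i in range(burn_in + n):
        x = r * x * (1.0 - x)
        if i >= burn_in:
            out.append(int(x * 256) & 0xFF)   # quantize state to a byte
    return bytes(out)

def xor_cipher(data: bytes, x0: float = 0.3141592, r: float = 3.9999) -> bytes:
    """XOR with a logistic-map keystream; encryption equals decryption."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

ct = xor_cipher(b"camera frame bytes")
assert xor_cipher(ct) == b"camera frame bytes"
```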

  16. 3D Modelling of an Indoor Space Using a Rotating Stereo Frame Camera System

    Science.gov (United States)

    Kang, J.; Lee, I.

    2016-06-01

    Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and indoor spaces are often connected directly to public transportation such as subway and train stations. These phenomena allow outdoor activities to move into indoor spaces. Constant technological development has significantly raised people's awareness of services such as location-awareness services in indoor spaces. It is therefore necessary to develop a low-cost system that creates 3D models of indoor spaces for services based on such models. In this paper, we introduce a rotating stereo frame camera system with two cameras and use it to generate an indoor 3D model. First, we selected a test site and acquired images eight times during one day at different positions and heights of the system. Measurements were complemented by object control points obtained from a total station. Because the data were obtained from different positions and heights of the system, it was possible to make various combinations of data and choose several suitable combinations as input data. Next, we generated the 3D model of the test site using commercial software with the previously chosen input data. Finally, we evaluated the accuracy of the indoor models generated from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and generate 3D models from the images it acquires. Through these experiments, we confirm that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system can be applied to indoor services based on indoor spatial information.

  17. Automatic Quadcopter Control Avoiding Obstacle Using Camera with Integrated Ultrasonic Sensor

    Science.gov (United States)

    Anis, Hanafi; Haris Indra Fadhillah, Ahmad; Darma, Surya; Soekirno, Santoso

    2018-04-01

    Automatic navigation for drones is being actively developed, across a wide variety of drone types and automatic functions. The drone used in this study was an aircraft with four propellers, or quadcopter. In this experiment, image processing was used to recognize the position of an object, and an ultrasonic sensor was used to measure obstacle distance. The method used to trace an obstacle in image processing was the Lucas-Kanade-Tomasi tracker, which has been widely used due to its high accuracy. The ultrasonic sensor complements the image processing so that obstacles are reliably detected. The obstacle avoidance system observes the program's decisions under various obstacle conditions read by the camera and ultrasonic sensors, and visual-feedback PID controllers are used to control the drone's movement.

  18. Camera sensor arrangement for crop/weed detection accuracy in agronomic images.

    Science.gov (United States)

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-04-02

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images: (a) extrinsic parameters, related to the sensor's positioning on the tractor; and (b) intrinsic parameters, related to the sensor specifications, such as CCD resolution, focal length, or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination of outdoor environments is also an important factor affecting image accuracy. This paper focuses exclusively on two main issues, always with the goal of achieving the highest image accuracy in Precision Agriculture applications, making two main contributions: (a) a camera sensor arrangement to adjust the extrinsic parameters, and (b) the design of strategies for controlling adverse illumination effects.

  19. The MVACS Robotic Arm Camera

    Science.gov (United States)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  20. Building a Stereo-angle into strip-sensors for the ATLAS-Upgrade Inner-Tracker Endcaps

    CERN Document Server

    Hessey, NP; The ATLAS collaboration

    2013-01-01

    The Strips Endcap detector for the ATLAS Upgrade needs several sensor shapes, each of which is approximately a wedge shape like the current SCT. For the Endcap to use a stave-like approach as proposed for the barrel, care is needed to design the shapes to avoid clashes and minimise gaps between them. This note gives the basic formulae for one way of building up a petal. It allows for a stereo-angle to be built into the wafer, and takes into account the maximum usable wafer size.

  1. Opportunity's Surroundings on Sol 1818 (Stereo)

    Science.gov (United States)

    2009-01-01

    (Figures removed: left-eye and right-eye views of the color stereo pair for PIA11846.) NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,818th Martian day, or sol, of Opportunity's surface mission (March 5, 2009). South is at the center; north at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 80.3 meters (263 feet) southward earlier on that sol. Tracks from the drive recede northward in this view. The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.
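
    The red-blue presentation these rover views use can be reproduced from any registered left/right pair by taking the red channel from the left eye and green/blue from the right. A minimal OpenCV sketch; the file names are hypothetical.

```python
import cv2

left = cv2.imread("pia11846_left.png")     # hypothetical file names;
right = cv2.imread("pia11846_right.png")   # any registered pair works

anaglyph = right.copy()                    # keep green and blue from the right eye
anaglyph[:, :, 2] = left[:, :, 2]          # red channel (BGR index 2) from the left eye
cv2.imwrite("pia11846_anaglyph.png", anaglyph)
```

    Viewed through red-blue glasses with the red lens on the left, the composite appears three-dimensional, as the caption describes.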

  2. Development of a single-photon-counting camera with use of a triple-stacked micro-channel plate.

    Science.gov (United States)

    Yasuda, Naruomi; Suzuki, Hitoshi; Katafuchi, Tetsuro

    2016-01-01

    At the quantum-mechanical level, all substances (not merely electromagnetic waves such as light and X-rays) exhibit wave–particle duality. Whereas students of radiation science can easily understand the wave nature of electromagnetic waves, the particle (photon) nature may elude them. Therefore, to assist students in understanding the wave–particle duality of electromagnetic waves, we have developed a photon-counting camera that captures single photons in two-dimensional images. As an image intensifier, this camera has a triple-stacked micro-channel plate (MCP) with an amplification factor of 10^6. The ultra-low light of a single photon entering the camera is first converted to an electron through the photoelectric effect on the photocathode. The electron is intensified by the triple-stacked MCP and then converted to a visible light distribution, which is measured by a high-sensitivity complementary metal oxide semiconductor image sensor. Because it detects individual photons, the photon-counting camera is expected to provide students with a complete understanding of the particle nature of electromagnetic waves. Moreover, it measures ultra-weak light that cannot be detected by ordinary low-sensitivity cameras. Therefore, it is suitable for experimental research on scintillator luminescence, biophoton detection, and similar topics.

  3. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    Science.gov (United States)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  4. On-line measurement of ski-jumper trajectory: combining stereo vision and shape description

    Science.gov (United States)

    Nunner, T.; Sidla, O.; Paar, G.; Nauschnegg, B.

    2010-01-01

    Ski jumping has continuously attracted major public interest since the early 1970s, mainly in Europe and Japan. The sport undergoes high-level analysis and development based, among other things, on biodynamic measurements during the take-off and flight phase of the jumper. We report on a vision-based solution for such measurements that provides a full 3D trajectory of unique points on the jumper's shape. During the jump, synchronized stereo images are taken by a calibrated camera system at video rate. Using methods stemming from video surveillance, the jumper is detected and localized in the individual stereo images, and learning-based deformable shape analysis identifies the jumper's silhouette. The 3D reconstruction of the trajectory relies on standard stereo forward intersection of distinct shape points, such as the helmet top or heel. In the reported study, the measurements were verified by an independent GPS measurement mounted on top of the jumper's helmet, synchronized to the timing of camera exposures. Preliminary estimates report an accuracy of ±20 cm at a 30 Hz imaging frequency over a 40 m trajectory. The system is ready for fully automatic on-line application on ski-jumping sites that allow stereo camera views with an approximate base-distance ratio of 1:3 within the entire area of investigation.
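
    The standard stereo forward intersection step can be sketched with OpenCV's triangulation routine. The projection matrices and point tracks below are placeholders for the calibrated rig and detected shape points.

```python
import cv2
import numpy as np

def forward_intersection(P1, P2, x1, x2):
    """Triangulate matched image points from a calibrated stereo pair.

    P1, P2: 3x4 projection matrices of the two cameras; x1, x2: 2xN arrays
    of matched image points (e.g. helmet top or heel through the sequence).
    """
    Xh = cv2.triangulatePoints(P1, P2, x1, x2)   # 4xN homogeneous points
    return (Xh[:3] / Xh[3]).T                    # Nx3 world points, one per frame
```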

  5. Advanced scanning transmission stereo electron microscopy of structural and functional engineering materials

    International Nuclear Information System (INIS)

    Agudo Jácome, L.; Eggeler, G.; Dlouhý, A.

    2012-01-01

    Stereo transmission electron microscopy (TEM) provides a 3D impression of the microstructure in a thin TEM foil. It allows one to perform depth and TEM foil thickness measurements and to decide whether a microstructural feature lies inside the thin foil or on its surface, and it makes it possible to appreciate the true three-dimensional nature of dislocation configurations. In the present study we first review some basic elements of classical stereo TEM. We then show how the method can be extended by working in the scanning transmission electron microscopy (STEM) mode of a modern analytical 200 kV TEM equipped with a field emission gun (FEG TEM) and a high-angle annular dark field (HAADF) detector. We combine the two micrographs of a stereo pair into one anaglyph. When viewed with special colored glasses, the anaglyph provides a direct and realistic 3D impression of the microstructure. Three examples demonstrate the potential of this extended stereo TEM technique: a single-crystal Ni-base superalloy, a 9% chromium tempered martensite ferritic steel, and a NiTi shape memory alloy. We consider the effect of camera length, show how foil thicknesses can be measured, and discuss the depth of focus and surface effects. -- Highlights: ► The advanced STEM/HAADF diffraction contrast is extended to 3D stereo-imaging. ► The advantages of the new technique over stereo-imaging in CTEM are demonstrated. ► The new method allows foil thickness measurements in a broad range of conditions. ► We show that features associated with ion milling surface damage can be beneficial for appreciating 3D features of the microstructure.

  7. Simulation-Based Optimization of Camera Placement in the Context of Industrial Pose Estimation

    DEFF Research Database (Denmark)

    Jørgensen, Troels Bo; Iversen, Thorbjørn Mosekjær; Lindvig, Anders Prier

    2018-01-01

    In this paper, we optimize the placement of a camera in simulation in order to achieve a high success rate for a pose estimation problem. This is achieved by simulating 2D images from a stereo camera in a virtual scene. The stereo images are then used to generate 3D point clouds based on two diff...

  9. Time-of-flight camera via a single-pixel correlation image sensor

    Science.gov (United States)

    Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua

    2018-04-01

    A time-of-flight imager based on a single-pixel correlation image sensor is proposed for noise-free depth map acquisition in the presence of ambient light. A digital micro-mirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the 'four bucket principle' method are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce the noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement in reconstructions.
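
    The 'four bucket principle' invoked above is the classic continuous-wave ToF phase estimate from four correlation samples taken at 0°, 90°, 180°, and 270° modulation offsets. A hedged sketch follows; the compressed-sensing reconstruction is not reproduced here, and the modulation frequency is an assumed example.

```python
import numpy as np

C_LIGHT = 299_792_458.0   # speed of light, m/s

def four_bucket_depth(c0, c1, c2, c3, f_mod=20e6):
    """Depth from four correlation samples (0/90/180/270 degree offsets).

    f_mod is an assumed modulation frequency; the depth is unambiguous
    only within c / (2 * f_mod), about 7.5 m at 20 MHz.
    """
    phase = np.arctan2(c3 - c1, c0 - c2)            # wrapped phase estimate
    phase = np.mod(phase, 2.0 * np.pi)              # map to [0, 2*pi)
    return C_LIGHT * phase / (4.0 * np.pi * f_mod)  # halve the round trip
```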

  10. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    Directory of Open Access Journals (Sweden)

    Chien-Lun Hou

    2011-02-01

    In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector, a circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search for the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial and sinusoidal trajectories of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
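
    After epipolar rectification, the correspondence search described above reduces to a one-dimensional SAD scan along the same image row. A minimal sketch; the patch size and disparity range are assumptions.

```python
import numpy as np

def sad_match(left, right, row, col, patch=7, max_disp=64):
    """Best-matching column in `right` for a patch at (row, col) in `left`,
    searching leftward along the rectified row by minimum SAD."""
    h = patch // 2
    tmpl = left[row-h:row+h+1, col-h:col+h+1].astype(np.float32)
    best_col, best_sad = col, np.inf
    for d in range(max_disp):
        c = col - d
        if c - h < 0:                 # ran off the image border
            break
        cand = right[row-h:row+h+1, c-h:c+h+1].astype(np.float32)
        sad = float(np.abs(tmpl - cand).sum())
        if sad < best_sad:
            best_sad, best_col = sad, c
    return best_col, col - best_col   # matched column and disparity
```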

  11. Camera-marker and inertial sensor fusion for improved motion tracking

    NARCIS (Netherlands)

    Roetenberg, D.; Veltink, P.H.

    2005-01-01

    A method for combining a camera-marker based motion analysis system with miniature inertial sensors is proposed. It is used to fill gaps of optical data and can increase the data rate of the optical system.

  12. Operational modal analysis on a VAWT in a large wind tunnel using stereo vision technique

    DEFF Research Database (Denmark)

    Najafi, Nadia; Schmidt Paulsen, Uwe

    2017-01-01

    This paper concerns the development and use of a research-based stereo vision system for vibration and operational modal analysis on a parked, 1-kW, 3-bladed vertical-axis wind turbine (VAWT), tested in a wind tunnel at high wind speed. Vibrations were explored experimentally by tracking small deflections of the markers on the structure with two cameras, and also numerically, to study structural vibrations, with the overall objective of investigating challenges and proving the capability of using stereo vision. Two high-speed cameras provided displacement measurements with no wind-speed interference. The displacement

  13. A detailed comparison of single-camera light-field PIV and tomographic PIV

    Science.gov (United States)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, extensively examining the difference between the two techniques by varying key parameters such as the pixel-to-microlens ratio (PMR), the light-field camera to Tomo-camera pixel ratio (LTPR), the particle seeding density, and the number of tomographic cameras. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires an overall greater number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.

  14. Fusing inertial sensor data in an extended Kalman filter for 3D camera tracking.

    Science.gov (United States)

    Erdem, Arif Tanju; Ercan, Ali Özer

    2015-02-01

    In a setup where camera measurements are used to estimate 3D egomotion in an extended Kalman filter (EKF) framework, it is well known that inertial sensors (i.e., accelerometers and gyroscopes) are especially useful when the camera undergoes fast motion. Inertial sensor data can be fused at the EKF with the camera measurements either in the correction stage (as measurement inputs) or in the prediction stage (as control inputs). In the literature, generally only one type of inertial sensor is employed in the EKF, or when both are employed they are fused in the same stage. In this paper, we provide an extensive performance comparison of every possible combination of fusing accelerometer and gyroscope data as control or measurement inputs, using the same data set collected at different motion speeds. In particular, we compare the performance of the different approaches based on 3D pose errors, in addition to the camera reprojection errors commonly reported in the literature, which provides further insight into the strengths and weaknesses of each approach. We show, using both simulated and real data, that it is always better to fuse both sensors in the measurement stage, and that in particular the accelerometer helps more with 3D position tracking accuracy, whereas the gyroscope helps more with 3D orientation tracking accuracy. We also propose a simulated data generation method, which is beneficial for the design and validation of tracking algorithms involving both camera and inertial measurement unit measurements in general.
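
    The two fusion options being compared can be made concrete with a linear Kalman skeleton standing in for the EKF: inertial data enters either through predict() as a control input or through update() as a measurement. All models below are illustrative placeholders, not the paper's camera-tracking state.

```python
import numpy as np

class FusionKF:
    """Linear Kalman skeleton illustrating the two fusion stages."""

    def __init__(self, x0, P0, F, B, H, Q, R):
        self.x, self.P = x0, P0
        self.F, self.B, self.H = F, B, H   # dynamics, control, measurement models
        self.Q, self.R = Q, R              # process / measurement noise covariances

    def predict(self, u):
        """Prediction stage: e.g. gyroscope rates fused as control input u."""
        self.x = self.F @ self.x + self.B @ u
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        """Correction stage: e.g. camera (and accelerometer) data fused as z."""
        y = z - self.H @ self.x                       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
```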

  15. ROV seafloor surveys combining 5-cm lateral resolution multibeam bathymetry with color stereo photographic imagery

    Science.gov (United States)

    Caress, D. W.; Hobson, B.; Thomas, H. J.; Henthorn, R.; Martin, E. J.; Bird, L.; Rock, S. M.; Risi, M.; Padial, J. A.

    2013-12-01

    The Monterey Bay Aquarium Research Institute is developing a low altitude, high-resolution seafloor mapping capability that combines multibeam sonar with stereo photographic imagery. The goal is to obtain spatially quantitative, repeatable renderings of the seafloor with fidelity at scales of 5 cm or better from altitudes of 2-3 m. The initial test surveys using this sensor system are being conducted from a remotely operated vehicle (ROV). Ultimately we intend to field this survey system from an autonomous underwater vehicle (AUV). This presentation focuses on the current sensor configuration, methods for data processing, and results from recent test surveys. Bathymetry data are collected using a 400-kHz Reson 7125 multibeam sonar. This configuration produces 512 beams across a 135° wide swath; each beam has a 0.5° acrosstrack by 1.0° alongtrack angular width. At a 2-m altitude, the nadir beams have a 1.7-cm acrosstrack and 3.5 cm alongtrack footprint. Dual Allied Vision Technology GX1920 2.8 Mpixel color cameras provide color stereo photography of the seafloor. The camera housings have been fitted with corrective optics achieving a 90° field of view through a dome port. Illumination is provided by dual 100J xenon strobes. Position, depth, and attitude data are provided by a Kearfott SeaDevil Inertial Navigation System (INS) integrated with a 300 kHz RDI Doppler velocity log (DVL). A separate Paroscientific pressure sensor is mounted adjacent to the INS. The INS Kalman filter is aided by the DVL velocity and pressure data, achieving navigational drift rates less than 0.05% of the distance traveled during surveys. The sensors are mounted onto a toolsled fitted below MBARI's ROV Doc Ricketts with the sonars, cameras and strobes all pointed vertically down. During surveys the ROV flies at a 2-m altitude at speeds of 0.1-0.2 m/s. During a four-day R/V Western Flyer cruise in June 2013, we successfully collected multibeam and camera survey data from a 2-m altitude
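
    The quoted nadir footprints follow directly from the beam geometry, footprint ≈ 2·h·tan(θ/2). A quick check of the stated numbers:

```python
import math

def beam_footprint(altitude_m, beamwidth_deg):
    """Nadir footprint of a sonar beam: 2 * h * tan(theta / 2)."""
    return 2.0 * altitude_m * math.tan(math.radians(beamwidth_deg) / 2.0)

print(f"{beam_footprint(2.0, 0.5) * 100:.1f} cm")  # ~1.7 cm acrosstrack (0.5 deg)
print(f"{beam_footprint(2.0, 1.0) * 100:.1f} cm")  # ~3.5 cm alongtrack (1.0 deg)
```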

  16. Dust Devil in Spirit's View Ahead on Sol 1854 (Stereo)

    Science.gov (United States)

    2009-01-01

    (Figures removed: left-eye and right-eye views of the color stereo pair for PIA11960.) NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,854th Martian day, or sol, of Spirit's surface mission (March 21, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 13.79 meters (45 feet) westward earlier on Sol 1854. West is at the center, where a dust devil is visible in the distance. North is on the right, where Husband Hill dominates the horizon; Spirit was on top of Husband Hill in September and October 2005. South is on the left, where lighter-toned rock lines the edge of the low plateau called 'Home Plate.' This view is presented as a cylindrical-perspective projection with geometric seam correction.

  17. Opportunity's Surroundings on Sol 1687 (Stereo)

    Science.gov (United States)

    2009-01-01

    (Figures removed: left-eye and right-eye views of the color stereo pair for PIA11739.) NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, 360-degree view of the rover's surroundings on the 1,687th Martian day, or sol, of its surface mission (Oct. 22, 2008). The view appears three-dimensional when viewed through red-blue glasses. Opportunity had driven 133 meters (436 feet) that sol, crossing sand ripples up to about 10 centimeters (4 inches) tall. The tracks visible in the foreground are in the east-northeast direction. Opportunity's position on Sol 1687 was about 300 meters southwest of Victoria Crater. The rover was beginning a long trek toward a much larger crater, Endeavour, about 12 kilometers (7 miles) to the southeast. This panorama combines right-eye and left-eye views presented as cylindrical-perspective projections with geometric seam correction.

  18. View Ahead After Spirit's Sol 1861 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    (Figures removed: left-eye and right-eye views of the color stereo pair for PIA11977.) NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this stereo, 210-degree view of the rover's surroundings during the 1,861st to 1,863rd Martian days, or sols, of Spirit's surface mission (March 28 to 30, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the scene is toward the south-southwest. East is on the left. West-northwest is on the right. The rover had driven 22.7 meters (74 feet) southwestward on Sol 1861 before beginning to take the frames in this view. The drive brought Spirit past the northwestern corner of Home Plate. In this view, the western edge of Home Plate is on the portion of the horizon farthest to the left. A mound in middle distance near the center of the view is called 'Tsiolkovsky' and is about 40 meters (about 130 feet) from the rover's position. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  19. Rapid matching of stereo vision based on fringe projection profilometry

    Science.gov (United States)

    Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei

    2016-09-01

    Stereo matching is the most important core component of stereo vision, and many problems in it remain to be solved. For smooth surfaces on which feature points are not easy to extract, this paper adds a projector to a stereo vision measurement system based on fringe projection techniques; because corresponding points extracted from the left and right camera images have the same phase, rapid stereo matching can be realized. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method can not only broaden the application fields of optical 3D measurement technology and enrich knowledge in the field, but also offers the potential for a commercialized measurement system in practical projects, which gives it important scientific and economic value.

  20. Multi-Sensor Mud Detection

    Science.gov (United States)

    Rankin, Arturo L.; Matthies, Larry H.

    2010-01-01

    Robust mud detection is a critical perception requirement for Unmanned Ground Vehicle (UGV) autonomous off-road navigation. A military UGV stuck in a mud body during a mission may have to be sacrificed or rescued, both of which are unattractive options. There are several characteristics of mud that may be detectable with appropriate UGV-mounted sensors. For example, mud only occurs on the ground surface, is cooler than surrounding dry soil during the daytime under nominal weather conditions, is generally darker than surrounding dry soil in visible imagery, and is highly polarized. However, none of these cues are definitive on their own. Dry soil also occurs on the ground surface; shadows, snow, ice, and water can also be cooler than surrounding dry soil; shadows are also darker than surrounding dry soil in visible imagery; and cars, water, and some vegetation are also highly polarized. Shadows, snow, ice, water, cars, and vegetation can all be disambiguated from mud by using a suite of sensors that span multiple bands in the electromagnetic spectrum. Because there are military operations in which it is imperative for UGVs to operate without emitting strong, detectable electromagnetic signals, passive sensors are desirable. JPL has developed a daytime mud detection capability using multiple passive imaging sensors. Cues for mud from the multiple passive imaging sensors are fused into a single mud detection image using a rule base, and the resultant mud detection is localized in a terrain map using range data generated from a stereo pair of color cameras.

  1. A Reaction-Diffusion-Based Coding Rate Control Mechanism for Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Naoki Wakamiya

    2010-08-01

    A wireless camera sensor network is useful for surveillance and monitoring thanks to its visual coverage and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rates. Through simulation and practical experiments, we verify the effectiveness of our proposal.
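
    The spatial coding-rate pattern can be illustrated with any discretized activator-inhibitor system; the Gray-Scott model below is a stand-in under assumed parameters, not the paper's exact reaction-diffusion equations, with the inhibitor field taken as a proxy for per-node coding rate.

```python
import numpy as np

def gray_scott_step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of the Gray-Scott reaction-diffusion model
    on a periodic grid of (virtual) camera nodes."""
    lap = lambda a: (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                     np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)
    uvv = u * v * v
    u = u + dt * (Du * lap(u) - uvv + F * (1.0 - u))
    v = v + dt * (Dv * lap(v) + uvv - (F + k) * v)
    return u, v

u, v = np.ones((64, 64)), np.zeros((64, 64))
v[28:36, 28:36] = 0.5          # local perturbation, e.g. a detected target
for _ in range(2000):
    u, v = gray_scott_step(u, v)
rate_pattern = v / v.max()     # normalized spatial coding-rate proxy
```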

  4. The Performance Evaluation of Multi-Image 3D Reconstruction Software with Different Sensors

    Science.gov (United States)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in photogrammetry and computer vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they constitute an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can easily be used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open source tools, and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors, and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method for generating three-dimensional models. Much research has been carried out to identify suitable software and algorithms to achieve an accurate and complete model; however, little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is to deliberate on and introduce an appropriate combination of sensor and software to provide a complete model with the highest accuracy. To do this, different software packages, used in previous studies, were compared and

  5. Pancam Peek into 'Victoria Crater' (Stereo)

    Science.gov (United States)

    2006-01-01

    (Figures removed: left-eye and right-eye views of the stereo pair for PIA08776.) A drive of about 60 meters (about 200 feet) on the 943rd Martian day, or sol, of Opportunity's exploration of Mars' Meridiani Planum region (Sept. 18, 2006) brought the NASA rover to within about 50 meters (about 160 feet) of the rim of 'Victoria Crater.' This crater has been the mission's long-term destination for the past 21 Earth months. Opportunity reached a location from which the cameras on top of the rover's mast could begin to see into the interior of Victoria. This stereo anaglyph was made from frames taken on sol 943 by the panoramic camera (Pancam) to offer a three-dimensional view when seen through red-blue glasses. It shows the upper portion of interior crater walls facing toward Opportunity from up to about 850 meters (half a mile) away. The amount of vertical relief visible at the top of the interior walls from this angle is about 15 meters (about 50 feet). The exposures were taken through a Pancam filter selecting wavelengths centered on 750 nanometers. Victoria Crater is about five times wider than 'Endurance Crater,' which Opportunity spent six months examining in 2004, and about 40 times wider than 'Eagle Crater,' where Opportunity first landed. The great lure of Victoria is the expectation that a thick stack of geological layers will be exposed in the crater walls, potentially several times the thickness that was previously studied at Endurance and therefore, potentially preserving several times the historical record.

  6. Wind-Sculpted Vicinity After Opportunity's Sol 1797 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    (Figures removed: left-eye and right-eye views of the color stereo pair for PIA11820.) NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 111 meters (364 feet) on the 1,797th Martian day, or sol, of Opportunity's surface mission (Feb. 12, 2009). North is at the center; south at both ends. This view is the right-eye member of a stereo pair presented as a cylindrical-perspective projection with geometric seam correction. Tracks from the drive recede northward across dark-toned sand ripples in the Meridiani Planum region of Mars. Patches of lighter-toned bedrock are visible on the left and right sides of the image. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

  7. Opportunity's Surroundings After Sol 1820 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    (Figures removed: left-eye and right-eye views of the color stereo pair for PIA11841.) NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,820th to 1,822nd Martian days, or sols, of Opportunity's surface mission (March 7 to 9, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 20.6 meters toward the northwest on Sol 1820 before beginning to take the frames in this view. Tracks from that drive recede southwestward. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and small exposures of lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  8. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    Science.gov (United States)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm, and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm was evaluated for its hand-motion-tracking capability. The evaluation was assessed in terms of the position accuracy of the tracking trajectory in the x, y, and z directions in camera space, and the time difference between image acquisition and image display. Three parameters were analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
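
    The 'simple version of the mean-shift algorithm' amounts to repeatedly moving a window to the centroid of a per-pixel weight image (for a range camera, e.g. a depth gate around the hand). A self-contained sketch with assumed window parameters, not the study's exact implementation:

```python
import numpy as np

def mean_shift(weights, cx, cy, half=20, iters=10, eps=0.5):
    """Shift a (2*half+1)^2 window to the local centroid of `weights`.

    weights: 2D array scoring how hand-like each pixel is; (cx, cy) is
    the initial window centre. Returns the converged centre.
    """
    ys, xs = np.mgrid[0:weights.shape[0], 0:weights.shape[1]]
    for _ in range(iters):
        y0, x0 = max(int(cy) - half, 0), max(int(cx) - half, 0)
        y1, x1 = int(cy) + half + 1, int(cx) + half + 1
        w = weights[y0:y1, x0:x1]
        if w.sum() == 0:                 # no mass in the window
            break
        ny = (w * ys[y0:y1, x0:x1]).sum() / w.sum()
        nx = (w * xs[y0:y1, x0:x1]).sum() / w.sum()
        if abs(nx - cx) < eps and abs(ny - cy) < eps:
            break                        # converged to the local mode
        cx, cy = nx, ny
    return cx, cy
```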

  9. Surrounding Moving Obstacle Detection for Autonomous Driving Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Hao Sun

    2013-06-01

    Detecting and tracking surrounding moving obstacles such as vehicles and pedestrians is crucial for the safety of mobile robots and autonomous vehicles, especially in urban driving scenarios. This paper presents a novel framework for surrounding moving obstacle detection using binocular stereo vision. The contributions of our work are threefold. Firstly, a multiview feature matching scheme is presented for simultaneous stereo correspondence and motion correspondence searching. Secondly, the multiview geometry constraint derived from the relative camera positions in pairs of consecutive stereo views is exploited for surrounding moving obstacle detection. Thirdly, an adaptive particle filter is proposed for tracking multiple moving obstacles in surrounding areas. Experimental results from real-world driving sequences demonstrate the effectiveness and robustness of the proposed framework.

  10. Researches on hazard avoidance cameras calibration of Lunar Rover

    Science.gov (United States)

    Li, Chunyan; Wang, Li; Lu, Xin; Chen, Jihua; Fan, Shenghong

    2017-11-01

    China's lunar lander and rover will be launched in 2013 to carry out the mission objectives of lunar soft landing and patrol exploration. The lunar rover has a forward-facing stereo camera pair (Hazcams) for hazard avoidance, and Hazcam calibration is essential for stereo vision. The Hazcam optics are f-theta fish-eye lenses with a 120°×120° horizontal/vertical field of view (FOV) and a 170° diagonal FOV. They introduce significant distortion, and the acquired images are quite warped, so conventional camera calibration algorithms no longer work well. This paper investigates a photogrammetric calibration method for the geometric model of this type of fish-eye optics. In the method, the Hazcam model is represented by collinearity equations with interior orientation and exterior orientation parameters [1] [2]. For high-precision applications, the accurate calibration model is formulated with radially symmetric and decentering distortion, as well as parameters modeling affinity and shear, based on the fisheye deformation model [3] [4]. The proposed method has been applied to the stereo camera calibration system for the lunar rover.
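
    The f-theta model named above maps a ray at angle θ from the optical axis to image radius r = f·θ. With the radial distortion polynomial added, projection looks like the following sketch; the coefficient names and the truncation to θ³/θ⁵ terms are assumptions, and the paper's full model also carries decentering, affinity, and shear terms.

```python
import numpy as np

def ftheta_project(X, Y, Z, f, cx, cy, k3=0.0, k5=0.0):
    """Project a camera-frame point with an equidistant (f-theta) model.

    r = f * theta_d, where theta is the angle from the optical axis and
    theta_d adds illustrative odd-order radial distortion terms.
    """
    theta = np.arctan2(np.hypot(X, Y), Z)        # angle off the optical axis
    theta_d = theta + k3 * theta**3 + k5 * theta**5
    phi = np.arctan2(Y, X)                       # azimuth around the axis
    r = f * theta_d                              # image radius from centre
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```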

  11. Visibility of children behind 2010-2013 model year passenger vehicles using glances, mirrors, and backup cameras and parking sensors.

    Science.gov (United States)

    Kidd, David G; Brethwaite, Andrew

    2014-05-01

    This study identified the areas behind vehicles where younger and older children are not visible and measured the extent to which vehicle technologies improve visibility. Rear visibility of targets simulating the heights of a 12-15-month-old, a 30-36-month-old, and a 60-72-month-old child was assessed in 21 2010-2013 model year passenger vehicles with a backup camera or a backup camera plus parking sensor system. The average blind zone for a 12-15-month-old was twice as large as it was for a 60-72-month-old. Large SUVs had the worst rear visibility and small cars had the best. Increases in rear visibility provided by backup cameras were larger than the non-visible areas detected by parking sensors, but parking sensors detected objects in areas near the rear of the vehicle that were not visible in the camera or other fields of view. Overall, backup cameras and backup cameras plus parking sensors reduced the blind zone by around 90 percent on average and have the potential to prevent backover crashes if drivers use the technology appropriately.

  12. Stereo-particle image velocimetry uncertainty quantification

    International Nuclear Information System (INIS)

    Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources, and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation, and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from the 2014 PIV challenge. A thorough sensitivity analysis was performed to assess the relative impact of the various parameters on the overall uncertainty. The results suggest that in the absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall, the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data, and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment of the subject and potentially lays foundations applicable to volumetric PIV as well.
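
    The propagation step can be illustrated generically: for a reconstruction function with uncorrelated input uncertainties, a first-order Taylor expansion gives the output uncertainty from the numerical Jacobian. The out-of-plane formula and the numbers below are simplified placeholders, not the paper's full framework.

    ```python
    import numpy as np

    def propagate_uncertainty(f, x, u_x, eps=1e-6):
        """First-order propagation: u_y^2 = sum_i (df/dx_i)^2 * u_xi^2."""
        x, u_x = np.asarray(x, float), np.asarray(u_x, float)
        y0 = f(x)
        J = np.empty_like(x)
        for i in range(len(x)):        # numerical Jacobian, one input at a time
            xp = x.copy()
            xp[i] += eps
            J[i] = (f(xp) - y0) / eps
        return np.sqrt(np.sum((J * u_x) ** 2))

    # Simplified out-of-plane component w = (u1 - u2) / (tan(a1) - tan(a2)).
    w = lambda p: (p[0] - p[1]) / (np.tan(p[2]) - np.tan(p[3]))
    u_w = propagate_uncertainty(w, [1.2, -0.8, np.radians(35), np.radians(-35)],
                                [0.05, 0.05, 0.002, 0.002])
    ```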

  13. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks

    Science.gov (United States)

    Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue

    2017-01-01

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed as long as the regular hexagons defined by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce redundancy in random deployment, we come up with a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions. PMID:28587304
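
    A minimal sketch of the deterministic placement idea: generating the centers of a seamless tiling of regular hexagons over a rectangular ROI, into which the theoretically optimal grid length from the paper would be substituted. The flat-top orientation and names are assumptions.

    ```python
    import numpy as np

    def hex_grid_positions(width, height, side):
        """Centers of a seamless flat-top hexagon tiling over a width x height ROI."""
        positions = []
        dx = 1.5 * side              # horizontal spacing between hexagon centers
        dy = np.sqrt(3.0) * side     # vertical spacing within a column
        col, x = 0, 0.0
        while x <= width:
            y = 0.0 if col % 2 == 0 else dy / 2.0   # stagger odd columns
            while y <= height:
                positions.append((x, y))
                y += dy
            x += dx
            col += 1
        return positions
    ```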

  14. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Peng-Fei Wu

    2017-06-01

    Full Text Available Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed as long as the regular hexagons defined by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce redundancy in random deployment, we come up with a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions.

  15. Stereo-based Collision Avoidance System for Urban Traffic

    Science.gov (United States)

    Moriya, Takashi; Ishikawa, Naoto; Sasaki, Kazuyuki; Nakajima, Masato

    2002-11-01

    Numerous car accidents occur on urban roads. However, research on driving assistance has so far targeted highways, whose environment is relatively simple and easy to handle, and a new approach for urban settings is required. Our purpose is to extend support to the following conditions in city traffic: the presence of obstacles such as pedestrians and telephone poles; lane marks that are not always drawn on the road; and drivers who may lack awareness of the lane marks. We propose a collision avoidance system which can be applied to both highway and urban traffic environments. In our system, stereo cameras are set in front of a vehicle and the captured images are processed by a computer. We create a Projected Disparity Map (PDM) from the stereo image pair, which is a disparity histogram taken along the ordinate direction of the obtained disparity image. When there is an obstacle in front, we can detect it by finding the peak that appears in the PDM. With a speedometer and a steering sensor, the stopping distance and the radius of curvature of the vehicle are calculated in order to set the observation-required area, which does not depend on lane marks, within the PDM. A danger level is computed from the distance and the relative speed of the closest approaching object detected within the observation-required area. The method has been tested in urban traffic scenes and shown to be effective for judging dangerous situations and giving a proper alarm to the driver.
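
    A minimal sketch of building a PDM from a dense disparity image, assuming integer-valued disparities in pixels with zero marking invalid pixels; an obstacle at roughly constant depth then shows up as a strong horizontal run of counts at its disparity.

    ```python
    import numpy as np

    def projected_disparity_map(disparity, d_max=128):
        """Per-column histogram of disparities, accumulated along the ordinate."""
        h, w = disparity.shape
        pdm = np.zeros((d_max, w), dtype=np.int32)
        for col in range(w):
            d = disparity[:, col]
            d = d[(d > 0) & (d < d_max)].astype(int)   # drop invalid pixels
            np.add.at(pdm[:, col], d, 1)               # vote along the column
        return pdm

    # Columns whose counts exceed a threshold at some disparity row indicate
    # an obstacle; larger disparity means a closer object.
    ```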

  16. 3D panorama stereo visual perception centering on the observers

    International Nuclear Information System (INIS)

    Tang, YiPing; Zhou, Jingkai; Xu, Haitao; Xiang, Yun

    2015-01-01

    For existing three-dimensional (3D) laser scanners, acquiring geometry and color information of the objects simultaneously is difficult. Moreover, current techniques cannot store, modify, and model the point clouds efficiently. In this work, we have developed a novel sensor system, called the active stereo omni-directional vision sensor (ASODVS), to address those problems. The ASODVS is an integrated system composed of a single-view omni-directional vision sensor and a mobile planar green laser generator platform. Driven by a stepper motor, the laser platform can move vertically along the axis of the ASODVS. During the laser scan, panoramic images of the environment are captured, and the characteristics and spatial locations of the laser points are calculated accordingly. Based on the image information of the laser points, the 3D space can be reconstructed. Experimental results demonstrate that the proposed ASODVS system can measure and reconstruct 3D space in real time and with high quality. (paper)

  17. An overview of the stereo correlation and triangulation formulations used in DICe.

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This document provides a detailed overview of the stereo correlation algorithm and triangulation formulation used in the Digital Image Correlation Engine (DICe) to triangulate three-dimensional motion in space given the image coordinates and camera calibration parameters.
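
    The report details DICe's exact formulation; for orientation, the standard linear (DLT) triangulation that such engines build on can be sketched as follows, given the two 3x4 projection matrices from the stereo calibration and one matched pixel pair. This is a generic sketch, not necessarily DICe's implementation.

    ```python
    import numpy as np

    def triangulate_dlt(P1, P2, x1, x2):
        """Linear triangulation of one 3D point from two calibrated views."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],     # each image measurement contributes
            x1[1] * P1[2] - P1[1],     # two linear constraints on X
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)    # null vector of A is the homogeneous X
        X = Vt[-1]
        return X[:3] / X[3]            # homogeneous -> Euclidean coordinates
    ```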

  18. CameraCast: flexible access to remote video sensors

    Science.gov (United States)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.

  19. Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    Directory of Open Access Journals (Sweden)

    Yunsu Bok

    2014-11-01

    Full Text Available This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  20. Contributed Review: Camera-limits for wide-field magnetic resonance imaging with a nitrogen-vacancy spin sensor

    Science.gov (United States)

    Wojciechowski, Adam M.; Karadas, Mürsel; Huck, Alexander; Osterkamp, Christian; Jankuhn, Steffen; Meijer, Jan; Jelezko, Fedor; Andersen, Ulrik L.

    2018-03-01

    Sensitive, real-time optical magnetometry with nitrogen-vacancy centers in diamond relies on accurate imaging of small (≪10⁻²) fractional fluorescence changes across the diamond sample. We discuss the limitations on magnetic field sensitivity resulting from the limited number of photoelectrons that a camera can record in a given time. Several types of camera sensors are analyzed, and the smallest measurable magnetic field change is estimated for each type. We show that most common sensors are of limited use in such applications, while certain highly specific cameras allow achieving nanotesla-level sensitivity in 1 s of combined exposure. Finally, we demonstrate the results obtained with a lock-in camera that paves the way for real-time, wide-field magnetometry at the nanotesla level and with micrometer resolution.
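
    The photoelectron argument can be made concrete with a back-of-the-envelope estimate: the smallest resolvable fractional fluorescence change is roughly 1/sqrt(N) for N recorded photoelectrons, which converts to a field via the resonance slope (contrast over linewidth) and the NV gyromagnetic ratio of about 28 GHz/T. The sketch below is an order-of-magnitude model, not the paper's full analysis.

    ```python
    import numpy as np

    def min_detectable_field(n_photoelectrons, contrast, linewidth_hz,
                             gyromagnetic_hz_per_t=28.0e9):
        """Shot-noise-limited minimum detectable field change per exposure."""
        frac_noise = 1.0 / np.sqrt(n_photoelectrons)      # shot-noise floor
        df = frac_noise * linewidth_hz / contrast         # resolvable shift, Hz
        return df / gyromagnetic_hz_per_t                 # tesla

    # e.g. 1e9 recorded photoelectrons, 2% contrast, 1 MHz linewidth
    # gives a floor of a few tens of nanotesla.
    print(min_detectable_field(1e9, 0.02, 1e6))
    ```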

  1. METHODS OF STEREO PAIR IMAGES FORMATION WITH A GIVEN PARALLAX VALUE

    Directory of Open Access Journals (Sweden)

    Viktoriya G. Chafonova

    2014-11-01

    Full Text Available Two new complementary methods of stereo pair image formation are proposed. The first method is based on finding the maximum correlation between the gradient images of the left and right frames. The second finds the shift between corresponding key points of the stereo pair images, located by a point-feature detector. These methods make it possible to set desired values of vertical and horizontal parallax for a selected object in the image. Applying them, the parallax values of objects in the final stereo pair can be measured in pixels and/or as a percentage of the total image size, which makes it possible to predict excessive parallax values when the stereo pair is printed or projected. The proposed methods are easily automated after selection of the object for which a predetermined horizontal parallax is to be set. Stereo pair image superposition using the key points takes less than one second. The correlation method requires a little more computing time, but makes it possible to control and superpose an undivided anaglyph image. The proposed methods of stereo pair formation can find application in programs for editing and processing stereo pair images, in monitoring devices for shooting cameras, and in devices for video sequence quality assessment.
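
    A minimal sketch of the key-point variant, assuming 8-bit grayscale frames and OpenCV: ORB matches seeded inside a user-selected box estimate the object's current horizontal shift, and the right frame is then translated so the object reaches the requested parallax. The median-shift heuristic and function names are illustrative.

    ```python
    import cv2
    import numpy as np

    def set_horizontal_parallax(left, right, roi, target_px):
        """Shift the right frame so the object in `roi` gets `target_px` parallax."""
        x, y, w, h = roi                       # object box in the left frame
        orb = cv2.ORB_create()
        k1, d1 = orb.detectAndCompute(left[y:y + h, x:x + w], None)
        k2, d2 = orb.detectAndCompute(right, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        # Median horizontal shift of the object between the two frames.
        shifts = [k2[m.trainIdx].pt[0] - (k1[m.queryIdx].pt[0] + x)
                  for m in matches]
        dx = target_px - np.median(shifts)
        M = np.float32([[1, 0, dx], [0, 1, 0]])   # pure horizontal translation
        return cv2.warpAffine(right, M, (right.shape[1], right.shape[0]))
    ```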

  2. Quantitative analysis of digital outcrop data obtained from stereo-imagery using an emulator for the PanCam camera system for the ExoMars 2020 rover

    Science.gov (United States)

    Barnes, Robert; Gupta, Sanjeev; Gunn, Matt; Paar, Gerhard; Balme, Matt; Huber, Ben; Bauer, Arnold; Furya, Komyo; Caballo-Perucha, Maria del Pilar; Traxler, Chris; Hesina, Gerd; Ortner, Thomas; Banham, Steven; Harris, Jennifer; Muller, Jan-Peter; Tao, Yu

    2017-04-01

    A key focus of planetary rover missions is to use panoramic camera systems to image outcrops along rover traverses, in order to characterise their geology in search of ancient life. These data can be processed to create 3D point clouds of rock outcrops for quantitative analysis. The Mars Utah Rover Field Investigation (MURFI 2016) is a Mars rover field analogue mission run by the UK Space Agency (UKSA) in collaboration with the Canadian Space Agency (CSA). It took place between 22nd October and 13th November 2016 and consisted of a science team based in Harwell, UK, and a field team including an instrumented rover platform at the field site near Hanksville (Utah, USA). The Aberystwyth University PanCam Emulator 3 (AUPE3) camera system was used to collect stereo panoramas of the terrain the rover encountered during the field trials. Stereo imagery processed in PRoViP is rendered as Ordered Point Clouds (OPCs) in PRo3D, enabling the user to zoom, rotate and translate the 3D outcrop model. Interpretations can be digitised directly onto the 3D surface, and simple measurements can be taken of the dimensions of the outcrop and sedimentary features, including grain size. Dip and strike of bedding planes, stratigraphic and sedimentological boundaries, and fractures are calculated within PRo3D from mapped bedding contacts and fracture traces. Rover-derived imagery is merged with UAV and orbital datasets to build semi-regional, multi-resolution 3D models of the area of operations for immersive analysis and contextual understanding. In-simulation, AUPE3 was mounted on the rover mast, collecting 16 stereo panoramas over 9 'sols'; five out-of-simulation datasets were collected in the Hanksville-Burpee Quarry. Stereo panoramas were processed using an automated pipeline with data transfer through an ftp server. PRo3D has been used for visualisation and analysis of this stereo data: features of interest in the area could be annotated, and their distances to the rover measured.
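
    The dip-and-strike computation from mapped contacts can be sketched as a plane fit: the normal of the least-squares plane through the digitised 3D points gives dip as its angle from vertical, and strike as the horizontal direction 90 degrees from the dip azimuth. A generic sketch, not necessarily PRo3D's implementation; east-north-up coordinates are assumed.

    ```python
    import numpy as np

    def dip_and_strike(points):
        """Dip and strike (degrees) of the best-fit plane through (N, 3) points."""
        centered = points - points.mean(axis=0)
        _, _, Vt = np.linalg.svd(centered)   # smallest singular vector = normal
        n = Vt[-1]
        if n[2] < 0:
            n = -n                           # orient the normal upward
        dip = np.degrees(np.arccos(n[2]))    # angle of the plane from horizontal
        # Strike: horizontal direction perpendicular to the dip azimuth.
        strike = (np.degrees(np.arctan2(n[0], n[1])) - 90.0) % 360.0
        return dip, strike
    ```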

  3. A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors

    Science.gov (United States)

    Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.

    2018-04-01

    The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. Calibration data is then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide number of applications, ranging from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.

  4. MonoSLAM: real-time single camera SLAM.

    Science.gov (United States)

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.

  5. A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig

    Science.gov (United States)

    Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming

    2018-06-01

    This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one is called the tracking camera, and the other the working camera. The tracking camera is used for tracking the positions of the MBVS, and the working camera is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the movability of the system guarantees appropriate baselines to supply more robust point correspondences. Additionally, using one camera avoids a drawback of multi-camera networks, namely that variability in the cameras' parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.

  6. Automated Ground-based Time-lapse Camera Monitoring of West Greenland ice sheet outlet Glaciers: Challenges and Solutions

    Science.gov (United States)

    Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.

    2008-12-01

    Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science communities for decades, and time series analysis of sensor data has provided important information on glacier flow variability by detecting speed and thickness changes, tracking features and acquiring model input. Thanks to advances in commercial digital camera technology and increased solid state storage, we activated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlet glaciers and collected one-hour-interval data continuously for more than one year at some, but not all, sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono-/stereo-photogrammetry can provide the theoretical and practical fundamentals for data processing along with digital image processing techniques. Time-lapse images over these periods in west Greenland show various phenomena. Problems include rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, foxes chewing instrument cables, and ravens pecking the plastic window. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that non-metric digital cameras contain large distortions that must be compensated for precise photogrammetric use. Further, a massive number of images needs to be processed in a way that is sufficiently computationally efficient. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability. We experiment with mono and stereo photogrammetric techniques with the aid of automatic correlation matching for efficiently handling the enormous volume of images.

  7. Time for a Change; Spirit's View on Sol 1843 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11973 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11973 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, full-circle view of the rover's surroundings during the 1,843rd Martian day, or sol, of Spirit's surface mission (March 10, 2009). South is in the middle. North is at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 36 centimeters downhill earlier on Sol 1854, but had not been able to get free of ruts in soft material that had become an obstacle to getting around the northeastern corner of the low plateau called 'Home Plate.' The Sol 1854 drive, following two others in the preceding four sols that also achieved little progress in the soft ground, prompted the rover team to switch to a plan of getting around Home Plate counterclockwise, instead of clockwise. The drive direction in subsequent sols was westward past the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  8. Digital fundus image grading with the non-mydriatic Visucam(PRO NM) versus the FF450(plus) camera in diabetic retinopathy.

    Science.gov (United States)

    Neubauer, Aljoscha S; Rothschuh, Antje; Ulbig, Michael W; Blum, Marcus

    2008-03-01

    Grading diabetic retinopathy in clinical trials is frequently based on 7-field stereo photography of the fundus in diagnostic mydriasis. In terms of image quality, the FF450(plus) camera (Carl Zeiss Meditec AG, Jena, Germany) defines a high-quality reference. The aim of the study was to investigate whether the fully digital fundus camera Visucam(PRO NM) could serve as an alternative in clinical trials requiring 7-field stereo photography. A total of 128 eyes of diabetes patients were enrolled in the randomized, controlled, prospective trial. Seven-field stereo photography was performed with the Visucam(PRO NM) and the FF450(plus) camera, in random order, both in diagnostic mydriasis. The resulting 256 image sets from the two camera systems were graded for retinopathy levels and image quality (on a scale of 1-5); both were anonymized and blinded to the image source. On FF450(plus) stereoscopic imaging, 20% of the patients had no or mild diabetic retinopathy on the ETDRS scale. Agreement between the two cameras was high regarding retinopathy levels (kappa 0.87) and macular oedema (kappa 0.80). In diagnostic mydriasis the image quality of the Visucam was graded as slightly better than that of the FF450(plus) (2.20 versus 2.41), making the Visucam(PRO NM) a suitable camera for applications and clinical trials requiring 7-field stereo photography.

  9. 4-mm-diameter three-dimensional imaging endoscope with steerable camera for minimally invasive surgery (3-D-MARVEL).

    Science.gov (United States)

    Bae, Sam Y; Korniski, Ronald J; Shearn, Michael; Manohara, Harish M; Shahinian, Hrayr

    2017-01-01

    High-resolution three-dimensional (3-D) imaging (stereo imaging) by endoscopes in minimally invasive surgery, especially in space-constrained applications such as brain surgery, is one of the most desired capabilities. Until now, such capability has existed only at overall diameters larger than 4 mm. We report the development of a stereo imaging endoscope of 4-mm maximum diameter, called the Multiangle, Rear-Viewing Endoscopic Tool (MARVEL), that uses a single-lens system with complementary multibandpass filter (CMBF) technology to achieve 3-D imaging. In addition, the system is endowed with the capability to pan from side to side over an angle of [Formula: see text], which is another unique aspect of MARVEL for this class of endoscopes. The design and construction of a single-lens CMBF aperture camera with integrated illumination to generate 3-D images, and of the actuation mechanism built into it, are summarized.

  10. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    Science.gov (United States)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery.
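
    For reference, the multi-image OpenCV baseline that the single-image tool is compared against looks roughly like the following checkerboard pipeline; the board size and image folder are hypothetical placeholders.

    ```python
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)                                   # inner corners per row/col
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for path in glob.glob("calib_images/*.png"):       # hypothetical folder
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:                                      # keep fully detected boards
            obj_points.append(objp)
            img_points.append(corners)

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("RMS reprojection error:", rms)              # intrinsics are in K, dist
    ```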

  11. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    Science.gov (United States)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery.

  12. Hyper thin 3D edge measurement of honeycomb core structures based on the triangular camera-projector layout & phase-based stereo matching.

    Science.gov (United States)

    Jiang, Hongzhi; Zhao, Huijie; Li, Xudong; Quan, Chenggen

    2016-03-07

    We propose a novel hyper thin 3D edge measurement technique to measure the profile of 3D outer envelope of honeycomb core structures. The width of the edges of the honeycomb core is less than 0.1 mm. We introduce a triangular layout design consisting of two cameras and one projector to measure hyper thin 3D edges and eliminate data interference from the walls. A phase-shifting algorithm and the multi-frequency heterodyne phase-unwrapping principle are applied for phase retrievals on edges. A new stereo matching method based on phase mapping and epipolar constraint is presented to solve correspondence searching on the edges and remove false matches resulting in 3D outliers. Experimental results demonstrate the effectiveness of the proposed method for measuring the 3D profile of honeycomb core structures.
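
    The phase-retrieval step can be illustrated with the common four-step algorithm: with fringe images I_k = A + B·cos(phi + k·pi/2), the wrapped phase is atan2(I3 − I1, I0 − I2) per pixel; the multi-frequency heterodyne step then removes the 2·pi ambiguities. The sketch below covers only the wrapped-phase step and is a generic formulation, not the paper's exact pipeline.

    ```python
    import numpy as np

    def four_step_phase(I0, I1, I2, I3):
        """Wrapped phase from four fringe images shifted by 90 degrees each."""
        # With I_k = A + B*cos(phi + k*pi/2):
        #   I3 - I1 = 2*B*sin(phi),  I0 - I2 = 2*B*cos(phi)
        return np.arctan2(I3.astype(float) - I1, I0.astype(float) - I2)
    ```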

  13. Streak camera imaging of single photons at telecom wavelength

    Science.gov (United States)

    Allgaier, Markus; Ansari, Vahid; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Donohue, John Matthew; Czerniuk, Thomas; Aßmann, Marc; Bayer, Manfred; Brecht, Benjamin; Silberhorn, Christine

    2018-01-01

    Streak cameras are powerful tools for temporal characterization of ultrafast light pulses, even at the single-photon level. However, the low signal-to-noise ratio in the infrared range prevents measurements on weak light sources in the telecom regime. We present an approach to circumvent this problem, utilizing an up-conversion process in periodically poled lithium niobate waveguides. We convert single photons from a parametric down-conversion source in order to reach the point of maximum detection efficiency of commercially available streak cameras. We explore phase-matching configurations to apply the up-conversion scheme in real-world applications.

  14. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    Science.gov (United States)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, have inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which would otherwise hinder the detection of fast moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40°×20° field-of-view. The whole system is very rugged and compact, a perfect solution for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source based on five laser driver cards, with three 808 nm lasers each. We present the full characterization of the camera system.
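
    The iTOF principle reduces to a few lines: four phase-stepped correlation samples give a wrapped phase, which maps to distance as d = c·phi/(4·pi·f_mod); at 25 MHz modulation the unambiguous range is c/(2·f_mod) ≈ 6 m. A minimal sketch, with the sample names as assumptions:

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def itof_depth(a0, a1, a2, a3, f_mod):
        """Depth from four correlation samples at 0/90/180/270 degrees."""
        phi = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)   # wrapped phase
        return C * phi / (4 * np.pi * f_mod)               # round-trip to range

    # Lower modulation frequencies extend the unambiguous range at the
    # cost of depth precision; 25 MHz gives roughly a 6 m span.
    ```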

  15. A zonal wavefront sensor with multiple detector planes

    Science.gov (United States)

    Pathak, Biswajit; Boruah, Bosanta R.

    2018-03-01

    A conventional zonal wavefront sensor estimates the wavefront from data captured in a single detector plane using a single camera. In this paper, we introduce a zonal wavefront sensor which comprises multiple detector planes instead of a single detector plane. The proposed sensor is based on an array of custom-designed plane diffraction gratings followed by a single focusing lens. The laser beam whose wavefront is to be estimated is incident on the grating array, and one of the diffracted orders from each grating is focused on the detector plane. The setup, by employing a beam splitter arrangement, facilitates focusing of the diffracted beams on multiple detector planes where multiple cameras can be placed. The use of multiple cameras in the sensor can offer several advantages in wavefront estimation. For instance, the proposed sensor can provide superior inherent centroid detection accuracy that cannot be achieved by the conventional system. It can also provide enhanced dynamic range and reduced crosstalk. We present results from a proof-of-principle experimental arrangement that demonstrate the advantages of the proposed wavefront sensing scheme.

  16. A design of a high speed dual spectrometer by single line scan camera

    Science.gov (United States)

    Palawong, Kunakorn; Meemon, Panomsak

    2018-03-01

    A spectrometer that can capture two orthogonal polarization components of a light beam is demanded for polarization-sensitive imaging systems. Here, we describe the design and implementation of a high-speed spectrometer for simultaneous capture of two orthogonal polarization components, i.e. the vertical and horizontal components, of a light beam. The design consists of a polarization beam splitter, two polarization-maintaining optical fibers, two collimators, a single line-scan camera, a focusing lens, and a reflection blazed grating. The two beam paths are aligned to be symmetrically incident on the blazed side and reverse-blazed side of the reflection grating, respectively. The two diffracted beams pass through the same focusing lens and are focused on the single line-scan sensor of a CMOS camera. The two spectra of orthogonal polarizations are imaged on 1000 pixels per spectrum. With the proposed setup, the amplitude and shape of the two detected spectra can be controlled by rotating the collimators. The technique for optical alignment of the spectrometer is presented and discussed. The two orthogonal polarization spectra can be captured simultaneously at a speed of 70,000 spectra per second. The high-speed dual spectrometer can simultaneously detect two orthogonal polarizations, an important capability for the development of polarization-sensitive optical coherence tomography. The performance of the spectrometer has been measured and analyzed.

  17. Human machine interface by using stereo-based depth extraction

    Science.gov (United States)

    Liao, Chao-Kang; Wu, Chi-Hao; Lin, Hsueh-Yi; Chang, Ting-Ting; Lin, Tung-Yang; Huang, Po-Kuan

    2014-03-01

    The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibility of a convincing 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight (ToF) camera, can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.

  18. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera

    Directory of Open Access Journals (Sweden)

    Wenyan Ci

    2016-10-01

    Full Text Available Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg–Marquardt method. A key point for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected using the Kanade–Lucas–Tomasi (KLT) algorithm. Circle matching is then applied to remove the outliers caused by mismatching of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
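
    For orientation, a common OpenCV realization of a KLT-plus-RANSAC front end is sketched below. Note that it takes the essential-matrix route, with RANSAC absorbing the outlier rejection, rather than the paper's circle matching and Levenberg–Marquardt objective; the parameter values are illustrative, and absolute scale would come from stereo depth.

    ```python
    import cv2
    import numpy as np

    def estimate_motion(prev_gray, cur_gray, K):
        """Frame-to-frame rotation R and unit-norm translation t."""
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
        good = status.ravel() == 1             # keep successfully tracked points
        p0, p1 = pts[good], nxt[good]
        # RANSAC inside findEssentialMat rejects mismatches and moving points.
        E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
        return R, t
    ```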

  19. Opportunity's View After Drive on Sol 1806 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11816 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11816 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 60.86 meters (200 feet) on the 1,806th Martian day, or sol, of Opportunity's surface mission (Feb. 21, 2009). North is at the center; south at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). Engineers designed the Sol 1806 drive to be driven backwards as a strategy to redistribute lubricant in the rover's wheels. The right-front wheel had been showing signs of increased friction. The rover's position after the Sol 1806 drive was about 2 kilometers (1.2 miles) south-southwest of Victoria Crater. Cumulative odometry was 14.74 kilometers (9.16 miles) since landing in January 2004, including 2.96 kilometers (1.84 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008). This view is presented as a cylindrical-perspective projection with geometric seam correction.

  20. Evaluation of stereoscopic video cameras synchronized with the movement of an operator's head on the teleoperation of the actual backhoe shovel

    Science.gov (United States)

    Minamoto, Masahiko; Matsunaga, Katsuya

    1999-05-01

    Operator performance while using a remote-controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet-mounted display (HMD), and a rotating stereo camera slaved to the head orientation of a freely moving stereo HMD. Results showed that the head-slaved system provided the best performance.

  1. Wide-Baseline Stereo-Based Obstacle Mapping for Unmanned Surface Vehicles

    Science.gov (United States)

    Mou, Xiaozheng; Wang, Han

    2018-01-01

    This paper proposes a wide-baseline stereo-based static obstacle mapping approach for unmanned surface vehicles (USVs). The proposed approach eliminates the complicated calibration work and the bulky rig of our previous binocular stereo system, and raises the ranging ability from 500 to 1000 m with an even larger baseline obtained from the motion of the USV. Integrating a monocular camera with GPS and compass information, the proposed system reconstructs the world locations of detected static obstacles while the USV is traveling, and an obstacle map is then built. To achieve more accurate and robust performance, multiple pairs of frames are leveraged to synthesize the final reconstruction results in a weighting model. Experimental results based on our own dataset demonstrate the high efficiency of our system. To the best of our knowledge, we are the first to address the task of wide-baseline stereo-based obstacle mapping in a maritime environment. PMID:29617293

  2. Spirit's View Beside 'Home Plate' on Sol 1823 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11971 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11971 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,823rd Martian day, or sol, of Spirit's surface mission (Feb. 17, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the view is toward the south-southwest. The rover had driven 7 meters (23 feet) eastward earlier on Sol 1823, part of maneuvering to get Spirit into a favorable position for climbing onto the low plateau called 'Home Plate.' However, after two driving attempts with negligible progress during the following three sols, the rover team changed its strategy for getting to destinations south of Home Plate. The team decided to drive Spirit at least partway around Home Plate, instead of ascending the northern edge and taking a shorter route across the top of the plateau. Layered rocks forming part of the northern edge of Home Plate can be seen near the center of the image. Rover wheel tracks are visible at the lower edge. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  3. Radiation-resistant optical sensors and cameras; Strahlungsresistente optische Sensoren und Kameras

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, G. [Imaging and Sensing Technology, Bonn (Germany)

    2008-02-15

    The introduction of video technology, i.e. 'TV', in the nuclear field was considered at an early stage. Possibilities to view spaces in nuclear facilities by means of radiation-resistant optical sensors or cameras are presented. These systems enable operators to visually monitor and control the processes occurring within such spaces. Camera systems are used, e.g., for remote surveillance of critical components in nuclear power plants and nuclear facilities, and thus also contribute to plant safety. A different application of radiation-resistant optical systems is the visual inspection of, e.g., reactor pressure vessels and the tracing of small parts inside a reactor. Camera systems are also employed in the remote disassembly of radioactively contaminated old plants. Unfortunately, the niche market for radiation-resistant camera systems hardly gives rise to the expectation of research funds becoming available for the development of new radiation-resistant optical systems for picture taking and viewing. Current efforts are devoted mainly to improvements in image evaluation and image quality. Other items on the agendas of manufacturers are the reduction of camera size, which is limited by the size of picture tubes, and the increased use of commercial CCD cameras together with adequate shielding or improved lenses. Consideration is also being given to the use of peripheral equipment and to data transmission by LAN, WAN, or Internet links to remote locations. (orig.)

  4. Validation of Pleiades Tri-Stereo DSM in Urban Areas

    Directory of Open Access Journals (Sweden)

    Emmanouil Panagiotakis

    2018-03-01

    Full Text Available We present an accurate digital surface model (DSM) derived from high-resolution Pleiades-1B 0.5 m panchromatic tri-stereo images, covering an area of 400 km2 over the Athens Metropolitan Area. Remote sensing and photogrammetry tools were applied, resulting in a 1 m × 1 m posting DSM over the study area. The accuracy of the produced DSM was evaluated against elevations measured by a differential Global Positioning System (d-GPS) and a reference DSM provided by the National Cadaster and Mapping Agency S.A. Different combinations of stereo and tri-stereo images were used and tested for the quality of the produced DSM. Results revealed that the DSM produced by the tri-stereo analysis has a root mean square error (RMSE) of 1.17 m in elevation, which lies among the best reported in the literature. On the other hand, DSMs derived by standard analysis of stereo pairs from the same sensor were found to perform worse. Line profile data showed similar patterns between the reference and produced DSM. Pleiades tri-stereo high-quality DSM products have the necessary accuracy to support applications in the domains of urban planning, climate change mitigation and adaptation, hydrological modelling, and natural hazards, being an important input for simulation models and morphological analysis at local scales.

  5. Layers of 'Cabo Frio' in 'Victoria Crater' (Stereo)

    Science.gov (United States)

    2006-01-01

    This view of 'Victoria crater' is looking southeast from 'Duck Bay' towards the dramatic promontory called 'Cabo Frio.' The small crater in the right foreground, informally known as 'Sputnik,' is about 20 meters (about 65 feet) away from the rover, the tip of the spectacular, layered, Cabo Frio promontory itself is about 200 meters (about 650 feet) away from the rover, and the exposed rock layers are about 15 meters (about 50 feet) tall. This is a red-blue stereo anaglyph generated from images taken by the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity during the rover's 952nd sol, or Martian day, (Sept. 28, 2006) using the camera's 430-nanometer filters.

  6. PHOTOMETRIC STEREO SHAPE-AND-ALBEDO-FROM-SHADING FOR PIXEL-LEVEL RESOLUTION LUNAR SURFACE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    W. C. Liu

    2017-07-01

    Full Text Available Shape and Albedo from Shading (SAfS) techniques recover pixel-wise surface details based on the relationship between terrain slopes, illumination and imaging geometry, and the energy response (i.e., image intensity) captured by the sensing system. Multiple images with different illumination geometries (i.e., photometric stereo) can provide better SAfS surface reconstruction due to the increase in observations. Photometric stereo SAfS is suitable for detailed surface reconstruction of the Moon and other extraterrestrial bodies due to the availability of photometric stereo and the less complex surface reflecting properties (i.e., albedo) of the target bodies as compared to the Earth. Considering only one photometric stereo pair (i.e., two images), pixel-variant albedo is still a major obstacle to satisfactory reconstruction and needs to be regulated by the SAfS algorithm. The illumination directional difference between the two images also becomes an important factor affecting the reconstruction quality. This paper presents a photometric stereo SAfS algorithm for pixel-level resolution lunar surface reconstruction. The algorithm includes a hierarchical optimization architecture for handling pixel-variant albedo and improving performance. Using Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) photometric stereo images, the reconstructed topography (i.e., the DEM) is compared with a DEM produced independently by photogrammetric methods. This paper also addresses the effect of the illumination directional difference within a photometric stereo pair on the reconstruction quality of the proposed algorithm by both mathematical and experimental analysis; LROC NAC images under multiple illumination directions are utilized for the experimental comparison. The mathematical derivation suggests that an illumination azimuthal difference of 90 degrees between the two images is recommended to achieve the best reconstruction quality.
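
    With three or more images, the classical Lambertian least-squares solution recovers per-pixel normal and albedo directly; this is the baseline that the two-image (regularised) case relaxes. A generic sketch, not the paper's hierarchical algorithm:

    ```python
    import numpy as np

    def photometric_stereo(images, lights):
        """Per-pixel normals and albedo from k >= 3 images, I = albedo * (L @ n).

        images : (k, H, W) intensities under k known illumination directions.
        lights : (k, 3) unit illumination vectors.
        """
        k, h, w = images.shape
        I = images.reshape(k, -1)                       # stack pixels column-wise
        G = np.linalg.lstsq(lights, I, rcond=None)[0]   # (3, H*W), G = albedo * n
        albedo = np.linalg.norm(G, axis=0)
        normals = G / np.maximum(albedo, 1e-12)         # normalize safely
        return normals.reshape(3, h, w), albedo.reshape(h, w)
    ```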

  7. Relaxor-PT Single Crystal Piezoelectric Sensors

    Directory of Open Access Journals (Sweden)

    Xiaoning Jiang

    2014-07-01

    Full Text Available Relaxor-PbTiO3 piezoelectric single crystals have been widely used in a broad range of electromechanical devices, including piezoelectric sensors, actuators, and transducers. This paper reviews the unique properties of these single crystals for piezoelectric sensors. Design, fabrication and characterization of various relaxor-PT single crystal piezoelectric sensors and their applications are presented and compared with their piezoelectric ceramic counterparts. Newly applicable fields and future trends of relaxor-PT sensors are also suggested in this review paper.

  8. Creating a distortion characterisation dataset for visual band cameras using fiducial markers

    CSIR Research Space (South Africa)

    Jermy, R

    2015-11-01

    Full Text Available . This will allow other researchers to perform the same steps and create better algorithms to accurately locate fiducial markers and calibrate cameras. A second dataset that can be used to assess the accuracy of the stereo vision of two calibrated cameras is also...
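
    A minimal sketch of the detection step for one common marker family (ArUco), using the classic opencv-contrib API; the dictionary choice and file name are assumptions, and the dataset's actual marker type may differ.

    ```python
    import cv2

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
    corners, ids, rejected = cv2.aruco.detectMarkers(frame, dictionary)
    print("detected marker ids:", None if ids is None else ids.ravel())
    ```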

  9. New Sensors for Cultural Heritage Metric Survey: The ToF Cameras

    Directory of Open Access Journals (Sweden)

    Filiberto Chiabrando

    2011-12-01

    Full Text Available ToF cameras are new instruments based on CCD/CMOS sensors which measure distances instead of radiometry. The resulting point clouds show the same properties (in terms of both accuracy and resolution) as point clouds acquired by means of traditional LiDAR devices. ToF cameras are cheap instruments (less than 10,000 €) based on real-time video distance measurement and can represent an interesting alternative to the more expensive LiDAR instruments. In addition, the limited weight and dimensions of ToF cameras allow a reduction of some practical problems such as transportation and on-site management. Most commercial ToF cameras use the phase-shift method to measure distances. Due to the use of only one wavelength, most of them have a limited range of application (usually about 5 or 10 m). After a brief description of the main characteristics of these instruments, this paper explains and comments on the results of the first experimental applications of ToF cameras in Cultural Heritage 3D metric survey. The possibility of acquiring more than 30 frames/s and future developments of these devices, in terms of using more than one wavelength to overcome the ambiguity problem, allow us to foresee new interesting applications.

  10. Calibration of a dual-PTZ camera system for stereo vision

    Science.gov (United States)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2010-08-01

    In this paper, we propose a calibration process for the intrinsic and extrinsic parameters of dual-PTZ camera systems. The calibration is based on a complete definition of six coordinate systems fixed at the image planes and the pan and tilt rotation axes of the cameras. Misalignments between estimated and ideal coordinates of image corners are formed into cost values to be minimized by the Nelder-Mead simplex optimization method. Experimental results show that the system is able to obtain 3D coordinates of objects with a consistent accuracy of 1 mm when the distance between the dual-PTZ camera set and the objects is from 0.9 to 1.1 meters.
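
    A minimal sketch of the optimization core, assuming SciPy: a sum-of-squared corner misalignment cost minimized with the derivative-free Nelder-Mead simplex method. The toy observation model and parameter count below are placeholders for the paper's pan/tilt camera model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def calibration_cost(params, observed, predict):
        """Sum of squared misalignments between predicted and observed corners."""
        return np.sum((predict(params) - observed) ** 2)

    # Toy stand-ins for the pan/tilt projection model and corner measurements.
    rng = np.random.default_rng(0)
    observed = rng.normal(size=(8, 2))           # observed image corners
    predict = lambda p: p.reshape(8, 2)          # placeholder corner model
    result = minimize(calibration_cost, x0=np.zeros(16),
                      args=(observed, predict), method="Nelder-Mead",
                      options={"xatol": 1e-8, "fatol": 1e-12, "maxiter": 20000})
    print(result.fun)                            # residual cost after the search
    ```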

  11. A new method for non-invasive biomass determination based on stereo photogrammetry.

    Science.gov (United States)

    Syngelaki, Maria; Hardner, Matthias; Oberthuer, Patrick; Bley, Thomas; Schneider, Danilo; Lenk, Felix

    2018-03-01

    A novel, non-destructive method for the biomass estimation of biological samples on culture dishes was developed. To achieve this, a photogrammetric system was constructed, consisting of a digital single-lens reflex (DSLR) camera, an illuminated platform on which the culture dishes are positioned, and an Arduino board which controls the capturing process. The camera was mounted on a holder that set it at different tilt angles, and the platform rotated to capture images from different directions. Software based on stereo photogrammetry was developed for the three-dimensional (3D) reconstruction of the samples. The proof of concept was demonstrated in a series of experiments with plant tissue cultures, specifically with callus cultures of Salvia fruticosa and Ocimum basilicum. Over a period of 14 days, images of these cultures were acquired and 3D reconstructions and volumetric data were obtained. The volumetric data correlated well with the experimental measurements and made the calculation of the specific growth rate, µmax, possible. The µmax value was 0.14 day⁻¹ for the S. fruticosa samples and 0.16 day⁻¹ for O. basilicum. The developed method demonstrates the high potential of this photogrammetric approach in the biological sciences.
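
    The growth-rate step is a one-liner once volumes are available: assuming exponential growth V(t) = V0·exp(µ·t), a line fit of ln V against time gives µ as the slope. A minimal sketch with illustrative numbers:

    ```python
    import numpy as np

    def specific_growth_rate(volumes, days):
        """Slope of ln(volume) vs. time, i.e. mu in 1/day units."""
        return np.polyfit(days, np.log(volumes), 1)[0]

    # A culture doubling roughly every five days gives mu of about 0.14/day,
    # the order of the values reported for the callus cultures.
    print(specific_growth_rate([1.0, 2.0, 4.0], [0, 5, 10]))
    ```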

  12. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera

    Directory of Open Access Journals (Sweden)

    Thomas C. Wilkes

    2016-10-01

    Full Text Available Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements.

  13. Estimation of visual maps with a robot network equipped with vision sensors.

    Science.gov (United States)

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.
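
    The relative measurement a rectified stereo camera provides for a landmark can be sketched as a disparity-to-depth computation (assumed pinhole parameters, not the paper's implementation):

      import numpy as np

      def stereo_landmark(u_left, u_right, v, f, baseline, cx, cy):
          """3D landmark position in the left-camera frame from a stereo match;
          f in pixels, baseline in metres, rectified images assumed."""
          d = u_left - u_right              # disparity in pixels
          z = f * baseline / d
          x = (u_left - cx) * z / f
          y = (v - cy) * z / f
          return np.array([x, y, z])

      print(stereo_landmark(400., 380., 260., f=500., baseline=0.12, cx=320., cy=240.))
      # -> [0.48 0.12 3.  ], i.e. a landmark about 3 m in front of the camera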

  14. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

    Directory of Open Access Journals (Sweden)

    Arturo Gil

    2010-05-01

    Full Text Available In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.

  15. Rigorous Photogrammetric Processing of CHANG'E-1 and CHANG'E-2 Stereo Imagery for Lunar Topographic Mapping

    Science.gov (United States)

    Di, K.; Liu, Y.; Liu, B.; Peng, M.

    2012-07-01

    Chang'E-1(CE-1) and Chang'E-2(CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of CE-1 and CE-2 CCD cameras based on push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinate of a ground point in lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining EOPs by correcting the attitude angle bias, 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) are automatically generated.
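
    The space intersection step can be sketched as a least-squares intersection of two viewing rays (midpoint of the common perpendicular); the rigorous push-broom orientation that produces the rays is not reproduced here, and the example rays are illustrative:

      import numpy as np

      def intersect_rays(c1, d1, c2, d2):
          """Least-squares intersection of rays c + t*d from two images."""
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          # Normal equations for minimising |(c1 + t1*d1) - (c2 + t2*d2)|^2
          a = np.array([[d1 @ d1, -(d1 @ d2)], [d1 @ d2, -(d2 @ d2)]])
          b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
          t1, t2 = np.linalg.solve(a, b)
          return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

      p = intersect_rays(np.zeros(3), np.array([0., 0., 1.]),
                         np.array([1., 0., 0.]), np.array([-1., 0., 1.]))
      print(p)  # ~[0, 0, 1]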

  16. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.

    Science.gov (United States)

    Bulczak, David; Lambers, Martin; Kolb, Andreas

    2017-12-22

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.
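
    For context, a common four-bucket AMCW demodulation (one of several sign conventions, not necessarily the one used in the simulator) recovers phase, and hence distance, from four correlation samples per pixel:

      import math

      C = 299_792_458.0  # speed of light, m/s

      def amcw_distance(a0, a1, a2, a3, f_mod):
          """Samples a0..a3 taken at 0, 90, 180, 270 degrees of the modulation."""
          phase = math.atan2(a3 - a1, a0 - a2) % (2.0 * math.pi)
          return C * phase / (4.0 * math.pi * f_mod)

      # Ideal noise-free samples for a 3 m target at 30 MHz modulation:
      phi = 4.0 * math.pi * 30e6 * 3.0 / C
      print(amcw_distance(math.cos(phi), -math.sin(phi),
                          -math.cos(phi), math.sin(phi), 30e6))  # ~3.0 m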

  17. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects

    Directory of Open Access Journals (Sweden)

    David Bulczak

    2017-12-01

    Full Text Available In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.

  18. Effects of illumination differences on photometric stereo shape-and-albedo-from-shading for precision lunar surface reconstruction

    Science.gov (United States)

    Chung Liu, Wai; Wu, Bo; Wöhler, Christian

    2018-02-01

    Photoclinometric surface reconstruction techniques such as Shape-from-Shading (SfS) and Shape-and-Albedo-from-Shading (SAfS) retrieve topographic information of a surface on the basis of the reflectance information embedded in the image intensity of each pixel. SfS or SAfS techniques have been utilized to generate pixel-resolution digital elevation models (DEMs) of the Moon and other planetary bodies. Photometric stereo SAfS analyzes images under multiple illumination conditions to improve the robustness of reconstruction. In this case, the directional difference in illumination between the images is likely to affect the quality of the reconstruction result. In this study, we quantitatively investigate the effects of illumination differences on photometric stereo SAfS. Firstly, an algorithm for photometric stereo SAfS is developed, and then, an error model is derived to analyze the relationships between the azimuthal and zenith angles of illumination of the images and the reconstruction qualities. The developed algorithm and error model were verified with high-resolution images collected by the Narrow Angle Camera (NAC) of the Lunar Reconnaissance Orbiter Camera (LROC). Experimental analyses reveal that (1) the resulting error in photometric stereo SAfS depends on both the azimuthal and the zenith angles of illumination as well as the general intensity of the images and (2) the predictions from the proposed error model are consistent with the actual slope errors obtained by photometric stereo SAfS using the LROC NAC images. The proposed error model enriches the theory of photometric stereo SAfS and is of significance for optimized lunar surface reconstruction based on SAfS techniques.

  19. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' field of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  20. NEW METHOD FOR THE CALIBRATION OF MULTI-CAMERA MOBILE MAPPING SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. P. Kersting

    2012-07-01

    Full Text Available Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP): the lever arm offsets and the boresight angles relating the cameras and the IMU body frame, and the ROP among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method, which has the ability of estimating these two sets of ROP, is devised. Besides the ability to estimate the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO) where the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by few authors for single-camera systems. In this paper, a new modification to the collinearity equations for GPS/INS-assisted multicamera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.

  1. New Method for the Calibration of Multi-Camera Mobile Mapping Systems

    Science.gov (United States)

    Kersting, A. P.; Habib, A.; Rau, J.

    2012-07-01

    Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP): the lever arm offsets and the boresight angles relating the cameras and the IMU body frame and the ROP among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method, which has the ability of estimating these two sets of ROP, is devised. Besides the ability to estimate the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO) where the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by few authors for single-camera systems. In this paper, a new modification to the collinearity equations for GPS/INS-assisted multicamera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.
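
    A minimal sketch of how the two sets of mounting parameters enter the pose of each camera (hypothetical helper with rotation matrices; the modified collinearity equations themselves are not reproduced):

      import numpy as np

      def camera_pose(r_body, p_body, lever_arm, r_boresight):
          """Compose the GPS/INS-derived body pose with the mounting parameters:
          camera rotation = R_body * R_boresight,
          camera position = body position + R_body * lever_arm."""
          return r_body @ r_boresight, p_body + r_body @ lever_arm

      # Identity attitude with a 10 cm lever arm along the body x-axis:
      r, p = camera_pose(np.eye(3), np.array([100., 200., 50.]),
                         np.array([0.10, 0., 0.]), np.eye(3))
      print(p)  # [100.1 200.   50. ]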

  2. Multivariate Sensitivity Analysis of Time-of-Flight Sensor Fusion

    Science.gov (United States)

    Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger

    2014-09-01

    Obtaining three-dimensional scenery data is an essential task in computer vision, with diverse applications in various areas such as manufacturing and quality control, security and surveillance, or user interaction and entertainment. Dedicated Time-of-Flight sensors can provide detailed scenery depth in real-time and overcome shortcomings of traditional stereo analysis. Nonetheless, they do not provide texture information and have limited spatial resolution. Therefore such sensors are typically combined with high resolution video sensors. Time-of-Flight Sensor Fusion is a highly active field of research. Over the recent years, there have been multiple proposals addressing important topics such as texture-guided depth upsampling and depth data denoising. In this article we take a step back and look at the underlying principles of ToF sensor fusion. We derive the ToF sensor fusion error model and evaluate its sensitivity to inaccuracies in camera calibration and depth measurements. In accordance with our findings, we propose certain courses of action to ensure high quality fusion results. With this multivariate sensitivity analysis of the ToF sensor fusion model, we provide an important guideline for designing, calibrating and running sophisticated Time-of-Flight sensor fusion capture systems.
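
    The fusion step whose sensitivity is being analysed is essentially a reprojection of each ToF measurement into the video camera; a sketch with assumed intrinsics (k_tof, k_rgb) and extrinsics (r, t), where small calibration or depth errors visibly shift the fused texture lookup:

      import numpy as np

      def fuse_tof_pixel(u, v, depth, k_tof, r, t, k_rgb):
          """Map a ToF pixel (u, v) with metric depth into colour-image coordinates."""
          ray = np.linalg.inv(k_tof) @ np.array([u, v, 1.0])
          p_tof = depth * ray / ray[2]      # back-project to a 3D point
          p_rgb = r @ p_tof + t             # extrinsic transform between the sensors
          uvw = k_rgb @ p_rgb
          return uvw[:2] / uvw[2]

      k = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
      print(fuse_tof_pixel(320., 240., 2.0, k, np.eye(3), np.array([0.05, 0., 0.]), k))
      # -> [332.5 240. ]: a 5 cm baseline shifts the lookup by 12.5 px at 2 m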

  3. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

    DEFF Research Database (Denmark)

    Kjær-Nielsen, Anders; Jensen, Lars Baunegaard With; Sørensen, Anders Stengaard

    2008-01-01

    In this paper a low level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400 Mbit/s). The system is used...

  4. Optomechanical System Development of the AWARE Gigapixel Scale Camera

    Science.gov (United States)

    Son, Hui S.

    Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.

  5. APPLICATION OF SENSOR FUSION TO IMPROVE UAV IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    S. Jabari

    2017-08-01

    Full Text Available Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieve higher quality images and accordingly higher accuracy classification results.

  6. Application of Sensor Fusion to Improve Uav Image Classification

    Science.gov (United States)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieve higher quality images and accordingly higher accuracy classification results.
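
    As a generic illustration of this kind of fusion (the Brovey ratio method, one standard pan-sharpening scheme, and not necessarily the configuration used in this study), each upsampled band is rescaled so that the pixel intensity matches the sharper Pan band:

      import numpy as np

      def brovey_pansharpen(ms, pan, eps=1e-6):
          """ms: (H, W, bands) colour image upsampled to the pan grid, float;
          pan: (H, W) panchromatic band, co-registered with ms."""
          intensity = ms.mean(axis=2)
          return ms * (pan / (intensity + eps))[..., None]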

  7. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were (1) trigger speed, (2) passive infrared vs. microwave sensor, (3) white vs. infrared flash, and (4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  8. An automated, open-source (NASA Ames Stereo Pipeline) workflow for mass production of high-resolution DEMs from commercial stereo satellite imagery: Application to mountain glaciers in the contiguous US

    Science.gov (United States)

    Shean, D. E.; Arendt, A. A.; Whorton, E.; Riedel, J. L.; O'Neel, S.; Fountain, A. G.; Joughin, I. R.

    2016-12-01

    We adapted the open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline an automated processing workflow for 0.5 m GSD DigitalGlobe WorldView-1/2/3 and GeoEye-1 along-track and cross-track stereo image data. Output DEM products are posted at 2, 8, and 32 m. While it is possible to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We have leveraged these resources to produce dense time series and regional mosaics for the Earth's ice sheets. We are now processing and analyzing all available 2008-2016 commercial stereo DEMs over glaciers and perennial snowfields in the contiguous US. We are using these records to study long-term, interannual, and seasonal volume change and glacier mass balance. This analysis will provide a new assessment of regional climate change, and will offer basin-scale analyses of snowpack evolution and snow/ice melt runoff for water resource applications.

  9. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    Science.gov (United States)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building a fine 3D model from outdoor to indoor is becoming a necessity for protecting cultural tourism resources. However, existing 3D modelling technologies mainly focus on outdoor areas. Actually, a 3D model should contain detailed descriptions of both its appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which can provide a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes of an object or a scene, as well as physical property information such as the materials and textures. On the basis of this information, a 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture existing in two typical cultural tourism zones, that is, Tibetan and Qiang ethnic minority villages in Sichuan Jiuzhaigou Scenic Area and Tujia ethnic minority villages in Hubei Shennongjia Nature Reserve, providing a new method and platform for protection of minority cultural characteristics, 3D reconstruction and cultural tourism.

  10. X-ray imaging using digital cameras

    Science.gov (United States)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  11. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Yu Lu

    2016-04-01

    Full Text Available A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor during radiometric response calibration, to eliminate the influence of the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the objective luminance more faithfully; this compensates for the limitations of stitching approaches that rely on smoothing alone. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that cover a large field of view. The dynamic range is expanded by 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.

  12. Single Nanoparticle Plasmonic Sensors

    Directory of Open Access Journals (Sweden)

    Manish Sriram

    2015-10-01

    Full Text Available The adoption of plasmonic nanomaterials in optical sensors, coupled with the advances in detection techniques, has opened the way for biosensing with single plasmonic particles. Single nanoparticle sensors offer the potential to analyse biochemical interactions at a single-molecule level, thereby allowing us to capture even more information than ensemble measurements. We introduce the concepts behind single nanoparticle sensing and how the localised surface plasmon resonances of these nanoparticles are dependent upon their materials, shape and size. Then we outline the different synthetic approaches, like citrate reduction, seed-mediated and seedless growth, that enable the synthesis of gold and silver nanospheres, nanorods, nanostars, nanoprisms and other nanostructures with tunable sizes. Further, we go into the aspects related to purification and functionalisation of nanoparticles, prior to the fabrication of sensing surfaces. Finally, the recent developments in single nanoparticle detection, spectroscopy and sensing applications are discussed.

  13. TOPOGRAPHIC LOCAL ROUGHNESS EXTRACTION AND CALIBRATION OVER MARTIAN SURFACE BY VERY HIGH RESOLUTION STEREO ANALYSIS AND MULTI SENSOR DATA FUSION

    Directory of Open Access Journals (Sweden)

    J. R. Kim

    2012-08-01

    Full Text Available Planetary topography has been a main focus of in-orbit remote sensing. In spite of recent developments in active and passive sensing technologies to reconstruct three-dimensional planetary topography, the resolution limit of range measurement is theoretically and practically obvious. Therefore, the extraction of the inner topographic height variation within a measurement spot is a very challenging and beneficial topic for many application fields such as the identification of landforms, Aeolian process analysis and the risk assessment of planetary landers. In this study we tried to extract the topographic height variation over the martian surface, so-called local roughness, with different approaches. One method employs the laser-beam broadening effect and the other multi-angle optical imaging. In both cases, precise preprocessing employing high-accuracy DTMs (Digital Terrain Models) was introduced to minimise possible errors. Since a processing routine to extract very high resolution DTMs of up to 0.5-4 m grid spacing from HiRISE (High Resolution Imaging Science Experiment) and 20-10 m DTMs from CTX (Context Camera) stereo pairs has been developed, it is now possible to calibrate the local roughness against the height variation calculated from very high resolution topographic products. Three test areas were chosen and processed to extract local roughness with the co-registered multi-sensor data sets. Even though the extracted local roughness products still show a strong correlation with topographic slopes, we demonstrated the potential of the height-variation extraction and calibration methods.

  14. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    Science.gov (United States)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for stereo deflectometry systems to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is seriously influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error to the system due to an inaccurate imaging model and distortion elimination. The proposed calibration method compensates system distortion based on an iterative algorithm instead of the conventional distortion mathematical model. The initial values of the system parameters are calculated from the fringe patterns displayed on the systemic LCD screen through a reflection off a markerless flat mirror. An iterative algorithm is proposed to compensate system distortion and optimize camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry. The PV (peak value) of the measurement error of a flat mirror can be reduced from 282 nm, obtained with the conventional calibration approach, to 69.7 nm by applying the proposed method.

  15. Disparity Map Generation from Illumination Variant Stereo Images Using Efficient Hierarchical Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Viral H. Borisagar

    2014-01-01

    Full Text Available A novel hierarchical stereo matching algorithm is presented which gives a disparity map as output from an illumination-variant stereo pair. Illumination differences between two stereo images can lead to undesirable output. Stereo image pairs often experience illumination variations due to many factors, such as real and practical situations, spatially and temporally separated camera positions, environmental illumination fluctuation, and changes in the strength or position of the light sources. Window matching and dynamic programming techniques are employed for disparity map estimation. A good-quality disparity map is obtained with the optimized path. Homomorphic filtering is used as a preprocessing step to lessen illumination variation between the stereo images. Anisotropic diffusion is used to refine the disparity map to give a high-quality disparity map as the final output. The robust performance of the proposed approach is suitable for real-life circumstances where there will always be illumination variation between the images. The matching is carried out in a sequence of images representing the same scene but at different resolutions. The hierarchical approach adopted decreases the computation time of the stereo matching problem. This algorithm can be helpful in applications like robot navigation, extraction of information from aerial surveys, 3D scene reconstruction, and military and security applications. The similarity measure SAD is often sensitive to illumination variation and produces unacceptable disparity maps for illumination-variant left and right images. Experimental results show that our proposed algorithm produces quality disparity maps for a wide range of both illumination-variant and illumination-invariant stereo image pairs.
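
    A brute-force sketch of window matching with the SAD measure (illustrative only; the hierarchical dynamic-programming optimization and homomorphic preprocessing of the paper are not shown):

      import numpy as np

      def sad_disparity(left, right, max_disp=32, win=5):
          """left/right: rectified greyscale images as float arrays."""
          h, w = left.shape
          r = win // 2
          disp = np.zeros((h, w), dtype=np.int32)
          for y in range(r, h - r):
              for x in range(r + max_disp, w - r):
                  patch = left[y - r:y + r + 1, x - r:x + r + 1]
                  costs = [np.abs(patch - right[y - r:y + r + 1,
                                                x - d - r:x - d + r + 1]).sum()
                           for d in range(max_disp)]
                  disp[y, x] = int(np.argmin(costs))
          return disp

    Because SAD compares raw intensities, any illumination offset between the two images corrupts the costs, which is exactly why the paper applies homomorphic filtering first.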

  16. Opportunity's View After Long Drive on Sol 1770 (Stereo)

    Science.gov (United States)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11791 (figures removed for brevity; see original site). NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 104 meters (341 feet) on the 1,770th Martian day, or sol, of Opportunity's surface mission (January 15, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). Prior to the Sol 1770 drive, Opportunity had driven less than a meter since Sol 1713 (November 17, 2008), while it used the tools on its robotic arm first to examine a meteorite called 'Santorini' during weeks of restricted communication while the sun was nearly in line between Mars and Earth, then to examine bedrock and soil targets near Santorini. The rover's position after the Sol 1770 drive was about 1.1 kilometer (two-thirds of a mile) south southwest of Victoria Crater. Cumulative odometry was 13.72 kilometers (8.53 miles) since landing in January 2004, including 1.94 kilometers (1.21 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008). This view is presented as a cylindrical-perspective projection with geometric seam correction.

  17. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped significantly, opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  18. RIGOROUS PHOTOGRAMMETRIC PROCESSING OF CHANG'E-1 AND CHANG'E-2 STEREO IMAGERY FOR LUNAR TOPOGRAPHIC MAPPING

    Directory of Open Access Journals (Sweden)

    K. Di

    2012-07-01

    Full Text Available Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of CE-1 and CE-2 CCD cameras based on push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinate of a ground point in lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining EOPs by correcting the attitude angle bias, 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) are automatically generated.

  19. Assessment and Calibration of a RGB-D Camera (Kinect v2 Sensor) Towards a Potential Use for Close-Range 3D Modeling

    Directory of Open Access Journals (Sweden)

    Elise Lachat

    2015-10-01

    Full Text Available In the last decade, RGB-D cameras - also called range imaging cameras - have evolved continuously. Because of their limited cost and their ability to measure distances at a high frame rate, such sensors are especially appreciated for applications in robotics or computer vision. The Kinect v1 (Microsoft), released in November 2010, promoted the use of RGB-D cameras, and a second version of the sensor arrived on the market in July 2014. Since it is possible to obtain point clouds of an observed scene at a high frequency, one could imagine applying this type of sensor to answer the need for 3D acquisition. However, due to the technology involved, some questions have to be considered, such as the suitability and accuracy of RGB-D cameras for close-range 3D modeling. In this respect, the quality of the acquired data is a major concern. In this paper, the use of a recent Kinect v2 sensor to reconstruct small objects in three dimensions has been investigated. To achieve this goal, a survey of the sensor characteristics as well as a calibration approach are relevant. After an accuracy assessment of the produced models, the benefits and drawbacks of the Kinect v2 compared to the first version of the sensor and then to photogrammetry are discussed.

  20. A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application.

    Science.gov (United States)

    Vivacqua, Rafael; Vassallo, Raquel; Martins, Felipe

    2017-10-16

    Autonomous driving on public roads requires precise localization within the range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in urban environments, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive, and stereo vision requires powerful dedicated hardware to process the cameras' information. In this context, this article presents a low-cost architecture of sensors and a data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane-marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle on a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.
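
    A minimal unicycle-model sketch of the dead-reckoning component (hypothetical function; the article's actual fusion with the lane-marking detector is not reproduced):

      import math

      def dead_reckon(pose, v, omega, dt):
          """Integrate speed v (m/s) and yaw rate omega (rad/s) over dt seconds.
          Drift grows with time, hence the correction from lane markings."""
          x, y, theta = pose
          return (x + v * math.cos(theta) * dt,
                  y + v * math.sin(theta) * dt,
                  theta + omega * dt)

      print(dead_reckon((0.0, 0.0, 0.0), v=10.0, omega=0.05, dt=0.1))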

  1. The prototype cameras for trans-Neptunian automatic occultation survey

    Science.gov (United States)

    Wang, Shiang-Yu; Ling, Hung-Hsu; Hu, Yen-Sang; Geary, John C.; Chang, Yin-Chang; Chen, Hsin-Yo; Amato, Stephen M.; Huang, Pin-Jie; Pratlong, Jerome; Szentgyorgyi, Andrew; Lehner, Matthew; Norton, Timothy; Jorden, Paul

    2016-08-01

    The Transneptunian Automated Occultation Survey (TAOS II) is a three-robotic-telescope project to detect stellar occultation events generated by TransNeptunian Objects (TNOs). The TAOS II project aims to monitor about 10,000 stars simultaneously at 20 Hz to enable a statistically significant event rate. The TAOS II camera is designed to cover the 1.7-degree-diameter field of view of the 1.3 m telescope with 10 mosaicked 4.5k×2k CMOS sensors. The new CMOS sensor (CIS 113) has a back-illuminated thinned structure and high sensitivity, providing performance similar to that of back-illuminated thinned CCDs. Due to the requirements of high performance and high speed, development of the new CMOS sensor is still in progress. Before the science arrays are delivered, a prototype camera has been developed to help commission the robotic telescope system. The prototype camera uses the small-format e2v CIS 107 device but with the same dewar and similar control electronics as the TAOS II science camera. The sensors, mounted on a single Invar plate, are cooled by a cryogenic cooler to the operating temperature of about 200 K, as the science arrays will be. The Invar plate is connected to the dewar body through a supporting ring with three G10 bipods. The control electronics consist of an analog part and a Xilinx FPGA-based digital circuit. One FPGA is needed to control and process the signal from a CMOS sensor for 20 Hz region-of-interest (ROI) readout.

  2. Euratom experience with video surveillance - Single camera and other non-multiplexed

    International Nuclear Information System (INIS)

    Otto, P.; Cozier, T.; Jargeac, B.; Castets, J.P.; Wagner, H.G.; Chare, P.; Roewer, V.

    1991-01-01

    The Euratom Safeguards Directorate (ESD) has been using a number of single camera video systems (Ministar, MIVS, DCS) and non-multiplexed multi-camera systems (Digiquad) for routine safeguards surveillance applications during the last four years. This paper describes aspects of system design and considerations relevant for installation. It reports on system reliability and performance and presents suggestions on future improvements

  3. A pilot project combining multispectral proximal sensors and digital cameras for monitoring tropical pastures

    Science.gov (United States)

    Handcock, Rebecca N.; Gobbett, D. L.; González, Luciano A.; Bishop-Hurley, Greg J.; McGavin, Sharon L.

    2016-08-01

    Timely and accurate monitoring of pasture biomass and ground cover is necessary in livestock production systems to ensure productive and sustainable management. Interest in the use of proximal sensors for monitoring pasture status in grazing systems has increased, since data can be returned in near real time. Proximal sensors have the potential for deployment on large properties where remote sensing may not be suitable due to issues such as spatial scale or cloud cover. There are unresolved challenges in gathering reliable sensor data and in calibrating raw sensor data to values such as pasture biomass or vegetation ground cover, which allow meaningful interpretation of sensor data by livestock producers. Our goal was to assess whether a combination of proximal sensors could be reliably deployed to monitor tropical pasture status in an operational beef production system, as a precursor to designing a full sensor deployment. We use this pilot project to (1) illustrate practical issues around sensor deployment, (2) develop the methods necessary for the quality control of the sensor data, and (3) assess the strength of the relationships between vegetation indices derived from the proximal sensors and field observations across the wet and dry seasons. Proximal sensors were deployed at two sites in a tropical pasture on a beef production property near Townsville, Australia. Each site was monitored by a Skye SKR four-band multispectral sensor (every 1 min), a digital camera (every 30 min), and a soil moisture sensor (every 1 min), each of which was operated over 18 months. Raw data from each sensor were processed to calculate multispectral vegetation indices. The data capture from the digital cameras was more reliable than that from the multispectral sensors, which had up to 67 % of data discarded after data cleaning and quality control for technical issues related to the sensor design, as well as environmental issues such as water incursion and insect infestations. We recommend

  4. Calculation for simulation of archery goal value using a web camera and ultrasonic sensor

    Science.gov (United States)

    Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti

    2017-08-01

    Development of a digital indoor archery simulator based on embedded systems is a solution to the limited availability of adequate fields or open spaces, especially in big cities. Development of the device requires simulations that calculate the value scored on the target, based on a parabolic-motion model defined by the initial velocity and the direction of motion of the arrow as it travels to the target. The simulator device should be complemented with a device measuring initial velocity using ultrasonic sensors and a device measuring the direction toward the target using a digital camera. The methodology uses research and development of application software following a modeling and simulation approach. The research objective is to create a simulation application that calculates the value scored on the target by the arrows, as a preliminary stage for the development of an archery simulator device. Implementing the score calculation in an application program produces an archery simulation game that can be used as a reference for developing a digital indoor archery simulator with embedded systems using ultrasonic sensors and web cameras. The developed application compares the simulated outer radius of the scoring circle with that produced by a camera at a distance of three meters.
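
    A sketch of the parabolic-motion calculation described above (gravity only, no drag; the release height and the 61 mm ring spacing of a standard 122 cm target face are assumptions, not values from the paper):

      import math

      G = 9.81  # m/s^2

      def arrow_height_at_target(v0, angle_deg, distance, release_height=1.5):
          """Height (m) at which an arrow launched with speed v0 (m/s) at the
          given elevation angle crosses a vertical plane `distance` m away."""
          a = math.radians(angle_deg)
          t = distance / (v0 * math.cos(a))          # time to reach the plane
          return release_height + v0 * math.sin(a) * t - 0.5 * G * t * t

      def ring_score(hit_offset_m, ring_width=0.061):
          """Score 10 down to 0 by concentric rings around the target centre."""
          return max(0, 10 - int(hit_offset_m / ring_width))

      print(arrow_height_at_target(50.0, 2.0, 18.0))  # ~1.49 m at an 18 m range
      print(ring_score(0.05))                          # 10: inside the first ring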

  5. UCalMiCeL – UNIFIED INTRINSIC AND EXTRINSIC CALIBRATION OF A MULTI-CAMERA-SYSTEM AND A LASERSCANNER

    Directory of Open Access Journals (Sweden)

    M. Hillemann

    2017-08-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) with adequate sensors enable new applications in the scope between expensive, large-scale, aircraft-carried remote sensing and time-consuming, small-scale, terrestrial surveying. To perform these applications, cameras and laserscanners are a good sensor combination, due to their complementary properties. To exploit this sensor combination, the intrinsics and relative poses of the individual cameras and the relative poses of the cameras and the laserscanners have to be known. In this manuscript, we present a calibration methodology for the Unified Intrinsic and Extrinsic Calibration of a Multi-Camera-System and a Laserscanner (UCalMiCeL). The innovation of this methodology, which is an extension of the calibration of a single camera to a line laserscanner, is a unifying bundle adjustment step to ensure an optimal calibration of the entire sensor system. We use generic camera models, including pinhole, omnidirectional and fisheye cameras. For our approach, the laserscanner and each camera have to share a joint field of view, whereas the fields of view of the individual cameras may be disjoint. The calibration approach is tested with a sensor system consisting of two fisheye cameras and a line laserscanner with a range measuring accuracy of 30 mm. We evaluate the estimated relative poses between the cameras quantitatively by using an additional calibration approach for Multi-Camera-Systems based on control points which are accurately measured by a motion capture system. In the experiments, our novel calibration method achieves a relative pose estimation with a deviation below 1.8° and 6.4 mm.

  6. REAL-TIME CAMERA GUIDANCE FOR 3D SCENE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    F. Schindler

    2012-07-01

    Full Text Available We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points, we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, as well as online flight planning of unmanned aerial vehicles.

  7. Microgeometry capture and RGB albedo estimation by photometric stereo without demosaicing

    Science.gov (United States)

    Quéau, Yvain; Pizenberg, Mathieu; Durou, Jean-Denis; Cremers, Daniel

    2017-03-01

    We present a photometric stereo-based system for retrieving the RGB albedo and the fine-scale details of an opaque surface. In order to limit specularities, the system uses a controllable diffuse illumination, which is calibrated using a dedicated procedure. In addition, we rather handle RAW, non-demosaiced RGB images, which both avoids uncontrolled operations on the sensor data and simplifies the estimation of the albedo in each color channel and of the normals. We finally show on real-world examples the potential of photometric stereo for the 3D-reconstruction of very thin structures from a wide variety of surfaces.
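
    The classical per-pixel least-squares solve behind photometric stereo (a generic formulation with k >= 3 known light directions, not the paper's calibrated diffuse-illumination pipeline):

      import numpy as np

      def photometric_stereo(intensities, lights):
          """intensities: (k, H, W) images; lights: (k, 3) unit light directions.
          Solves I = L @ (albedo * n) per pixel in the least-squares sense."""
          k, h, w = intensities.shape
          i_flat = intensities.reshape(k, -1)
          g, *_ = np.linalg.lstsq(lights, i_flat, rcond=None)   # (3, H*W)
          albedo = np.linalg.norm(g, axis=0)
          normals = g / np.maximum(albedo, 1e-9)
          return normals.reshape(3, h, w), albedo.reshape(h, w)

    With RAW, non-demosaiced data as in the paper, a solve of this kind can be run per colour channel, yielding the RGB albedo directly.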

  8. Stereo-vision-based terrain mapping for off-road autonomous navigation

    Science.gov (United States)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-05-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
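
    A toy sketch of the map-building step (max-height binning of stereo range points into a grid; the actual JPL maps additionally encode classification, roughness, traversability cost and confidence):

      import numpy as np

      def update_elevation_map(grid, points, cell=0.2, origin=(0.0, 0.0)):
          """Keep the maximum height per cell, a conservative choice for obstacles.
          grid: 2D float array; points: iterable of (x, y, z) in map coordinates."""
          for x, y, z in points:
              i = int((x - origin[0]) / cell)
              j = int((y - origin[1]) / cell)
              if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
                  grid[i, j] = max(grid[i, j], z)
          return grid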

  9. RGB–D terrain perception and dense mapping for legged robots

    Directory of Open Access Journals (Sweden)

    Belter Dominik

    2016-03-01

    Full Text Available This paper addresses the issues of unstructured terrain modeling for the purpose of navigation with legged robots. We present an improved elevation grid concept adapted to the specific requirements of a small legged robot with limited perceptual capabilities. We propose an extension of the elevation grid update mechanism that incorporates a formal treatment of spatial uncertainty. Moreover, this paper presents uncertainty models for a structured-light RGB-D sensor and a stereo vision camera used to produce a dense depth map. The model for the uncertainty of the stereo vision camera is based on uncertainty propagation from calibration, through the undistortion and rectification algorithms, allowing calculation of the uncertainty of measured 3D point coordinates. The proposed uncertainty models were used for the construction of a terrain elevation map using the Videre Design STOC stereo vision camera and Kinect-like range sensors. We provide experimental verification of the proposed mapping method, and a comparison with another recently published terrain mapping method for walking robots.
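
    One first-order ingredient of such a stereo uncertainty model can be sketched directly: propagating disparity noise through the triangulation equation z = f*b/d gives a depth standard deviation that grows quadratically with range. The function below is an illustrative simplification, not the paper's full calibration-to-rectification propagation; the default disparity noise value is an assumption.

```python
import numpy as np

def stereo_depth_sigma(z, focal_px, baseline_m, sigma_disp_px=0.5):
    """First-order uncertainty of stereo depth z = f*b/d.

    Propagating disparity noise sigma_d through z(d) = f*b/d gives
    sigma_z = z**2 / (f*b) * sigma_d, i.e. depth uncertainty grows
    quadratically with range -- exactly the effect an elevation-grid
    update with a formal uncertainty treatment must account for.
    """
    return (np.asarray(z) ** 2) / (focal_px * baseline_m) * sigma_disp_px
```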

  10. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been under way at Kinki University for more than ten years and currently proceed as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done on (1) users' requirements for high-speed multi-framing and video cameras, via questionnaires and hearings, and (2) the current availability of cameras of this sort, via searches of journals and websites. Both support the need for a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a 1-million-fps video camera based on an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, design of a prototype ISIS is going on and, hopefully, it will be fabricated in the near future. Epoch-making cameras developed by others in the history of high-speed video are also briefly reviewed.

  11. Stereo vision techniques for telescience

    Science.gov (United States)

    Hewett, S.

    1990-02-01

    The Botanic Experiment is one of the pilot experiments in the Telescience Test Bed program at the ESTEC research and technology center of the European Space Agency. The aim of the Telescience Test Bed is to develop the techniques required by an experimenter using a ground-based workstation for remote control, monitoring, and modification of an experiment operating on a space platform. The purpose of the Botanic Experiment is to examine the growth of seedlings under various illumination conditions with a video camera from a number of viewpoints throughout the duration of the experiment. This paper describes the Botanic Experiment and the points addressed in developing a stereo vision software package to extract quantitative information about the seedlings from the recorded video images.

  12. Identifying and tracking pedestrians based on sensor fusion and motion stability predictions.

    Science.gov (United States)

    Musleh, Basam; García, Fernando; Otamendi, Javier; Armingol, José Maria; de la Escalera, Arturo

    2010-01-01

    The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle.

  13. Identifying and Tracking Pedestrians Based on Sensor Fusion and Motion Stability Predictions

    Directory of Open Access Journals (Sweden)

    Arturo de la Escalera

    2010-08-01

    Full Text Available The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle.
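
    The u-v disparity representation used by the vision subsystem in the two records above can be sketched compactly: for each image row (v-disparity) or column (u-disparity), a histogram of disparity values is accumulated, so the ground plane projects to a slanted line and upright obstacles such as pedestrians to near-vertical segments. A minimal numpy version follows, with assumed conventions (invalid disparities negative); it illustrates the representation, not the paper's detector.

```python
import numpy as np

def uv_disparity(disp, max_d=128):
    """Build u- and v-disparity histograms from a dense disparity map.

    disp : (H, W) disparity map; pixels with disp < 0 are treated as invalid.
    Returns u_disp (max_d, W) and v_disp (H, max_d) count images.
    """
    H, W = disp.shape
    valid = disp >= 0
    d = np.where(valid, disp, 0).astype(int).clip(0, max_d - 1)
    v_disp = np.zeros((H, max_d), dtype=np.int32)  # per-row disparity histograms
    u_disp = np.zeros((max_d, W), dtype=np.int32)  # per-column disparity histograms
    for r in range(H):
        v_disp[r] = np.bincount(d[r][valid[r]], minlength=max_d)[:max_d]
    for c in range(W):
        u_disp[:, c] = np.bincount(d[:, c][valid[:, c]], minlength=max_d)[:max_d]
    return u_disp, v_disp
```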

  14. Comparison of Stereo-PIV and Plenoptic-PIV Measurements on the Wake of a Cylinder in NASA Ground Test Facilities.

    Science.gov (United States)

    Fahringer, Timothy W.; Thurow, Brian S.; Humphreys, William M., Jr.; Bartram, Scott M.

    2017-01-01

    A series of comparison experiments have been performed using a single-camera plenoptic PIV measurement system to ascertain the system's performance capabilities in terms of suitability for use in NASA ground test facilities. A proof-of-concept demonstration was performed in the Langley Advanced Measurements and Data Systems Branch 13-inch (33-cm) Subsonic Tunnel to examine the wake of a series of cylinders at a Reynolds number of 2500. Accompanying the plenoptic-PIV measurements were an ensemble of complementary stereo-PIV measurements. The stereo-PIV measurements were used as a truth measurement to assess the ability of the plenoptic-PIV system to capture relevant 3D/3C flow field features in the cylinder wake. Six individual tests were conducted as part of the test campaign using three different cylinder diameters mounted in two orientations in the tunnel test section. This work presents a comparison of measurements with the cylinders mounted horizontally (generating a 2D flow field in the x-y plane). Results show that in general the plenoptic-PIV measurements match those produced by the stereo-PIV system. However, discrepancies were observed in extracted profiles of the fluctuating velocity components. It is speculated that spatial smoothing of the vector fields in the stereo-PIV system could account for the observed differences. Nevertheless, the plenoptic-PIV system performed extremely well at capturing the flow field features of interest and can be considered a viable alternative to traditional PIV systems in smaller NASA ground test facilities with limited optical access.

  15. Ultrahigh-speed, high-sensitivity color camera with 300,000-pixel single CCD

    Science.gov (United States)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Ohtake, H.; Kurita, T.; Tanioka, K.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Etoh, T. G.

    2007-01-01

    We have developed an ultrahigh-speed, high-sensitivity portable color camera with a new 300,000-pixel single CCD. The 300,000-pixel CCD, which has four times the number of pixels of our initial model, was developed by seamlessly joining two 150,000-pixel CCDs. A green-red-green-blue (GRGB) Bayer filter is used to realize a color camera with the single-chip CCD. The camera is capable of ultrahigh-speed video recording at up to 1,000,000 frames/sec, and is small enough to be handheld. We also developed a technology for dividing the CCD output signal to enable parallel, high-speed readout and recording in external memory; this makes possible long, continuous recording at up to 1,000 frames/sec. In an experiment, video footage was captured at an athletics meet; because of the high-speed shooting, even the detailed movements of athletes' muscles were captured. This camera can capture clear slow-motion videos, enabling previously impossible live footage to be imaged for various TV broadcasting programs.

  16. Pre-selecting muon events in the camera server of the ASTRI telescopes for the Cherenkov Telescope Array

    Science.gov (United States)

    Maccarone, Maria C.; Mineo, Teresa; Capalbi, Milvia; Conforti, Vito; Coffaro, Martina

    2016-08-01

    The Cherenkov Telescope Array (CTA) represents the next generation of ground-based observatories for very high energy gamma-ray astronomy. The CTA will consist of two arrays at two different sites, one in the northern and one in the southern hemisphere. The current CTA design foresees, at the southern site, the installation of many tens of imaging atmospheric Cherenkov telescopes of three different classes, namely large, medium, and small, defined in relation to their mirror area; the northern hemisphere array would consist of a few tens of the two larger telescope types. The telescopes will be equipped with cameras composed either of photomultipliers or silicon photomultipliers, and with different trigger and read-out electronics. In such a scenario, several different methods will be used for the telescopes' calibration. Nevertheless, the optical throughput of any CTA telescope, independently of its type, can be calibrated by analyzing the characteristic image produced by local, highly energetic atmospheric muons, which induce the emission of Cherenkov light that is imaged as a ring onto the focal plane if their impact point is relatively close to the telescope's optical axis. Large-sized telescopes would be able to detect useful muon events under stereo coincidence, and such stereo muon events will be directly addressed to the central CTA array data acquisition pipeline to be analyzed. For the medium- and small-sized telescopes, due to their smaller mirror area and large inter-telescope distance, the stereo coincidence rate will tend to zero; nevertheless, muon events will be detected by single telescopes, which must therefore be able to identify them as possible useful calibration candidates, even if no stereo coincidence is available. This is the case for the ASTRI telescopes, proposed as pre-production units of the small-size array of the CTA, which are able to detect muon events during regular data taking without requiring any dedicated trigger. We present two fast

  17. A flexible calibration method for laser displacement sensors based on a stereo-target

    International Nuclear Information System (INIS)

    Zhang, Jie; Sun, Junhua; Liu, Zhen; Zhang, Guangjun

    2014-01-01

    Laser displacement sensors (LDSs) are widely used in online measurement owing to their non-contact nature, high measurement speed, etc. However, existing calibration methods for LDSs based on the traditional triangulation measurement model are time-consuming and tedious to operate. In this paper, a calibration method for LDSs based on a vision measurement model of the LDS is presented. According to the constraint relationships of the model parameters, the calibration is implemented by freely moving a stereo-target at least twice in the field of view of the LDS. Both simulation analyses and real experiments were conducted. Experimental results demonstrate that the calibration method achieves an accuracy of 0.044 mm within a measurement range of about 150 mm. Compared to traditional calibration methods, the proposed method places no special limitation on the relative position of the LDS and the target. The linearity approximation of the measurement model is not needed in the calibration, and thus the measurement range is not limited to the linear range. The calibration for the LDS is easy and quick to implement, and the method can be applied in wider fields. (paper)

  18. WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves

    Science.gov (United States)

    Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise

    2017-10-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, many details are difficult for a practitioner to master, so the implementation of a sea-waves 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching, and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an open-source stereo processing pipeline for sea-waves 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques, both on the disparity map and on the produced point cloud, to remove the vast majority of erroneous points that can naturally arise while analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step-by-step and demonstrated on real datasets acquired at sea.
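
    For readers unfamiliar with the OpenCV machinery such a pipeline rests on, the following is a minimal sketch of the dense-stereo stage only. The parameters, file names, and the precomputed rectification matrix Q are illustrative assumptions; WASS itself adds extrinsic calibration, sea-plane estimation, and filtering around this core.

```python
import cv2
import numpy as np

# Hypothetical rectified grayscale pair from a calibrated sea-surface rig.
left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching, the kind of consolidated OpenCV routine a
# pipeline like WASS builds on (parameter values here are illustrative).
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,                # smoothness penalties for small/large jumps
    P2=32 * 5 * 5,
    uniquenessRatio=10,
    speckleWindowSize=100,       # small-blob filtering on the disparity map
    speckleRange=2,
)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px

# Back-project to a point cloud using the rectified projective geometry Q
# (normally produced by stereoRectify; loaded here as an assumed file).
Q = np.load("Q.npy")
points = cv2.reprojectImageTo3D(disp, Q)
cloud = points[disp > 0]         # keep only pixels with a valid disparity
```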

  19. A Framework for Obstacles Avoidance of Humanoid Robot Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2013-04-01

    Full Text Available In this paper, we propose a framework for a multiple moving obstacles avoidance strategy using stereo vision for a humanoid robot in an indoor environment. We assume that this humanoid robot is used as a service robot to deliver a cup to a customer from a starting point to a destination point. We have successfully developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles, and to initiate a maneuver. A group of people who are walking will be tracked as multiple moving obstacles. A predefined maneuver to avoid obstacles is applied to the robot because of the limited view angle of the stereo camera for detecting multiple obstacles. The contribution of this research is a new method for a multiple moving obstacles avoidance strategy with a Bayesian approach using stereo vision, based on the direction and speed of obstacles. Depth estimation is used to obtain the distance between the obstacles and the robot. We present the results of experiments with the humanoid robot, called Gatotkoco II, which uses our proposed method, and evaluate its performance. The proposed moving obstacles avoidance strategy was tested empirically and proved effective for the humanoid robot.

  20. Sensor Data Fusion

    DEFF Research Database (Denmark)

    Plascencia, Alfredo; Stepán, Petr

    2006-01-01

    The main contribution of this paper is to present a sensor fusion approach to scene environment mapping as part of a Sensor Data Fusion (SDF) architecture. This approach combines sonar-array readings with stereo vision readings. Sonar readings are interpreted using probability density functions...

  1. Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks

    Science.gov (United States)

    Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min

    2015-10-01

    Vehicle positioning has been the subject of extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the position of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information, such as speed, lane changes, the driver's condition, etc., through optical wireless links with neighboring vehicles. Thus, the position of a target vehicle too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation in CVP. The proposed neural-network-based technique can estimate the target vehicle position from only two image points of target vehicles using stereo vision; for this, we use the rear LEDs on target vehicles as image points. We show from simulation results that our neural-network-based method achieves better accuracy than the computer-vision method.

  2. Alcohol sensor based on single-mode-multimode-single-mode fiber structure

    Science.gov (United States)

    Mefina Yulias, R.; Hatta, A. M.; Sekartedjo, Sekartedjo

    2016-11-01

    An alcohol sensor based on a single-mode-multimode-single-mode (SMS) fiber structure is proposed to sense alcohol concentration in alcohol-water mixtures. The proposed sensor uses refractive index sensing as its sensing principle. The fabricated SMS fiber structure had a multimode section length of 40 m. With an input power of -6 dBm at a wavelength of 1550 nm, the proposed sensor showed a good response, with a sensitivity of 1.983 dB per % v/v over a measurement range of 0-5% v/v and a measurement span of 0.5% v/v.

  3. Long-term tracking of multiple interacting pedestrians using a single camera

    CSIR Research Space (South Africa)

    Keaikitse, M

    2014-11-01

    Full Text Available Closed-circuit cameras are becoming widespread and prevalent in cities and towns around the world, indicating that surveillance is an important issue. This paper presents a system for long-term tracking of multiple interacting pedestrians using a single camera, in which pedestrians who leave and later re-enter the scene are re-identified and their tracks extended. Standard, publicly available data sets are used to test the system.

  4. Passive perception system for day/night autonomous off-road navigation

    Science.gov (United States)

    Rankin, Arturo L.; Bergh, Charles F.; Goldberg, Steven B.; Bellutta, Paolo; Huertas, Andres; Matthies, Larry H.

    2005-05-01

    Passive perception of terrain features is a vital requirement for military-related unmanned autonomous vehicle operations, especially under electromagnetic signature management conditions. As a member of Team Raptor, the Jet Propulsion Laboratory developed a self-contained passive perception system under the DARPA-funded PerceptOR program. An environmentally protected forward-looking sensor head was designed and fabricated in-house to straddle an off-the-shelf pan-tilt unit. The sensor head contained three color cameras for multi-baseline daytime stereo ranging, a pair of cooled mid-wave infrared cameras for nighttime stereo ranging, and supporting electronics to synchronize the captured imagery. Narrow-baseline stereo provided improved range data density in cluttered terrain, while wide-baseline stereo provided more accurate ranging for operation at higher speeds in relatively open areas. The passive perception system processed stereo images and output, over a local area network, terrain maps containing elevation, terrain type, and detected hazards. A novel software architecture was designed and implemented to distribute the data processing on a 533 MHz quad 7410 PowerPC single-board computer under the VxWorks real-time operating system. This architecture, which is general enough to operate on N processors, has subsequently been tested on Pentium-based processors under Windows and Linux, and on a Sparc-based processor under Unix. The passive perception system was operated during FY04 PerceptOR program evaluations at Fort A. P. Hill, Virginia, and Yuma Proving Ground, Arizona. This paper discusses the Team Raptor passive perception system hardware and software design, implementation, and performance, and describes a road map to faster and improved passive perception.

  5. Tank selection for Light Duty Utility Arm (LDUA) system hot testing in a single shell tank

    Energy Technology Data Exchange (ETDEWEB)

    Bhatia, P.K.

    1995-01-31

    The purpose of this report is to recommend a single shell tank in which to hot test the Light Duty Utility Arm (LDUA) for the Tank Waste Remediation System (TWRS) in Fiscal Year 1996. The LDUA is designed to utilize a 12 inch riser. During hot testing, the LDUA will deploy two end effectors (a High Resolution Stereoscopic Video Camera System and a Still/Stereo Photography System mounted on the end of the arm's tool interface plate). In addition, three other systems (an Overview Video System, an Overview Stereo Video System, and a Topographic Mapping System) will be independently deployed and tested through 4 inch risers.

  6. Tank selection for Light Duty Utility Arm (LDUA) system hot testing in a single shell tank

    International Nuclear Information System (INIS)

    Bhatia, P.K.

    1995-01-01

    The purpose of this report is to recommend a single shell tank in which to hot test the Light Duty Utility Arm (LDUA) for the Tank Waste Remediation System (TWRS) in Fiscal Year 1996. The LDUA is designed to utilize a 12 inch riser. During hot testing, the LDUA will deploy two end effectors (a High Resolution Stereoscopic Video Camera System and a Still/Stereo Photography System mounted on the end of the arm's tool interface plate). In addition, three other systems (an Overview Video System, an Overview Stereo Video System, and a Topographic Mapping System) will be independently deployed and tested through 4 inch risers.

  7. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panorama vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built around a special combined fish-eye lens module, capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single piece of vision equipment and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model, and parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments, and attitude estimation are some of the applications that will benefit from PSSV.

  8. Parallelised photoacoustic signal acquisition using a Fabry-Perot sensor and a camera-based interrogation scheme

    Science.gov (United States)

    Saeb Gilani, T.; Villringer, C.; Zhang, E.; Gundlach, H.; Buchmann, J.; Schrader, S.; Laufer, J.

    2018-02-01

    Tomographic photoacoustic (PA) images acquired using a Fabry-Perot (FP) based scanner offer high resolution and image fidelity but can entail long acquisition times due to the need for raster scanning. To reduce the acquisition times, a parallelised camera-based PA signal detection scheme was developed. The scheme is based on using an sCMOS camera and FPI sensors with high homogeneity of optical thickness. PA signals were acquired using the camera-based setup and the signal-to-noise ratio (SNR) was measured. A comparison is made between the SNR of PA signals detected using (1) a photodiode in a conventional raster-scanning detection scheme and (2) an sCMOS camera in the parallelised detection scheme. The results show that the parallelised interrogation scheme has the potential to provide high-speed PA imaging.

  9. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. Applying a ToF camera to an AGV is a suitable approach to autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost; after calibration and ground testing, the camera is mounted on and integrated with the Pioneer mobile robot and used to extract information about obstacles. The workspace is a two-dimensional (2D) world map divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data are used to populate traversable areas and obstacles in a grid of cells of suitable size. These camera data are converted into Cartesian coordinates for entry into the workspace grid map. A more suitable camera mounting angle is needed and is chosen by analysing the camera's performance characteristics, such as pixel detection, the detection rate, the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface; this mounting angle is recommended to be half the vertical field of view (FoV) of the PMD camera. A series of still and moving tests were conducted on the AGV to verify correct sensor operation, which show that the postulated application of the ToF camera in the AGV is not straightforward. Finally, to stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are implemented in a real-time experiment.
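
    The conversion from camera depth data to workspace grid cells described above can be sketched as follows. The robot-frame conventions, cell size, and height thresholds are assumptions for illustration rather than the authors' parameters.

```python
import numpy as np

def depth_to_grid(points_xyz, cell=0.10, extent=10.0, h_min=0.05, h_max=1.5):
    """Project Cartesian ToF points into a 2D occupancy grid.

    points_xyz : (N, 3) points in the robot frame (x forward, y left, z up),
                 e.g. PMD depth pixels already converted to Cartesian
                 coordinates. Cells hit by points whose height falls within
                 [h_min, h_max] are marked as obstacles; lower returns are
                 treated as traversable ground.
    """
    n = int(2 * extent / cell)
    grid = np.zeros((n, n), dtype=np.uint8)   # 0 = free/unknown, 1 = obstacle
    x, y, z = points_xyz.T
    keep = (np.abs(x) < extent) & (np.abs(y) < extent) & (z > h_min) & (z < h_max)
    i = ((x[keep] + extent) / cell).astype(int).clip(0, n - 1)
    j = ((y[keep] + extent) / cell).astype(int).clip(0, n - 1)
    grid[i, j] = 1
    return grid
```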

  10. Droplet deposition measurement with high-speed camera and novel high-speed liquid film sensor with high spatial resolution

    International Nuclear Information System (INIS)

    Damsohn, M.; Prasser, H.-M.

    2011-01-01

    Highlights: → Development of a sensor for time- and space-resolved droplet deposition in annular flow. → Experimental measurement of droplet deposition in horizontal annular flow, comparing readings of the sensor with images of a high-speed camera as droplets deposit onto the liquid film. → Self-adaptive signal filter based on autoregression to separate droplet impacts in the sensor signal from waves of the liquid film. - Abstract: A sensor based on the electrical conductance method is presented for the measurement of dynamic liquid films in two-phase flow. The so-called liquid film sensor consists of a matrix with 64 x 16 measuring points, a spatial resolution of 3.12 mm, and a time resolution of 10 kHz. Experiments in a horizontal co-current air-water film flow were conducted to test the capability of the sensor to detect droplet deposition from the gas core onto the liquid film. The experimental setup is equipped with the liquid film sensor and a high-speed camera (HSC) recording the droplet deposition simultaneously at a sampling rate of 10 kHz. In some experiments, the recognition of droplet deposition on the sensor is enhanced by marking the droplets with higher electrical conductivity. The comparison between the HSC and the sensor shows that the sensor captures droplet deposition above a certain droplet diameter. The impacts of droplet deposition can be separated from the wavy structures, i.e., conductivity changes of the liquid film, using a filter algorithm based on autoregression. The results will be used to locally measure droplet deposition, e.g., in the proximity of spacers in a subchannel geometry.

  11. JAVA Stereo Display Toolkit

    Science.gov (United States)

    Edmonds, Karina

    2008-01-01

    This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It also supports anaglyph and special stereo hardware using the same API (application-program interface), and has the ability to simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that simply handles the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, a 3D cursor, or overlays, all of which can be built using this toolkit.
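
    The simulated color-stereo anaglyph mode described above is easy to illustrate outside Java. The following numpy sketch shows only the band combination (red from the left image, green/blue from the right); it is a language-agnostic illustration of the technique, not the toolkit's actual API, and assumes RGB channel ordering.

```python
import numpy as np

def anaglyph(left_rgb, right_rgb):
    """Color-anaglyph composition: red band of the left image combined
    with the green/blue bands of the right image. Inputs are (H, W, 3)
    arrays in RGB order; view the result with red/blue (red-left) glasses.
    """
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # take the red channel from the left eye
    return out
```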

  12. STEREO-IMPACT Education and Public Outreach: Sharing STEREO Science

    Science.gov (United States)

    Craig, N.; Peticolas, L. M.; Mendez, B. J.

    2005-12-01

    The Solar TErrestrial RElations Observatory (STEREO) is scheduled for launch in Spring 2006. STEREO will study the Sun with two spacecraft in orbit around it and on either side of Earth. The primary science goal is to understand the nature and consequences of Coronal Mass Ejections (CMEs). Despite their importance, scientists don't fully understand the origin and evolution of CMEs, nor their structure or extent in interplanetary space. STEREO's unique 3-D images of the structure of CMEs will enable scientists to determine their fundamental nature and origin. We will discuss the Education and Public Outreach (E/PO) program for the In-situ Measurement of Particles And CME Transients (IMPACT) suite of instruments aboard the two spacecraft and give examples of upcoming activities, including NASA's Sun-Earth Day events, which are scheduled to coincide with a total solar eclipse in March. This event offers a good opportunity to engage the public in STEREO science, because an eclipse allows one to see the solar corona, from where CMEs erupt. STEREO's connection to space weather lends itself to close partnerships with the Sun-Earth Connection Education Forum (SECEF), The Exploratorium, and UC Berkeley's Center for New Music and Audio Technologies to develop informal science programs for science centers, museum visitors, and the public in general. We will also discuss our teacher workshops, locally in California and at annual conferences such as those of the National Science Teachers Association. Such workshops often focus on magnetism and its connection to CMEs and Earth's magnetic field, leading to the questions STEREO scientists hope to answer. The importance of partnerships and coordination in an instrument E/PO program that is part of a bigger NASA mission with many instrument suites and many PIs will be emphasized. The Education and Outreach Program is funded by NASA's SMD.

  13. High Resolution Stereo Camera (HRSC) on Mars Express - a decade of PR/EO activities at Freie Universität Berlin

    Science.gov (United States)

    Balthasar, Heike; Dumke, Alexander; van Gasselt, Stephan; Gross, Christoph; Michael, Gregory; Musiol, Stefanie; Neu, Dominik; Platz, Thomas; Rosenberg, Heike; Schreiner, Björn; Walter, Sebastian

    2014-05-01

    The High Resolution Stereo Camera (HRSC) experiment on the Mars Express mission has been in orbit around Mars since 2003. The first images were sent to Earth on January 14th, 2004. Goal-oriented HRSC data dissemination and the transparent presentation of the associated work and results are the main aspects that have contributed to the success of the experiment in public perception. The Planetary Sciences and Remote Sensing Group at Freie Universität Berlin (FUB) offers both interactive web-based data access and browse/download options for HRSC press products [www.fu-berlin.de/planets]. Close collaboration with exhibitors as well as print and digital media representatives allows for regular and targeted dissemination of, e.g., conventional imagery, orbital/synthetic surface epipolar images, video footage, and high-resolution displays. On a monthly basis we prepare press releases in close collaboration with the European Space Agency (ESA) and the German Aerospace Center (DLR) [http://www.geo.fu-berlin.de/en/geol/fachrichtungen/planet/press/index.html]. A release comprises panchromatic, colour, anaglyph, and perspective views of a scene taken from an HRSC image of the Martian surface. In addition, a context map and descriptive texts in English and German are provided. More sophisticated press releases include elaborate animations and simulated flights over the Martian surface, perspective views of stereo data combined with colour and high resolution, mosaics, and perspective views of data mosaics. Altogether, 970 high-quality PR products and 15 movies were created at FUB during the last decade and published via FUB/DLR/ESA platforms. We support educational outreach events as well as permanent and special exhibitions. Examples are the yearly "Science Fair", where special programs for kids are offered, and the exhibition "Mars Mission and Vision", which is on tour through 20 German towns until 2015, showing 3-D movies, surface models, and images of the HRSC

  14. Application of colon capsule endoscopy (CCE) to evaluate the whole gastrointestinal tract: a comparative study of single-camera and dual-camera analysis

    Directory of Open Access Journals (Sweden)

    Remes-Troche JM

    2013-09-01

    Full Text Available José María Remes-Troche,1 Victoria Alejandra Jiménez-García,2 Josefa María García-Montes,2 Pedro Hergueta-Delgado,2 Federico Roesch-Dietlen,1 Juan Manuel Herrerías-Gutiérrez2 1Digestive Physiology and Motility Lab, Medical Biological Research Institute, Universidad Veracruzana, Veracruz, México; 2Gastroenterology Service, Virgen Macarena University Hospital, Seville, Spain Background and study aims: Colon capsule endoscopy (CCE) was developed for the evaluation of colorectal pathology. In this study, our aim was to assess whether a dual-camera analysis using CCE allows better evaluation of the whole gastrointestinal (GI) tract compared to a single-camera analysis. Patients and methods: We included 21 patients (12 males, mean age 56.20 years) submitted for a CCE examination. After standard colon preparation, the colon capsule endoscope (PillCam Colon™) was swallowed after reinitiation from its “sleep” mode. Four physicians performed the analysis: two reviewed both video streams at the same time (dual-camera analysis); one analyzed images from one side of the device (“camera 1”); and the other reviewed the opposite side (“camera 2”). We compared the numbers of findings from different parts of the entire GI tract and the level of agreement among reviewers. Results: A complete evaluation of the GI tract was possible in all patients. Dual-camera analysis provided 16% and 5% more findings compared to camera 1 and camera 2 analysis, respectively. Overall agreement was 62.7% (kappa = 0.44, 95% CI: 0.373–0.510). Esophageal (kappa = 0.611) and colorectal (kappa = 0.595) findings had a good level of agreement, while small bowel (kappa = 0.405) showed moderate agreement. Conclusion: The use of dual-camera analysis with CCE for the evaluation of the GI tract is feasible and detects more abnormalities when compared with single-camera analysis. Keywords: capsule endoscopy, colon, gastrointestinal tract, small bowel

  15. Study of the Integration of LIDAR and Photogrammetric Datasets by in Situ Camera Calibration and Integrated Sensor Orientation

    Science.gov (United States)

    Mitishita, E.; Costa, F.; Martins, M.

    2017-05-01

    Photogrammetric and Lidar datasets should be in the same mapping or geodetic frame to be used simultaneously in an engineering project. Nowadays, direct sensor orientation is a common procedure used in simultaneous photogrammetric and Lidar surveys. Although direct sensor orientation technologies provide a high degree of process automation due to GNSS/INS technologies, the accuracies of the results obtained from photogrammetric and Lidar surveys depend on the quality of a group of parameters that accurately models the conditions of the system at the moment the job is performed. This paper presents a study performed to verify the importance of in situ camera calibration and Integrated Sensor Orientation without control points for increasing the accuracy of photogrammetric and Lidar dataset integration. The horizontal and vertical accuracies of photogrammetric and Lidar dataset integration by photogrammetric procedure improved significantly when the Integrated Sensor Orientation (ISO) approach was performed using Interior Orientation Parameter (IOP) values estimated from the in situ camera calibration. The horizontal and vertical accuracies, estimated by the Root Mean Square Error (RMSE) of the 3D discrepancies from the Lidar check points, improved by around 37% and 198%, respectively.

  16. WASS: an open-source stereo processing pipeline for sea waves 3D reconstruction

    Science.gov (United States)

    Bergamasco, Filippo; Benetazzo, Alvise; Torsello, Andrea; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro

    2017-04-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community. In fact, recent advances in both computer vision algorithms and CPU processing power now allow the study of spatio-temporal wave fields with unprecedented accuracy, especially at small scales. Even if simple in theory, many details are difficult for a practitioner to master, so the implementation of a 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching, and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the steps from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS, a completely open-source stereo processing pipeline for sea-waves 3D reconstruction, available at http://www.dais.unive.it/wass/. Our tool completely automates the recovery of dense point clouds from stereo images by providing three main functionalities. First, WASS can automatically recover the extrinsic parameters of the stereo rig (up to scale), so that no delicate calibration has to be performed in the field. Second, WASS implements a fast 3D dense stereo reconstruction procedure, so that an accurate 3D point cloud can be computed from each stereo pair. We rely on the well-consolidated OpenCV library both for image stereo rectification and for disparity map recovery. Lastly, a set of 2D and 3D filtering techniques, both on the disparity map and on the produced point cloud, is implemented to remove the vast majority of erroneous points that can naturally arise while analyzing the optically complex nature of the water surface (examples are sun glares, large white-capped areas, fog and water aerosol, etc.). Developed to be as fast as possible, WASS

  17. Three-dimensional sensing methodology combining stereo vision and phase-measuring profilometry based on dynamic programming

    Science.gov (United States)

    Lee, Hyunki; Kim, Min Young; Moon, Jeon Il

    2017-12-01

    Phase-measuring profilometry and moiré methodology have been widely applied to the three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation known as the correspondence, or 2π-ambiguity, problem. Although a sensing method that combines well-known stereo vision with the phase-measuring profilometry (PMP) technique has been developed to overcome this problem, it still requires definite improvement in sensing speed and measurement accuracy. We propose a dynamic programming-based stereo PMP method to acquire more reliable depth information in a relatively short time. The proposed method efficiently fuses information from two stereo sensors in terms of phase and intensity simultaneously, based on a newly defined cost function for dynamic programming. In addition, the important parameters are analyzed from the viewpoint of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of important hardware and software parameters related to the measurement performance, and to verify the method's efficiency, accuracy, and sensing speed, a series of experimental tests was performed with various objects and sensor configurations.
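
    To make the idea of a dynamic-programming cost that mixes phase and intensity concrete, here is a compact single-scanline sketch under simplifying assumptions (rectified images, a wrapped-phase data term, an L1 smoothness penalty on disparity changes). It illustrates the general technique, not the paper's cost function; all weights are assumed values.

```python
import numpy as np

def dp_scanline(phase_l, phase_r, int_l, int_r, max_d=32, w=0.5, smooth=0.1):
    """Dynamic-programming match of one rectified scanline.

    The data cost of assigning disparity d to column x combines the
    wrapped-phase difference and the intensity difference; a smoothness
    penalty on disparity changes is minimized with a Viterbi-style DP.
    """
    n = len(phase_l)
    cost = np.full((n, max_d), np.inf)
    for d in range(max_d):
        xs = np.arange(d, n)
        dphi = np.angle(np.exp(1j * (phase_l[xs] - phase_r[xs - d])))  # wrapped
        cost[xs, d] = np.abs(dphi) + w * np.abs(int_l[xs] - int_r[xs - d])

    # Forward pass: acc[x, d] = cost[x, d] + min_d'(acc[x-1, d'] + smooth*|d-d'|)
    acc = cost.copy()
    penalty = smooth * np.abs(np.arange(max_d)[:, None] - np.arange(max_d)[None, :])
    for x in range(1, n):
        acc[x] += (acc[x - 1][None, :] + penalty).min(axis=1)

    # Backtrace: recompute the transition instead of storing pointers.
    disp = np.zeros(n, dtype=int)
    disp[-1] = acc[-1].argmin()
    for x in range(n - 2, -1, -1):
        disp[x] = (acc[x] + smooth * np.abs(np.arange(max_d) - disp[x + 1])).argmin()
    return disp
```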

  18. Spirit Near 'Stapledon' on Sol 1802 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site: left-eye and right-eye views of the color stereo pair for PIA11781] NASA's Mars Exploration Rover Spirit used its navigation camera for the images assembled into this stereo, full-circle view of the rover's surroundings during the 1,802nd Martian day, or sol (January 26, 2009), of Spirit's mission on the surface of Mars. South is at the center; north is at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Spirit had driven down off the low plateau called 'Home Plate' on Sol 1782 (January 6, 2009) after spending 12 months on a north-facing slope on the northern edge of Home Plate. The position on the slope (at about the 9 o'clock position in this view) tilted Spirit's solar panels toward the sun, enabling the rover to generate enough electricity to survive its third Martian winter. Tracks at about the 11 o'clock position of this panorama can be seen leading back to that 'Winter Haven 3' site from the Sol 1802 position about 10 meters (33 feet) away. For scale, the distance between the parallel wheel tracks is about one meter (40 inches). Where the receding tracks bend to the left, a circular pattern resulted from Spirit turning in place at a soil target informally named 'Stapledon' after William Olaf Stapledon, a British philosopher and science-fiction author who lived from 1886 to 1950. Scientists on the rover team suspected that the soil in that area might have a high concentration of silica, resembling a high-silica soil patch discovered east of Home Plate in 2007. Bright material visible in the track furthest to the right was examined with Spirit's alpha particle X-ray spectrometer and found, indeed, to be rich in silica. The team laid plans to drive Spirit from this Sol 1802 location back up

  19. Ultrahigh sensitivity endoscopic camera using a new CMOS image sensor: providing with clear images under low illumination in addition to fluorescent images.

    Science.gov (United States)

    Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio

    2014-11-01

    We developed a new ultrahigh-sensitivity CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology, which successfully integrates two innovative functions: ultrasensitive imaging and advanced fluorescence viewing. Two different experiments were conducted. One was carried out to evaluate the performance of the ultrahigh-sensitivity camera; the other tested the applicability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscope tip to the target was varied, and endoscopic images were taken at each setting for comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the image quality given by the two cameras was quite similar. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescence-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide us with clear images under low illumination, in addition to fluorescence images under high illumination, in the field of laparoscopic surgery.

  20. FieldSAFE

    DEFF Research Database (Denmark)

    Kragh, Mikkel Fly; Christiansen, Peter; Laursen, Morten Stigaard

    2017-01-01

    In this paper, we present a novel multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 hours of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360-degree camera, lidar, and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles, and vegetation. All obstacles have ground truth object labels...

  1. Optimize Etching Based Single Mode Fiber Optic Temperature Sensor

    OpenAIRE

    Ajay Kumar; Dr. Pramod Kumar

    2014-01-01

    This paper presents a description of an etching process for fabricating single-mode optical fiber sensors. The fabrication process demonstrates an optimized etching-based method to fabricate single-mode fiber (SMF) optic sensors at a specified constant time and temperature. We propose a single-mode optical fiber based temperature sensor, where the temperature-sensing region is obtained by etching the cladding diameter over a small length down to a critical value. It is observed that th...

  2. Bayes filter modification for drivability map estimation with observations from stereo vision

    Science.gov (United States)

    Panchenko, Aleksei; Prun, Viktor; Turchenkov, Dmitri

    2017-02-01

    Reconstruction of a drivability map for a moving vehicle is a well-known research topic in applied robotics. Here, creating such a map for an autonomous truck on a generally planar surface containing separate obstacles is considered. The source of measurements for the truck is a calibrated pair of cameras. The stereo system detects and reconstructs several types of objects, such as road borders, other vehicles, pedestrians, and general tall or highly saturated objects (e.g., road cones). To create a robust mapping module, we use a modification of Bayes filtering that introduces some novel techniques for the occupancy-map update step. Specifically, our modified version becomes applicable in the presence of false-positive measurement errors, stereo shading, and obstacle occlusion. We implemented the technique and achieved real-time computation at 15 FPS on an industrial shake-proof PC. Our real-world experiments show the positive effect of the filtering step.
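
    As background for the modification described above, the standard Bayes (log-odds) occupancy update that it builds on can be sketched in a few lines. The parameter values and the hit/miss interface are illustrative assumptions; the paper's contribution lies in how this update step is altered for false positives, stereo shading, and occlusion.

```python
import numpy as np

class OccupancyGrid:
    """Log-odds Bayes filter over a 2D grid.

    Stereo measurements arrive as per-cell hit/miss evidence; a modified
    inverse sensor model (e.g. one that discounts false positives or
    occluded cells) would plug into update().
    """
    def __init__(self, shape, l_hit=0.85, l_miss=-0.4, l_clamp=5.0):
        self.logodds = np.zeros(shape)
        self.l_hit, self.l_miss, self.l_clamp = l_hit, l_miss, l_clamp

    def update(self, hits, misses):
        # hits / misses: boolean masks of cells observed occupied / free
        self.logodds[hits] += self.l_hit
        self.logodds[misses] += self.l_miss
        # clamping keeps the map responsive to changes in the scene
        np.clip(self.logodds, -self.l_clamp, self.l_clamp, out=self.logodds)

    def probability(self):
        return 1.0 / (1.0 + np.exp(-self.logodds))
```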

  3. STUDY OF THE INTEGRATION OF LIDAR AND PHOTOGRAMMETRIC DATASETS BY IN SITU CAMERA CALIBRATION AND INTEGRATED SENSOR ORIENTATION

    Directory of Open Access Journals (Sweden)

    E. Mitishita

    2017-05-01

    Full Text Available Photogrammetric and Lidar datasets should be in the same mapping or geodetic frame to be used simultaneously in an engineering project. Nowadays, direct sensor orientation is a common procedure used in simultaneous photogrammetric and Lidar surveys. Although direct sensor orientation technologies provide a high degree of process automation due to GNSS/INS technologies, the accuracies of the results obtained from photogrammetric and Lidar surveys depend on the quality of a group of parameters that accurately models the conditions of the system at the moment the job is performed. This paper presents a study performed to verify the importance of in situ camera calibration and Integrated Sensor Orientation without control points for increasing the accuracy of photogrammetric and Lidar dataset integration. The horizontal and vertical accuracies of photogrammetric and Lidar dataset integration by photogrammetric procedure improved significantly when the Integrated Sensor Orientation (ISO) approach was performed using Interior Orientation Parameter (IOP) values estimated from the in situ camera calibration. The horizontal and vertical accuracies, estimated by the Root Mean Square Error (RMSE) of the 3D discrepancies from the Lidar check points, improved by around 37% and 198%, respectively.

  4. Dynamic Human Body Modeling Using a Single RGB Camera.

    Science.gov (United States)

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-03-18

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves accuracy comparable to models reconstructed using depth cameras, yet requires neither user interaction nor any dedicated devices, making this method feasible on widely available smartphones.

  5. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study

    International Nuclear Information System (INIS)

    Suenaga, Hideyuki; Tran, Huy Hoang; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Takato, Tsuyoshi

    2015-01-01

    This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer were used to create an integral videography (IV) image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject's anatomic site and its 3D-IV image was displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Accurate registration of the volunteer's anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications. The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users

  6. Infrared Range Sensor Array for 3D Sensing in Robotic Applications

    Directory of Open Access Journals (Sweden)

    Yongtae Do

    2013-04-01

    Full Text Available This paper presents the design and testing of multiple infrared range detectors arranged in a two-dimensional (2D) array. The proposed system can collect sparse three-dimensional (3D) data on objects and surroundings for robotics applications. Three kinds of tasks are considered for the system: detecting obstacles that lie ahead of a mobile robot, sensing the ground profile for the safe navigation of a mobile robot, and sensing the shape and position of an object on a conveyor belt for pickup by a robot manipulator. The developed system is potentially a simple alternative to high-resolution (and expensive) 3D sensing systems, such as stereo cameras or laser scanners. In addition, the system can provide shape information about target objects and surroundings that cannot be obtained using simple ultrasonic sensors. Laboratory prototypes of the system were built with nine infrared range sensors arranged in a 3×3 array, and test results confirmed the validity of the system.

  7. Wide baseline stereo matching based on double topological relationship consistency

    Science.gov (United States)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

    Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo vision matching. A novel scheme is presented, called double topological relationship consistency (DCTR). The combination of double topological configurations includes the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only sets up a more advanced matching model but also discards mismatches by iteratively computing the fitness of the feature matches; owing to its strong invariance to changes in scale, rotation, or illumination across large view changes and even occlusions, it overcomes many problems of traditional methods. Experimental examples are shown where the two cameras have been placed in very different orientations. Also, epipolar geometry can be recovered using RANSAC, possibly the most widely adopted method by far. With this method, we can obtain high-precision correspondences in wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on the image pairs.
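
    The RANSAC recovery of epipolar geometry mentioned above is available directly in OpenCV. The sketch below shows the standard call on putative matches; the file names and threshold values are illustrative assumptions, and the DCTR consistency checks themselves are not part of OpenCV.

```python
import cv2
import numpy as np

# pts_l, pts_r: (N, 2) float arrays of putative wide-baseline matches,
# e.g. the feature correspondences surviving the consistency checks.
pts_l = np.load("matches_left.npy")    # hypothetical saved matches
pts_r = np.load("matches_right.npy")

# Fundamental matrix estimated with RANSAC; inlier_mask flags the
# correspondences consistent with the recovered epipolar geometry.
F, inlier_mask = cv2.findFundamentalMat(
    pts_l, pts_r, cv2.FM_RANSAC,
    ransacReprojThreshold=1.0,   # max distance (px) to the epipolar line
    confidence=0.99,
)
inliers_l = pts_l[inlier_mask.ravel() == 1]
inliers_r = pts_r[inlier_mask.ravel() == 1]
```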

  8. A Photometric Stereo Using Re-Projected Images for Active Stereo Vision System

    Directory of Open Access Journals (Sweden)

    Keonhwa Jung

    2017-10-01

    Full Text Available In optical 3D shape measurement, stereo vision with structured light can measure 3D scan data with high accuracy and is used in many applications, but fine surface detail is difficult to obtain. On the other hand, photometric stereo can capture surface details but has disadvantages: its 3D data accuracy drops and it requires multiple light sources. When the two measurement methods are combined, more accurate 3D scan data and detailed surface features can be obtained at the same time. In this paper, we present a 3D optical measurement technique that uses re-projection of images to implement photometric stereo without an external light source. The 3D scan data are enhanced by combining them with the normal vectors from this photometric stereo method, and the result is evaluated against ground truth.
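
    At the core of any photometric stereo variant is the per-pixel Lambertian model I = L g, where g is the albedo-scaled normal, solved by least squares from three or more images under known lighting directions. A minimal sketch of that classical step (the paper's re-projection scheme for obtaining the lighting is not shown; inputs are assumed given):

        import numpy as np

        def photometric_stereo(images, light_dirs):
            """Per-pixel Lambertian normals and albedo.

            images:     (K, H, W) intensity stack, K >= 3
            light_dirs: (K, 3) unit lighting directions
            Solves light_dirs @ g = I per pixel in the least-squares sense."""
            K, H, W = images.shape
            I = images.reshape(K, -1)                              # (K, H*W)
            G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # (3, H*W)
            albedo = np.linalg.norm(G, axis=0)
            normals = G / np.maximum(albedo, 1e-8)
            return normals.reshape(3, H, W), albedo.reshape(H, W)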

  9. Camera-based speckle noise reduction for 3-D absolute shape measurements.

    Science.gov (United States)

    Zhang, Hao; Kuschmierz, Robert; Czarske, Jürgen; Fischer, Andreas

    2016-05-30

    Simultaneous position and velocity measurements enable absolute 3-D shape measurements of fast rotating objects, for instance for monitoring the cutting process in a lathe. Laser Doppler distance sensors enable simultaneous position and velocity measurements with a single sensor head by evaluating the scattered light signals. However, the superposition of several speckles with equal Doppler frequency but random phase on the photo detector results in an increased velocity and shape uncertainty. In this paper, we present a novel image evaluation method that overcomes the uncertainty limitations due to the speckle effect. For this purpose, the scattered light is detected with a camera instead of single photo detectors. Thus, the Doppler frequency from each speckle can be evaluated separately, and the velocity uncertainty decreases with the square root of the number of camera lines. A reduction of the velocity uncertainty by one order of magnitude is verified by numerical simulations and experimental results. As a result, the measurement uncertainty of the absolute shape is no longer limited by the speckle effect.
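
    The square-root scaling is the familiar behaviour of averaging independent estimates: evaluating each speckle's Doppler frequency on its own camera line and averaging reduces the standard deviation by sqrt(N). A toy numerical check (the frequency and per-line uncertainty are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        true_f = 10_000.0            # Hz, Doppler frequency shared by all speckles
        sigma_1 = 50.0               # Hz, assumed per-line estimation uncertainty

        for n_lines in (1, 16, 64, 256):
            trials = true_f + sigma_1 * rng.standard_normal((10_000, n_lines))
            std_mean = trials.mean(axis=1).std()
            print(f"{n_lines:4d} lines: std {std_mean:6.2f} Hz "
                  f"(1/sqrt(N) predicts {sigma_1 / np.sqrt(n_lines):6.2f})")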

  10. High resolution hybrid optical and acoustic sea floor maps (Invited)

    Science.gov (United States)

    Roman, C.; Inglis, G.

    2013-12-01

    This abstract presents a method for creating hybrid optical and acoustic sea floor reconstructions at centimeter-scale grid resolutions with robotic vehicles. Multibeam sonar and stereo vision are two common sensing modalities with complementary strengths that are well suited for data fusion. We have recently developed an automated two-stage pipeline to create such maps. The steps can be broken down as navigation refinement and map construction. During navigation refinement, a graph-based optimization algorithm is used to align 3D point clouds created with both the multibeam sonar and the stereo cameras. The process combats the typical growth in navigation error, which has a detrimental effect on map fidelity and typically introduces artifacts at small grid sizes. During this process we are able to automatically register local point clouds created by each sensor to themselves and to each other where they overlap in a survey pattern. The process also estimates the sensor offsets, such as heading, pitch and roll, that describe how each sensor is mounted to the vehicle. The end results of the navigation step are a refined vehicle trajectory that ensures the point clouds from each sensor are consistently aligned, and the individual sensor offsets. In the mapping step, grid cells in the map are selectively populated by choosing data points from each sensor in an automated manner. The selection process is designed to pick points that preserve the best characteristics of each sensor and honor specific map quality criteria to reduce outliers and ghosting. In general, the algorithm selects dense 3D stereo points in areas of high texture and point density. In areas where the stereo vision is poor, such as in a scene with low contrast or texture, multibeam sonar points are inserted in the map. This process is automated and results in a hybrid map populated with data from both sensors. Additional cross-modality checks are made to reject outliers in a robust manner.

  11. Research on three-dimensional reconstruction method based on binocular vision

    Science.gov (United States)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision, with broad application prospects in many fields such as aerial mapping, vision navigation, motion analysis and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction and stereo matching. In the calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a semi-global matching algorithm) are adopted respectively, and their performance is compared. After the feature points are matched, the correspondence between matching points and 3D object points can be built using the calibrated camera parameters, which yields the 3D information.
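
    The SGBM step maps directly onto OpenCV's StereoSGBM implementation. A minimal sketch on an already rectified pair (the file names and parameter values are illustrative, not from the paper):

        import cv2

        left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)    # placeholder
        right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)  # rectified pair

        block = 5
        sgbm = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=128,             # must be divisible by 16
            blockSize=block,
            P1=8 * block * block,           # smoothness penalties, as recommended
            P2=32 * block * block,          # in the OpenCV documentation
            uniquenessRatio=10,
            speckleWindowSize=100,
            speckleRange=2,
        )
        disp = sgbm.compute(left, right).astype("float32") / 16.0   # to pixels

        # With focal length f (px) and baseline B (m) from calibration: Z = f * B / disp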

  12. Massive stereo-based DTM production for Mars on cloud computers

    Science.gov (United States)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Xiong, Si-Ting; Putri, A. R. D.; Walter, S. H. G.; Veitch-Michaelis, J.; Yershov, V.

    2018-05-01

    Digital Terrain Model (DTM) creation is essential to improving our understanding of the formation processes of the Martian surface. Although there have been previous demonstrations of open-source or commercial planetary 3D reconstruction software, planetary scientists are still struggling to create good quality DTMs that meet their science needs, especially when there is a requirement to produce a large number of high quality DTMs using "free" software. In this paper, we describe a new open source system that overcomes many of these obstacles, demonstrating results in the context of issues found from experience with several planetary DTM pipelines. We introduce a new fully automated multi-resolution DTM processing chain for NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE) stereo processing, called the Co-registration Ames Stereo Pipeline (ASP) Gotcha Optimised (CASP-GO), based on the open source NASA ASP. CASP-GO employs tie-point based multi-resolution image co-registration, and Gotcha sub-pixel refinement and densification. The CASP-GO pipeline is used to produce planet-wide CTX and HiRISE DTMs that guarantee global geo-referencing compliance with respect to the High Resolution Stereo Camera (HRSC), and thence to the Mars Orbiter Laser Altimeter (MOLA), providing refined stereo matching completeness and accuracy. All software and good quality products introduced in this paper are being made open-source to the planetary science community through collaboration with NASA Ames, the United States Geological Survey (USGS) and the Jet Propulsion Laboratory (JPL) Advanced Multi-Mission Operations System (AMMOS) Planetary Data System (PDS) Pipeline Service (APPS-PDS4), and are browseable and visualisable through the iMars web-based Geographic Information System (webGIS).

  13. Comparison of different "along the track" high resolution satellite stereo-pair for DSM extraction

    Science.gov (United States)

    Nikolakopoulos, Konstantinos G.

    2013-10-01

    The possibility of creating DEMs from stereo pairs rests on the Pythagorean theorem and on the principles of photogrammetry that have been applied to aerial photograph stereo pairs for the last seventy years. The application of these principles to digital satellite stereo data was inherent in the first satellite missions. During recent decades, satellite stereo-pairs were acquired across track on different days (SPOT, ERS etc.). More recently, same-date along-track stereo-data acquisition seems to prevail (Terra ASTER, SPOT5 HRS, Cartosat, ALOS PRISM), as it reduces the radiometric image variations (refractive effects, sun illumination, temporal changes) and thus increases the correlation success rate in image matching. Two of the newest satellite sensors with stereo collection capability are Cartosat and ALOS PRISM. Both acquire stereopairs along the track with a 2.5 m spatial resolution, covering areas of 30 × 30 km. In this study we compare two different along-track satellite stereo-pairs for DSM creation. The first DSM is created from a Cartosat stereopair and the second from an ALOS PRISM triplet. The study area is situated in the Chalkidiki Peninsula, Greece. Both DEMs were created using the same ground control points collected with a differential GPS. After a first check for random or systematic errors, a statistical analysis was done. Points of certified elevation were used to estimate the accuracy of the two DSMs. The elevation difference between the DEMs was calculated; 2D RMSE, correlation and percentile values were also computed, and the results are presented.
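
    The reported comparison statistics are straightforward to compute on co-registered elevation grids. A sketch (the DSM arrays are assumed co-registered, in the same vertical datum, with nodata stored as NaN):

        import numpy as np

        def dsm_stats(dsm, reference):
            """Elevation-difference statistics between two co-registered DSM grids."""
            a, b = dsm.ravel(), reference.ravel()
            ok = np.isfinite(a) & np.isfinite(b)          # skip nodata cells
            a, b = a[ok], b[ok]
            diff = a - b
            return {
                "mean": diff.mean(),
                "rmse": np.sqrt(np.mean(diff ** 2)),
                "corr": np.corrcoef(a, b)[0, 1],
                "le90": np.percentile(np.abs(diff), 90),  # 90th-percentile |error|
            }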

  14. Application of stereo photogrammetric techniques for measuring African Elephants

    Directory of Open Access Journals (Sweden)

    A. J Hall-Martin

    1979-12-01

    Full Text Available Measurements of shoulder height and back length of African elephants were obtained by means of stereo photogrammetric techniques. A pair of Zeiss UMK 10/1318 cameras, mounted on a steel frame on the back of a vehicle, was used to photograph the elephants in the Addo Elephant National Park, Republic of South Africa. Several modifications of the normal photogrammetric procedure applicable to the field situation (e.g., control points) and to the computation of results (e.g., relative orientation) are briefly mentioned. Six elephants were immobilised after being photographed, and the measurements obtained from them agreed within a range of 1 cm to 10 cm with the photogrammetric measurements.

  15. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile exploring. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offer a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem of object replication reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical deduction of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.
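
    The replication follows from basic sampling theory: each colour channel of a Bayer sensor records the hologram on a sparser comb of pixels, and comb sampling replicates the spectrum. A one-dimensional toy demonstration of this effect (not the paper's simulation):

        import numpy as np

        N = 512
        x = np.arange(N)
        signal = np.cos(2 * np.pi * 20 * x / N)       # single spatial frequency

        sampled = signal.copy()
        sampled[1::2] = 0.0                           # keep every 2nd pixel, as one
                                                      # Bayer channel effectively does

        spec = np.abs(np.fft.fft(sampled))
        peaks = np.flatnonzero(spec > 0.25 * spec.max())
        print(peaks)    # original peaks at bins 20 and 492, plus replicas
                        # shifted by N/2: bins 276 and 236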

  16. A Featured-Based Strategy for Stereovision Matching in Sensors with Fish-Eye Lenses for Forest Environments

    Science.gov (United States)

    Herrera, Pedro Javier; Pajares, Gonzalo; Guijarro, Maria; Ruz, José J.; Cruz, Jesús M.; Montes, Fernando

    2009-01-01

    This paper describes a novel feature-based stereovision matching process based on a pair of omnidirectional images of forest stands acquired with a stereovision sensor equipped with fish-eye lenses. The stereo analysis problem consists of the following steps: image acquisition, camera modelling, feature extraction, image matching and depth determination. Once the depths of significant points on the trees are obtained, the growing stock volume can be estimated by considering the geometrical camera modelling, which is the final goal. The key steps are feature extraction and image matching, and this paper is devoted solely to these two steps. At the first stage, a segmentation process extracts the trunks, which are the regions used as features, where each feature is identified through a set of attributes useful for matching. In the second step, the features are matched by applying four well-known matching constraints: epipolar, similarity, ordering and uniqueness. The combination of the segmentation and matching processes for this specific kind of sensor makes the main contribution of the paper. The method is tested with satisfactory results and compared against the human expert criterion. PMID:22303134

  17. Airborne digital sensors: LH Systems ADS 40

    Directory of Open Access Journals (Sweden)

    Marko Pejić

    2004-01-01

    Full Text Available This paper presents the basics of collecting spatial data by remote sensing and by the classical photogrammetric method, and points out the compromise between the two methods offered by a digital aerial camera. LH Systems has produced the ADS 40 digital aerial camera, which offers an entirely new concept of spatial data collection. The camera system provides panchromatic and stereo information using three CCD lines, and optionally five more lines in the multispectral range. The camera scans terrain with a ground sample distance of 25 cm over an area of 300 square kilometers, within a flight time slightly shorter than one hour.

  18. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    Science.gov (United States)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregularly shaped objects with different three-dimensional (3D) appearances are difficult to shape into a customized uniform pattern with current laser machining approaches. A laser galvanometric scanning (LGS) system could be a potential candidate, since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool that is capable of generating 3D-adjusted laser processing paths by measuring the 3D geometry of those irregularly shaped objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which takes advantage of both the stereo vision solution and the conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, these two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  19. A single magnetic nanocomposite cilia force sensor

    KAUST Repository

    Alfadhel, Ahmed; Khan, Mohammed Asadullah; Cardoso, Susana; Kosel, Jürgen

    2016-01-01

    The advancements in fields like robotics and medicine continuously require improvements in sensor devices and more engagement of cooperative sensing technologies. For example, instruments such as tweezers with sensitive force-sensory heads could provide the ability to sense a variety of physical quantities in real time, such as the amount and direction of the force applied or the texture of the gripped object. Force sensors with such abilities could be great solutions toward the development of smart surgical tools. In this work, a unique force sensor that can be integrated at the tips of robotic arms or surgical tools is reported. The force sensor consists of a single bioinspired, permanently magnetic and highly elastic nanocomposite cilium integrated on a magnetic field sensing element. The nanocomposite is prepared from permanent magnetic nanowires incorporated into highly elastic polydimethylsiloxane. We demonstrate the potential of this concept by performing several experiments to show the performance of the force sensor. The developed sensor element has a single cilium 200 μm in diameter with a 1:5 aspect ratio, and shows a detection range up to 1 mN with a sensitivity of 1.6 Ω/mN and a resolution of 31 μN. The simple fabrication process of the sensor allows easy optimization of the sensor performance to meet the needs of different applications.

  20. A single magnetic nanocomposite cilia force sensor

    KAUST Repository

    Alfadhel, Ahmed

    2016-04-20

    The advancements in fields like robotics and medicine continuously require improvements in sensor devices and more engagement of cooperative sensing technologies. For example, instruments such as tweezers with sensitive force-sensory heads could provide the ability to sense a variety of physical quantities in real time, such as the amount and direction of the force applied or the texture of the gripped object. Force sensors with such abilities could be great solutions toward the development of smart surgical tools. In this work, a unique force sensor that can be integrated at the tips of robotic arms or surgical tools is reported. The force sensor consists of a single bioinspired, permanently magnetic and highly elastic nanocomposite cilium integrated on a magnetic field sensing element. The nanocomposite is prepared from permanent magnetic nanowires incorporated into highly elastic polydimethylsiloxane. We demonstrate the potential of this concept by performing several experiments to show the performance of the force sensor. The developed sensor element has a single cilium 200 μm in diameter with a 1:5 aspect ratio, and shows a detection range up to 1 mN with a sensitivity of 1.6 Ω/mN and a resolution of 31 μN. The simple fabrication process of the sensor allows easy optimization of the sensor performance to meet the needs of different applications.

  1. Noninvasive, three-dimensional full-field body sensor for surface deformation monitoring of human body in vivo

    Science.gov (United States)

    Chen, Zhenning; Shao, Xinxing; He, Xiaoyuan; Wu, Jialin; Xu, Xiangyang; Zhang, Jinlin

    2017-09-01

    Noninvasive, three-dimensional (3-D), full-field surface deformation measurements of the human body are important for biomedical investigations. We proposed a 3-D noninvasive, full-field body sensor based on stereo digital image correlation (stereo-DIC) for surface deformation monitoring of the human body in vivo. First, by applying an improved water-transfer printing (WTP) technique to transfer optimized speckle patterns onto the skin, the body sensor was conveniently and harmlessly fabricated directly onto the human body. Then, stereo-DIC was used to achieve 3-D noncontact and noninvasive surface deformation measurements. The accuracy and efficiency of the proposed body sensor were verified and discussed by considering different complexions. Moreover, the fabrication of speckle patterns on human skin, which has always been considered a challenging problem, was shown to be feasible, effective, and harmless as a result of the improved WTP technique. An application of the proposed stereo-DIC-based body sensor was demonstrated by measuring the pulse wave velocity of human carotid artery.

  2. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation tolerant camera which tolerates a total dose of 10^6 - 10^8 rad was developed. In order to develop the radiation tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed on the concept of remote control. Two types of radiation tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  3. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation tolerant camera which tolerates a total dose of 10^6 - 10^8 rad was developed. In order to develop the radiation tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed on the concept of remote control. Two types of radiation tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  4. VPython: Python plus Animations in Stereo 3D

    Science.gov (United States)

    Sherwood, Bruce

    2004-03-01

    Python is a modern object-oriented programming language. VPython (http://vpython.org) is a combination of Python (http://python.org), the Numeric module from LLNL (http://www.pfdubois.com/numpy), and the Visual module created by David Scherer, all of which have been under continuous development as open source projects. VPython makes it easy to write programs that generate real-time, navigable 3D animations. The Visual module includes a set of 3D objects (sphere, cylinder, arrow, etc.), tools for creating other shapes, and support for vector algebra. The 3D renderer runs in a parallel thread, and animations are produced as a side effect of computations, freeing the programmer to concentrate on the physics. Applications include educational and research visualization. In the Fall of 2003 Hugh Fisher at the Australian National University, John Zelle at Wartburg College, and I contributed to a new stereo capability of VPython. By adding a single statement to an existing VPython program, animations can be viewed in true stereo 3D. One can choose several modes: active shutter glasses, passive polarized glasses, or colored glasses (e.g. red-cyan). The talk will demonstrate the new stereo capability and discuss the pros and cons of various schemes for display of stereo 3D for a large audience. Supported in part by NSF grant DUE-0237132.
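
    In classic VPython (the Visual module current at the time of the talk), the single added statement is an assignment to scene.stereo. A minimal sketch using that era's API (an assumption; VPython 7 and GlowScript differ in module name and some details):

        from visual import *           # classic VPython: Python plus the Visual module

        scene.stereo = 'redcyan'       # the single added statement: red-cyan glasses
        # other documented modes include 'crosseyed', 'passive' and 'active'

        # a trivial navigable scene; rendering runs in the parallel thread
        sphere(pos=(0, 0, 0), radius=0.5, color=color.red)
        sphere(pos=(2, 0, 0), radius=0.2, color=color.cyan)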

  5. An automated, open-source pipeline for mass production of digital elevation models (DEMs) from very-high-resolution commercial stereo satellite imagery

    Science.gov (United States)

    Shean, David E.; Alexandrov, Oleg; Moratto, Zachary M.; Smith, Benjamin E.; Joughin, Ian R.; Porter, Claire; Morin, Paul

    2016-06-01

    We adapted the automated, open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline a processing workflow for ˜0.5 m ground sample distance (GSD) DigitalGlobe WorldView-1 and WorldView-2 along-track stereo image data, with an overview of ASP capabilities, an evaluation of ASP correlator options, benchmark test results, and two case studies of DEM accuracy. Output DEM products are posted at ˜2 m. While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We are leveraging these resources to produce dense time series and regional mosaics for the Earth's polar regions.

  6. Stereo matching and view interpolation based on image domain triangulation.

    Science.gov (United States)

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.
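
    The core representation, per-vertex disparities interpolated linearly inside each triangle, can be sketched with SciPy. The paper's edge-aware vertex placement, region-based matching and refinement stages are not reproduced; the vertices and disparities below are hypothetical:

        import numpy as np
        from scipy.spatial import Delaunay
        from scipy.interpolate import LinearNDInterpolator

        # Hypothetical vertices in the reference image with refined disparities
        verts = np.array([[0, 0], [639, 0], [0, 479], [639, 479],
                          [320, 240], [100, 400]], dtype=float)
        disp = np.array([12.0, 8.0, 14.0, 9.0, 22.0, 15.0])

        tri = Delaunay(verts)                     # triangular image-domain tessellation
        interp = LinearNDInterpolator(tri, disp)  # piecewise linear over the triangles

        xs, ys = np.meshgrid(np.arange(640), np.arange(480))
        disparity_map = interp(xs, ys)            # dense piecewise-linear disparity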

  7. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2015-08-01

    Full Text Available Human age information can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, current age estimation systems face limitations because of various factors such as camera motion and optical blurring, facial expressions, gender, etc. Motion blur can be present in face images owing to movement of the camera sensor and/or movement of the face during image acquisition. The facial features in captured images can therefore be distorted according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient at enhancing age estimation performance compared with systems that do not employ our method.

  8. Radiometric Cross-Calibration of Gaofen-1 WFV Cameras with Landsat-8 OLI and MODIS Sensors Based on Radiation and Geometry Matching

    Science.gov (United States)

    Li, J.; Wu, Z.; Wei, X.; Zhang, Y.; Feng, F.; Guo, F.

    2018-04-01

    Cross-calibration has the advantages of high precision, low resource requirements and simple implementation, and it has been widely used in recent years. The four wide-field-of-view (WFV) cameras on board the Gaofen-1 satellite provide high spatial resolution and wide combined coverage (4 × 200 km) without onboard calibration. In this paper, the four-band radiometric cross-calibration coefficients of the WFV1 camera were obtained based on radiation and geometry matching, taking the Landsat-8 OLI (Operational Land Imager) sensor as reference. The Scale Invariant Feature Transform (SIFT) feature detection method and a distance and included-angle weighting method were introduced to correct misregistration of the WFV-OLI image pairs. A radiative transfer model was used to eliminate differences between the OLI sensor and the WFV1 camera through a spectral match factor (SMF). The near-infrared band of the WFV1 camera encompasses water vapor absorption bands, so a look-up table (LUT) of SMF as a function of water vapor amount was established to estimate water vapor effects. A surface synchronization experiment was designed to verify the reliability of the cross-calibration coefficients, which appear to perform better than the official coefficients published by the China Centre for Resources Satellite Data and Application (CCRSDA).

  9. Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios

    International Nuclear Information System (INIS)

    Lee, Inho; Oh, Jaesung; Oh, Jun-Ho; Kim, Inhyeok

    2017-01-01

    This research aims to develop a vision sensor system and a recognition algorithm to enable a humanoid to operate autonomously in a disaster environment. In disaster response scenarios, humanoid robots that perform manipulation and locomotion tasks must identify objects in the environment such as those posed by the United States' Defense Advanced Research Projects Agency challenge, e.g., doors, valves, drills, debris, uneven terrain, and stairs, among others. In order for a humanoid to undertake these tasks, we construct a camera–laser fusion system and develop an environmental recognition algorithm. A laser distance sensor and a motor are used to obtain 3D cloud data. We project the 3D cloud data onto a 2D image according to the intrinsic parameters of the camera and the distortion model of the lens. In this manner, our fusion sensor system performs functions such as those performed by the RGB-D sensors generally used in segmentation research. Our recognition algorithm is based on super-pixel segmentation and random sampling. The proposed approach clusters the unorganized cloud data according to geometric characteristics, namely, proximity and co-planarity. To assess the feasibility of our system and algorithm, we utilize the humanoid robot DRC-HUBO, and the results are demonstrated in the accompanying video.
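
    The projection of laser points into the image uses the standard pinhole-plus-distortion model, which OpenCV implements directly. A sketch in which the intrinsics, distortion coefficients and laser-to-camera extrinsics are all hypothetical:

        import cv2
        import numpy as np

        K = np.array([[600.0, 0, 320], [0, 600, 240], [0, 0, 1]])  # assumed intrinsics
        dist = np.array([-0.20, 0.05, 0.0, 0.0, 0.0])              # k1 k2 p1 p2 k3
        rvec = np.zeros(3)                         # laser frame aligned with camera
        tvec = np.array([0.05, 0.0, 0.0])          # assumed 5 cm lateral offset

        # Synthetic stand-in for the 3D cloud gathered by the spinning laser
        cloud = np.random.rand(1000, 3) * [4.0, 3.0, 1.0] + [0.0, 0.0, 2.0]

        pixels, _ = cv2.projectPoints(cloud, rvec, tvec, K, dist)
        pixels = pixels.reshape(-1, 2)             # (u, v) per 3D point; sample the
                                                   # image here to colour the cloud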

  10. Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Inho [Institute for Human and Machine Cognition (IHMC), Florida (United States); Oh, Jaesung; Oh, Jun-Ho [Korea Advanced Institute of Science and Technology (KAIST), Daejeon (Korea, Republic of); Kim, Inhyeok [NAVER Green Factory, Seongnam (Korea, Republic of)

    2017-06-15

    This research aims to develop a vision sensor system and a recognition algorithm to enable a humanoid to operate autonomously in a disaster environment. In disaster response scenarios, humanoid robots that perform manipulation and locomotion tasks must identify objects in the environment such as those posed by the United States' Defense Advanced Research Projects Agency challenge, e.g., doors, valves, drills, debris, uneven terrain, and stairs, among others. In order for a humanoid to undertake these tasks, we construct a camera–laser fusion system and develop an environmental recognition algorithm. A laser distance sensor and a motor are used to obtain 3D cloud data. We project the 3D cloud data onto a 2D image according to the intrinsic parameters of the camera and the distortion model of the lens. In this manner, our fusion sensor system performs functions such as those performed by the RGB-D sensors generally used in segmentation research. Our recognition algorithm is based on super-pixel segmentation and random sampling. The proposed approach clusters the unorganized cloud data according to geometric characteristics, namely, proximity and co-planarity. To assess the feasibility of our system and algorithm, we utilize the humanoid robot DRC-HUBO, and the results are demonstrated in the accompanying video.

  11. The zone of comfort: Predicting visual discomfort with stereo displays

    Science.gov (United States)

    Shibata, Takashi; Kim, Joohwan; Hoffman, David M.; Banks, Martin S.

    2012-01-01

    Recent increased usage of stereo displays has been accompanied by public concern about potential adverse effects associated with prolonged viewing of stereo imagery. There are numerous potential sources of adverse effects, but we focused on how vergence–accommodation conflicts in stereo displays affect visual discomfort and fatigue. In one experiment, we examined the effect of viewing distance on discomfort and fatigue. We found that conflicts of a given dioptric value were slightly less comfortable at far than at near distance. In a second experiment, we examined the effect of the sign of the vergence–accommodation conflict on discomfort and fatigue. We found that negative conflicts (stereo content behind the screen) are less comfortable at far distances and that positive conflicts (content in front of screen) are less comfortable at near distances. In a third experiment, we measured phoria and the zone of clear single binocular vision, which are clinical measurements commonly associated with correcting refractive error. Those measurements predicted susceptibility to discomfort in the first two experiments. We discuss the relevance of these findings for a wide variety of situations including the viewing of mobile devices, desktop displays, television, and cinema. PMID:21778252

  12. Quantitative Evaluation of Stereo Visual Odometry for Autonomous Vessel Localisation in Inland Waterway Sensing Applications

    Directory of Open Access Journals (Sweden)

    Thomas Kriechbaumer

    2015-12-01

    Full Text Available Autonomous survey vessels can increase the efficiency and availability of wide-area river environment surveying as a tool for environment protection and conservation. A key challenge is the accurate localisation of the vessel where bank-side vegetation or urban settlement preclude the conventional use of line-of-sight global navigation satellite systems (GNSS). In this paper, we evaluate unaided visual odometry, via an on-board stereo camera rig attached to the survey vessel, as a novel, low-cost localisation strategy. Feature-based and appearance-based visual odometry algorithms are implemented on a six-degrees-of-freedom platform operating under guided motion, but with stochastic variation in yaw, pitch and roll. Evaluation is based on a 663 m long trajectory (>15,000 image frames) and statistical error analysis against ground truth position from a target tracking tachymeter integrating electronic distance and angular measurements. The position error of the feature-based technique (mean of ±0.067 m) is three times smaller than that of the appearance-based algorithm. From multi-variable statistical regression, we are able to attribute this error to the depth of tracked features from the camera in the scene and to variations in platform yaw. Our findings inform effective strategies to enhance stereo visual localisation for the specific application of river monitoring.

  13. Low-cost far infrared bolometer camera for automotive use

    Science.gov (United States)

    Vieider, Christian; Wissmar, Stanley; Ericsson, Per; Halldin, Urban; Niklaus, Frank; Stemme, Göran; Källhammer, Jan-Erik; Pettersson, Håkan; Eriksson, Dick; Jakobsen, Henrik; Kvisterøy, Terje; Franks, John; VanNylen, Jan; Vercammen, Hans; VanHulsel, Annick

    2007-04-01

    A new low-cost long-wavelength infrared bolometer camera system is under development. It is designed for use with an automatic vision algorithm system as a sensor to detect vulnerable road users in traffic. Looking 15 m in front of the vehicle, it can, in the case of an unavoidable impact, activate a brake assist system or another deployable protection system. To achieve our cost target of below €100 for the sensor system, we evaluated the required performance and were able to reduce the sensitivity requirement to 150 mK and the pixel resolution to 80 x 30. We address all the main cost drivers, such as sensor size and production yield, along with vacuum packaging, optical components and large-volume manufacturing technologies. The detector array is based on a new type of high-performance thermistor material: very thin Si/SiGe single-crystal multilayers are grown epitaxially. Due to the resulting valence barriers, a high temperature coefficient of resistance is achieved (3.3 %/K). Simultaneously, the high-quality crystalline material provides very low 1/f-noise characteristics and uniform material properties. The thermistor material is transferred from the original substrate wafer to the read-out circuit using adhesive wafer bonding and subsequent thinning. Bolometer arrays can then be fabricated using industry-standard MEMS processes and materials. The inherently good detector performance allows us to relax the vacuum requirement, and we can implement the wafer-level vacuum packaging technology used in established automotive sensor fabrication. The optical design is reduced to a single-lens camera. We are developing a low-cost molding process using a novel chalcogenide glass (GASIR®3) and integrating anti-reflective and anti-erosion properties using a diamond-like carbon coating.

  14. CeB6 Sensor for Thermoelectric Single-Photon Detector

    Directory of Open Access Journals (Sweden)

    Armen KUZANIAN

    2015-08-01

    Full Text Available Interest in single-photon detectors has recently sharply increased. The most developed single-photon detectors are currently based on superconductors. According to theory, thermoelectric single-photon detectors can compete with superconducting detectors. The operational principle of a thermoelectric detector is based on photon absorption in an absorber, as a result of which a temperature gradient is generated across the sensor. In this work we present the results of computer modeling of the heat distribution processes after absorption of a photon of 1 keV - 1 eV energy in different areas of the absorber, for different geometries of the tungsten absorber and the cerium hexaboride sensor. The time dependence of the temperature difference between the ends of the thermoelectric sensor, and of the electric potential appearing across the sensor, is calculated. The results of the calculations show that it is realistic to detect single photons from the IR to the X-ray range and to determine their energy. Count rates of up to hundreds of gigahertz can be achieved.

  15. A Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    Science.gov (United States)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  16. A SPATIO-SPECTRAL CAMERA FOR HIGH RESOLUTION HYPERSPECTRAL IMAGING

    Directory of Open Access Journals (Sweden)

    S. Livens

    2017-08-01

    Full Text Available Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600–900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475–925 nm), and we discuss future work.

  17. A Vision-Based Sensor for Noncontact Structural Displacement Measurement

    Science.gov (United States)

    Feng, Dongming; Feng, Maria Q.; Ozer, Ekin; Fukuda, Yoshio

    2015-01-01

    Conventional displacement sensors have limitations in practical applications. This paper develops a vision sensor system for remote measurement of structural displacements. An advanced template matching algorithm, referred to as the upsampled cross correlation, is adopted and further developed into a software package for real-time displacement extraction from video images. By simply adjusting the upsampling factor, better subpixel resolution can be easily achieved to improve the measurement accuracy. The performance of the vision sensor is first evaluated through a laboratory shaking table test of a frame structure, in which the displacements at all the floors are measured by using one camera to track either high-contrast artificial targets or low-contrast natural targets on the structural surface such as bolts and nuts. Satisfactory agreements are observed between the displacements measured by the single camera and those measured by high-performance laser displacement sensors. Then field tests are carried out on a railway bridge and a pedestrian bridge, through which the accuracy of the vision sensor in both time and frequency domains is further confirmed in realistic field environments. Significant advantages of the noncontact vision sensor include its low cost, ease of operation, and flexibility to extract structural displacement at any point from a single measurement. PMID:26184197
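
    Upsampled cross correlation is a standard efficient subpixel registration technique, and scikit-image ships an implementation (which may differ in detail from the authors' software package). A sketch of displacement tracking with it; the frame source and pixel-to-millimetre scale are placeholders:

        import numpy as np
        from skimage.registration import phase_cross_correlation

        def track_displacement(template, rois, upsample_factor=100):
            """Subpixel (row, col) shift of each frame ROI against the template.

            Each ROI must have the same shape as the template; upsample_factor=100
            resolves 1/100 px, so raising it refines the subpixel resolution."""
            shifts = []
            for roi in rois:
                shift, _, _ = phase_cross_correlation(
                    template, roi, upsample_factor=upsample_factor)
                shifts.append(shift)
            return np.array(shifts)

        # displacement in mm = pixel shift * (known target size in mm / size in px)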

  18. Precision 3D Surface Reconstruction from LRO NAC Images Using Semi-Global Matching with Coupled Epipolar Rectification

    Science.gov (United States)

    Hu, H.; Wu, B.

    2017-07-01

    The Narrow-Angle Camera (NAC) on board the Lunar Reconnaissance Orbiter (LRO) comprises a pair of closely attached high-resolution push-broom sensors, in order to improve the swath coverage. However, the two image sensors do not share the same lenses and cannot be modelled geometrically using a single physical model. Thus, previous works on dense matching of stereo pairs of NAC images would generally create two to four stereo models, each with an irregular and overlapping region of varying size. Semi-Global Matching (SGM) is a well-known dense matching method and has been widely used for image-based 3D surface reconstruction. SGM is a global matching algorithm relying on global inference in a larger context, rather than on individual pixels, to establish stable correspondences. The stereo configuration of LRO NAC images causes severe problems for image matching methods such as SGM, which emphasize a global matching strategy. Aiming at using SGM for image matching of LRO NAC stereo pairs for precision 3D surface reconstruction, this paper presents a coupled epipolar rectification method for LRO NAC stereo images, which merges the image pair in the disparity space, so that only one stereo model needs to be estimated. For a stereo pair (four NAC images), the method starts with boresight calibration by finding correspondences in the small overlapping stripe between each pair of NAC images and bundle adjustment of the stereo pair, in order to remove the vertical disparities. Then, the dominant direction of the images is estimated by iteratively projecting the center of the coverage area onto the reference image and back-projecting it onto the bounding-box plane determined by the image orientation parameters. The dominant direction determines an affine model, by which the pair of NAC images is warped into object space with a given ground resolution, while a mask is produced indicating the owner of each pixel. SGM is then used to generate a disparity map.

  19. PRECISION 3D SURFACE RECONSTRUCTION FROM LRO NAC IMAGES USING SEMI-GLOBAL MATCHING WITH COUPLED EPIPOLAR RECTIFICATION

    Directory of Open Access Journals (Sweden)

    H. Hu

    2017-07-01

    Full Text Available The Narrow-Angle Camera (NAC) on board the Lunar Reconnaissance Orbiter (LRO) comprises a pair of closely attached high-resolution push-broom sensors, in order to improve the swath coverage. However, the two image sensors do not share the same lenses and cannot be modelled geometrically using a single physical model. Thus, previous works on dense matching of stereo pairs of NAC images would generally create two to four stereo models, each with an irregular and overlapping region of varying size. Semi-Global Matching (SGM) is a well-known dense matching method and has been widely used for image-based 3D surface reconstruction. SGM is a global matching algorithm relying on global inference in a larger context, rather than on individual pixels, to establish stable correspondences. The stereo configuration of LRO NAC images causes severe problems for image matching methods such as SGM, which emphasize a global matching strategy. Aiming at using SGM for image matching of LRO NAC stereo pairs for precision 3D surface reconstruction, this paper presents a coupled epipolar rectification method for LRO NAC stereo images, which merges the image pair in the disparity space, so that only one stereo model needs to be estimated. For a stereo pair (four NAC images), the method starts with boresight calibration by finding correspondences in the small overlapping stripe between each pair of NAC images and bundle adjustment of the stereo pair, in order to remove the vertical disparities. Then, the dominant direction of the images is estimated by iteratively projecting the center of the coverage area onto the reference image and back-projecting it onto the bounding-box plane determined by the image orientation parameters. The dominant direction determines an affine model, by which the pair of NAC images is warped into object space with a given ground resolution, while a mask is produced indicating the owner of each pixel. SGM is then used to generate a disparity map.

  20. Spectral Characterization of a Prototype SFA Camera for Joint Visible and NIR Acquisition

    Directory of Open Access Journals (Sweden)

    Jean-Baptiste Thomas

    2016-06-01

    Full Text Available Multispectral acquisition improves machine vision, since it permits capturing more information on object surface properties than color imaging. The concept of spectral filter arrays has been developed recently and allows multispectral single-shot acquisition with a compact camera design. Due to filter manufacturing difficulties, there was until recently no system available covering a large span of the spectrum, i.e., joint visible and near-infrared acquisition. This article presents a prototype camera that captures seven visible bands and one near-infrared band on the same sensor chip. A calibration is proposed to characterize the sensor, and images are captured. Data are provided as supplementary material for further analysis and simulations. This opens a new range of applications in the security, robotics, automotive and medical fields.

  1. Cultural heritage omni-stereo panoramas for immersive cultural analytics - From the Nile to the Hijaz

    KAUST Repository

    Smith, Neil; Cutchin, Steven; Kooima, Robert L.; Ainsworth, Richard A.; Sandin, Daniel J.; Schulze, Jürgen P.; Prudhomme, Andrew; Kuester, Falko; Levy, Thomas E.; Defanti, Thomas A.

    2013-01-01

    The digital imaging acquisition and visualization techniques described here provide a hyper-realistic stereoscopic spherical capture of cultural heritage sites. An automated dual-camera system is used to capture sufficient stereo digital images to cover a sphere or cylinder. The resulting stereo images are projected undistorted in VR systems, providing an immersive virtual environment in which researchers can collaboratively study the important textural details of an excavation or historical site. This imaging technique complements existing technologies such as LiDAR or SfM, providing more detailed textural information that can be used in conjunction with them for analysis and visualization. The advantages of this digital imaging technique for cultural heritage can be seen in its non-invasive and rapid capture of heritage sites for documentation, analysis, and immersive visualization. The technique has been applied to several significant heritage sites in Luxor, Egypt and Saudi Arabia.

  2. Cultural heritage omni-stereo panoramas for immersive cultural analytics - From the Nile to the Hijaz

    KAUST Repository

    Smith, Neil

    2013-09-01

    The digital imaging acquisition and visualization techniques described here provide a hyper-realistic stereoscopic spherical capture of cultural heritage sites. An automated dual-camera system is used to capture sufficient stereo digital images to cover a sphere or cylinder. The resulting stereo images are projected undistorted in VR systems, providing an immersive virtual environment in which researchers can collaboratively study the important textural details of an excavation or historical site. This imaging technique complements existing technologies such as LiDAR or SfM, providing more detailed textural information that can be used in conjunction with them for analysis and visualization. The advantages of this digital imaging technique for cultural heritage can be seen in its non-invasive and rapid capture of heritage sites for documentation, analysis, and immersive visualization. The technique has been applied to several significant heritage sites in Luxor, Egypt and Saudi Arabia.

  3. Advanced staring Si PIN visible sensor chip assembly for Bepi-Colombo mission to Mercury

    Science.gov (United States)

    Mills, R. E.; Drab, J. J.; Gin, A.

    2009-08-01

    The planet Mercury, by its near proximity to the sun, has always posed a formidable challenge to spacecraft. The Bepi-Colombo mission, coordinated by the European Space Agency, will be a pioneering effort in the investigation of this planet. Raytheon Vision Systems (RVS) has been given the opportunity to develop the radiation hardened, high operability, high SNR, advanced staring focal plane array (FPA) for the spacecraft destined to explore the planet Mercury. This mission will launch in 2013 on a journey lasting approximately 6 years. When it arrives at Mercury in August 2019, it will endure temperatures as high as 350°C as well as relatively high radiation environments during its 1 year data collection period from September 2019 until September 2020. To support this challenging goal, RVS has designed and produced a custom visible sensor based on a 2048 x 2048 (2k2) format with a 10 μm unit cell. This sensor will support both the High Resolution Imaging Camera (HRIC) and the Stereo Camera (STC) instruments. This dual-purpose sensor was designed to achieve high sensitivity as well as low input noise (<100 e-) for space-based, low-light conditions. It must also maintain its performance parameters in a total ionizing dose environment of up to 70 kRad (Si), as well as immunity to latch-up and single event upset. This paper will show full sensor chip assembly data highlighting the performance parameters prior to irradiation. Radiation testing performance will be reported by an independent source in a subsequent paper.

  4. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba long ago began manufacturing black-and-white radiation-resistant camera tubes employing non-browning face-plate glass for ITV cameras used in nuclear power plants. Now, in response to increasing demand in the nuclear power field, the company is tackling the development of radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented herein are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  5. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
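
    The flavour of such a fuzzy pan controller can be sketched with triangular membership functions and weighted-average defuzzification. Every membership function, rule and rate below is invented for illustration and is not from the NASA system:

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership: rises at a, peaks at b, falls to zero at c."""
            return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

        def pan_rate(offset):
            """Normalised horizontal target offset in [-1, 1] -> pan rate (deg/s).

            Three toy rules: target left -> pan left, centred -> hold,
            target right -> pan right."""
            mu = np.array([tri(offset, -1.5, -1.0, 0.0),   # "target is left"
                           tri(offset, -1.0,  0.0, 1.0),   # "target is centred"
                           tri(offset,  0.0,  1.0, 1.5)])  # "target is right"
            rates = np.array([-10.0, 0.0, 10.0])           # rule consequents (deg/s)
            return float(mu @ rates / mu.sum())            # weighted-average defuzzification

        for off in (-0.8, -0.2, 0.0, 0.5):
            print(f"offset {off:+.1f} -> pan {pan_rate(off):+.1f} deg/s")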

  6. Noninvasive, three-dimensional full-field body sensor for surface deformation monitoring of human body in vivo.

    Science.gov (United States)

    Chen, Zhenning; Shao, Xinxing; He, Xiaoyuan; Wu, Jialin; Xu, Xiangyang; Zhang, Jinlin

    2017-09-01

    Noninvasive, three-dimensional (3-D), full-field surface deformation measurements of the human body are important for biomedical investigations. We proposed a 3-D noninvasive, full-field body sensor based on stereo digital image correlation (stereo-DIC) for surface deformation monitoring of the human body in vivo. First, by applying an improved water-transfer printing (WTP) technique to transfer optimized speckle patterns onto the skin, the body sensor was conveniently and harmlessly fabricated directly onto the human body. Then, stereo-DIC was used to achieve 3-D noncontact and noninvasive surface deformation measurements. The accuracy and efficiency of the proposed body sensor were verified and discussed by considering different complexions. Moreover, the fabrication of speckle patterns on human skin, which has always been considered a challenging problem, was shown to be feasible, effective, and harmless as a result of the improved WTP technique. An application of the proposed stereo-DIC-based body sensor was demonstrated by measuring the pulse wave velocity of human carotid artery.

  7. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first part captures the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second part uses a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
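
    As a toy illustration of the row-sparsity idea, the sketch below selects representative frames from a self-expressive model with a capped l21-style row penalty, minimized by proximal gradient descent; the data, parameters, and the heuristic prox step are illustrative assumptions, not the paper's exact formulation.

```python
# Toy sketch of representative selection via row-sparse self-expression:
#   min ||X - X C||_F^2 + lam * sum_i min(||C_i,:||_2, theta)   (capped l21)
# solved by proximal gradient with a heuristic prox for the capped norm.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 200))      # 200 frames, 20-dim features (synthetic)
lam, theta, step, iters = 0.5, 1.0, 1e-3, 300

C = np.zeros((200, 200))
for _ in range(iters):
    G = X.T @ (X @ C - X)               # gradient of the quadratic fit term
    C -= step * G
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    # Heuristic prox: rows under the cap get group soft-thresholding;
    # rows over the cap are left as-is (their penalty is constant there).
    shrink = np.maximum(1.0 - step * lam / (norms + 1e-12), 0.0)
    C = np.where(norms <= theta, C * shrink, C)

row_norm = np.linalg.norm(C, axis=1)
representatives = np.argsort(row_norm)[-5:]   # frames with the largest rows
print(representatives)
```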

  8. Calibration of high resolution digital camera based on different photogrammetric methods

    International Nuclear Information System (INIS)

    Hamid, N F A; Ahmad, A

    2014-01-01

    This paper presents a method of calibrating a high-resolution digital camera based on different configurations comprising stereo and convergent imaging. Both methods are performed in laboratory and field calibration. Laboratory calibration is based on a 3D test field in which a calibration plate of dimension 0.4 m × 0.4 m with a grid of targets at different heights is used. Field calibration uses the same concept of a 3D test field, comprising 81 target points located on flat ground over a 9 m × 9 m area. In this study, a non-metric high-resolution digital camera, a Canon PowerShot SX230 HS, was calibrated in the laboratory and in the field using different configurations for data acquisition. The aim of the calibration is to investigate whether the internal digital camera parameters, such as focal length, principal point and other parameters, remain the same or vary. In the laboratory, a scale bar is placed in the test field for scaling the images, and approximate coordinates are used for the calibration process. A similar method is utilized in the field calibration. For both test fields, the digital images were acquired within a short period using stereo and convergent configurations. For field calibration, aerial digital images were acquired using an unmanned aerial vehicle (UAV) system. All the images were processed using photogrammetric calibration software. Different calibration results were obtained for the laboratory and field calibrations, and their accuracy is evaluated based on standard deviation. In general, for photogrammetric and other applications the digital camera must be calibrated to obtain accurate measurements, and the best method of calibration depends on the type of application. For most applications the digital camera is calibrated on site; hence, field calibration is the best method of calibration and could be employed for obtaining accurate results.
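
    For reference, a minimal sketch of the kind of interior-orientation recovery the paper's calibration software performs, using OpenCV's planar-target routine; the chessboard pattern, 40 mm pitch, and image folder are assumptions standing in for the study's 3D test-field targets.

```python
# Hedged sketch of camera calibration with OpenCV; grid size, square pitch,
# and the image folder are illustrative assumptions.
import glob
import cv2
import numpy as np

pattern = (9, 6)                  # inner corners of an assumed chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.04  # 40 mm pitch

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.jpg"):         # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# Recovers focal length, principal point, and distortion coefficients --
# the interior-orientation parameters the study compares across setups.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("RMS reprojection error:", rms, "\nK =\n", K)
```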

  9. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    Science.gov (United States)

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for determining electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that realizes simultaneous acquisition of multi-angle head images from a single camera position. By aligning two planar mirrors at an angle of 51.4° (approximately 360°/7, so that the direct view and six reflections yield seven views), seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. The elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after measurement of the calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.

  10. Decision Support System to Choose Digital Single Lens Camera with Simple Additive Weighting Method

    Directory of Open Access Journals (Sweden)

    Tri Pina Putri

    2016-11-01

    Full Text Available One of the technologies that is evolving today is the Digital Single Lens Reflex (DSLR) camera. The number of products makes it difficult for users to choose an appropriate camera based on their criteria. Users may rely on several aids when choosing a camera, such as magazines, the internet, and other media. This paper discusses a web-based decision support system for choosing cameras by using the SAW (Simple Additive Weighting) method in order to make the decision process more effective and efficient. The system is expected to give recommendations for the camera that is appropriate to the user’s needs and criteria, based on the cost, the resolution, the features, the ISO, and the sensor. The system was implemented using PHP and MySQL. Based on the results of a questionnaire distributed to 20 respondents, 60% of respondents agree that this decision support system can help users to choose an appropriate DSLR camera in accordance with the user’s needs, 60% of respondents agree that this decision support system makes choosing a DSLR camera more effective, and 75% of respondents agree that this system is more efficient. In addition, 60.55% of respondents agree that this system has met the 5 Es Usability Framework.
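
    As a sketch of how SAW ranks alternatives, the snippet below normalizes a decision matrix (min/x for cost criteria, x/max for benefit criteria) and ranks by the weighted sum; the cameras, criteria values, and weights are made-up examples, not the paper's data.

```python
# Minimal sketch of Simple Additive Weighting (SAW) over hypothetical
# cameras and criteria; weights and the cost/benefit split are assumptions.
import numpy as np

# rows: cameras, columns: [cost, resolution (MP), features, ISO, sensor score]
X = np.array([[650.0, 24.2, 7, 25600, 8],
              [480.0, 18.0, 5, 12800, 6],
              [900.0, 30.4, 9, 51200, 9]])
weights = np.array([0.3, 0.25, 0.15, 0.15, 0.15])
is_cost = np.array([True, False, False, False, False])

# SAW normalization: min/x for cost criteria, x/max for benefit criteria.
R = np.where(is_cost, X.min(axis=0) / X, X / X.max(axis=0))
scores = R @ weights
print("ranking (best first):", np.argsort(scores)[::-1])
```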

  11. Observation of atomic arrangement by using photoelectron holography and atomic stereo-photograph

    International Nuclear Information System (INIS)

    Matsushita, Tomohiro; Guo, Fang Zhun; Agui, Akane; Matsui, Fumihiko; Daimon, Hiroshi

    2006-01-01

    Both photoelectron holography and the atomic stereo-photograph are atomic structure analysis methods based on photoelectron diffraction. They have six notable features: 1) direct determination of atomic structure; 2) measurement of the three-dimensional atomic arrangement surrounding a specific element in the sample; 3) determination of the position of an atom in spite of its electron cloud; 4) no need for a perfect periodic structure; 5) good sensitivity to structure in the neighborhood of the surface; and 6) information on electronic structure. Photoelectron diffraction and the principle and measurement system of photoelectron holography and the atomic stereo-photograph are explained. As application examples of the atomic stereo-photograph, single-crystal copper and graphite are presented. As examples of photoelectron holography, Si(001)2p and Ge(001)3s are explained. (S.Y.)

  12. Design of an experimental four-camera setup for enhanced 3D surface reconstruction in microsurgery

    Directory of Open Access Journals (Sweden)

    Marzi Christian

    2017-09-01

    Full Text Available Future fully digital surgical visualization systems enable a wide range of new options. Owing to optomechanical limitations, a main disadvantage of today’s surgical microscopes is their inability to provide arbitrary perspectives to more than two observers. In a fully digital microscopic system, multiple arbitrary views can be generated from a 3D reconstruction. Modern surgical microscopes allow the eyepieces to be replaced by cameras in order to record stereoscopic videos. A reconstruction from these videos can only contain the amount of detail the recording camera system gathers from the scene; occluded surfaces can therefore produce a faulty reconstruction for deviating stereoscopic perspectives. By adding cameras that record the object from different angles, additional information about the scene is acquired, allowing the reconstruction to be improved. Our approach is to use a fixed four-camera setup as a front-end system to capture enhanced 3D topography of a pseudo-surgical scene. This experimental setup provides images for the reconstruction algorithms and for the generation of multiple observer stereo perspectives. The concept of the designed setup is based on the common main objective (CMO) principle of current surgical microscopes. These systems are well established and optically mature; furthermore, the CMO principle allows a more compact design and lower calibration effort than cameras with separate optics. Behind the CMO, four pupils separate the four channels, each of which is recorded by its own camera. The designed system captures an area of approximately 28 mm × 28 mm with four cameras, allowing images from six different stereo perspectives to be processed. In order to verify the setup, it is modelled in silico. It can be used in further studies to test algorithms for 3D reconstruction from up to four perspectives and to provide information about the impact of additionally recorded perspectives on the enhancement of a reconstruction.

  13. Retinal fundus imaging with a plenoptic sensor

    Science.gov (United States)

    Thurin, Brice; Bloch, Edward; Nousias, Sotiris; Ourselin, Sebastien; Keane, Pearse; Bergeles, Christos

    2018-02-01

    Vitreoretinal surgery is moving towards 3D visualization of the surgical field. This requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera, in which an array of micro-lenses is placed in front of a conventional sensor. With a single snapshot, a stack of images focused at different depths is produced on the fly, which provides enhanced depth perception for the surgeon. Difficulty in depth localization of features and frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve depth perception and eliminate the need to manually refocus on the instruments during surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera incorporates an array of micro-lenses which projects an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of the retina, or display an extended depth-of-field image where everything is in focus. The design and system performance of the plenoptic fundus camera are detailed. We conclude by showing in vivo data recorded with our device.
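
    A toy illustration of the digital refocusing principle: each sub-aperture view is shifted in proportion to its baseline offset and the results are averaged, so a given choice of slope brings one depth plane into focus. The synthetic views and the slope parameter below are assumptions, not the camera's actual decoding pipeline.

```python
# Shift-and-add refocusing sketch over synthetic sub-aperture views.
import numpy as np

rng = np.random.default_rng(0)
base = rng.uniform(size=(64, 64))
# Synthetic 3x3 grid of sub-aperture views with one pixel of parallax
# per unit of aperture offset (du, dv in {-1, 0, 1}).
views = {(du, dv): np.roll(np.roll(base, du, axis=0), dv, axis=1)
         for du in (-1, 0, 1) for dv in (-1, 0, 1)}

def refocus(views, slope):
    """Shift each sub-aperture view by slope*(du, dv) and average."""
    acc = np.zeros_like(base)
    for (du, dv), img in views.items():
        acc += np.roll(np.roll(img, int(round(slope * du)), axis=0),
                       int(round(slope * dv)), axis=1)
    return acc / len(views)

sharp = refocus(views, slope=-1.0)   # undoes the synthetic parallax exactly
print(np.allclose(sharp, base))      # True: this plane is back in focus
```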

  14. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz; windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands. The camera operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and it may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  15. Stereo-Optic High Definition Imaging: A New Technology to Understand Bird and Bat Avoidance of Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Evan; Goodale, Wing; Burns, Steve; Dorr, Chris; Duron, Melissa; Gilbert, Andrew; Moratz, Reinhard; Robinson, Mark

    2017-07-21

    There is a critical need to develop monitoring tools to track aerofauna (birds and bats) in three dimensions around wind turbines. New monitoring systems will reduce permitting uncertainty by increasing the understanding of how birds and bats interact with wind turbines, which will improve the accuracy of impact predictions. Biodiversity Research Institute (BRI), the University of Maine Orono School of Computing and Information Science (UMaine SCIS), HiDef Aerial Surveying Limited (HiDef), and SunEdison, Inc. (formerly First Wind) responded to this need by using stereo-optic cameras with near-infrared (nIR) technology to investigate new methods for documenting aerofauna behavior around wind turbines. The stereo-optic camera system used two synchronized high-definition video cameras with fisheye lenses and processing software that detected moving objects, which could be identified in post-processing. The stereo-optic imaging system offered the ability to extract 3-D position information from pairs of images captured from different viewpoints. Fisheye lenses allowed for a greater field of view, but required more complex image rectification to contend with fisheye distortion. The ability to obtain 3-D positions provided crucial data on the trajectory (speed and direction) of a target, which, when the technology is fully developed, will provide data on how animals respond to and interact with wind turbines. This project focused on testing the performance of the camera system, improving video review processing time, advancing the 3-D tracking technology, and moving the system from Technology Readiness Level 4 to 5. To achieve these objectives, we determined the size and distance at which aerofauna (particularly eagles) could be detected and identified, created efficient data management systems, improved the video post-processing viewer, and attempted refinement of 3-D modeling with respect to fisheye lenses. The 29-megapixel camera system

  16. Single-analyte to multianalyte fluorescence sensors

    Science.gov (United States)

    Lavigne, John J.; Metzger, Axel; Niikura, Kenichi; Cabell, Larry A.; Savoy, Steven M.; Yoo, J. S.; McDevitt, John T.; Neikirk, Dean P.; Shear, Jason B.; Anslyn, Eric V.

    1999-05-01

    The rational design of small molecules for the selective complexation of analytes has reached a level of sophistication such that a high degree of prediction is possible. An effective strategy for transforming these hosts into sensors involves covalently attaching a fluorophore to the receptor which displays some fluorescence modulation when analyte is bound. Competition methods, such as those used with antibodies, are also amenable to these synthetic receptors, yet there are few examples. In our laboratories, the use of common dyes in competition assays with small molecules has proven very effective; for example, an assay for citrate in beverages and an assay for the secondary messenger IP3 in cells have been developed. Another approach we have explored focuses on multi-analyte sensor arrays in an attempt to mimic the mammalian sense of taste. Our system utilizes polymer resin beads with the desired sensors covalently attached. These functionalized microspheres are then immobilized in micromachined wells on a silicon chip, thereby creating our "taste buds". Exposure of the resin to analyte causes a change in the transmittance of the bead, which can be fluorescent or colorimetric. Optical interrogation of the microspheres, by illuminating from one side of the wafer and collecting the signal on the other, results in an image. These data streams are collected using a CCD camera, which creates red, green, and blue (RGB) patterns that are distinct and reproducible for their environments. Analysis of these data can identify and quantify the analytes present.

  17. Topomapping of Mars with HRSC images, ISIS, and a commercial stereo workstation

    Science.gov (United States)

    Kirk, R. L.; Howington-Kraus, E.; Galuszka, D.; Redding, B.; Hare, T. M.

    HRSC on Mars Express [1] is the first camera designed specifically for stereo imaging to be used in mapping a planet other than the Earth. Nine detectors view the planet through a single lens to obtain four-band color coverage and stereo images at 3 to 5 distinct angles in a single pass over the target. The short interval between acquisition of the images ensures that changes that could interfere with stereo matching are minimized. The resolution of the nadir channel is 12.5 m at periapsis, poorer at higher points in the elliptical orbit. The stereo channels are typically operated at 2x coarser resolution and the color channels at 4x or 8x. Since the commencement of operations in January 2004, approximately 58% of Mars has been imaged at nadir resolutions better than 50 m/pixel. This coverage is expected to increase significantly during the recently approved extended mission of Mars Express, giving the HRSC dataset enormous potential for regional and even global mapping. Systematic processing of the HRSC images is carried out at the German Aerospace Center (DLR) in Berlin. Preliminary digital topographic models (DTMs) at 200 m/post resolution and orthorectified image products are produced in near-realtime for all orbits, by using the VICAR software system [2]. The tradeoff of universal coverage but limited DTM resolution makes these products optimal for many but not all research studies. Experiments on adaptive processing with the same software, for a limited number of orbits, have allowed DTMs of higher resolution (down to 50 m/post) to be produced [3]. In addition, numerous Co-Investigators on the HRSC team (including ourselves) are actively researching techniques to improve on the standard products, by such methods as bundle adjustment, alternate approaches to stereo DTM generation, and refinement of DTMs by photoclinometry (shape-from-shading) [4]. The HRSC team is conducting a systematic comparison of these alternative processing approaches by arranging for

  18. Stereoselectivity in metallocene-catalyzed coordination polymerization of renewable methylene butyrolactones: From stereo-random to stereo-perfect polymers

    KAUST Repository

    Chen, Xia; Caporaso, Lucia; Cavallo, Luigi; Chen, Eugene You Xian

    2012-01-01

    Coordination polymerization of renewable α-methylene-γ-(methyl)butyrolactones by chiral C2-symmetric zirconocene catalysts produces stereo-random, highly stereo-regular, or perfectly stereo-regular polymers, depending on the monomer and catalyst structures. Computational studies yield a fundamental understanding of the stereocontrol mechanism governing these new polymerization reactions mediated by chiral metallocenium catalysts. © 2012 American Chemical Society.

  20. Multiple-Event, Single-Photon Counting Imaging Sensor

    Science.gov (United States)

    Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.

    2011-01-01

    The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method of registering the photon count. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time cannot be made arbitrarily short, this leads to very low dynamic range and makes the sensor useful only in very-low-flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in the pixels; the resulting low photon collection efficiency substantially negates any benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register one million or more photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array is capable of performing single-photon counting in environments ranging from ultra-low light to high flux. Moreover, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and a close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.

  1. Computer vision camera with embedded FPGA processing

    Science.gov (United States)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a computer vision camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA is a medium-size device equivalent to 25,000 logic gates, connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a hardware description language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.

  2. 3D Reconstruction and Restoration Monitoring of Sculptural Artworks by a Multi-Sensor Framework

    Directory of Open Access Journals (Sweden)

    Sandro Barone

    2012-12-01

    Full Text Available Nowadays, optical sensors are used to digitize sculptural artworks by exploiting various contactless technologies. Cultural heritage applications may concern 3D reconstructions of sculptural shapes distinguished by small details distributed over large surfaces. These applications require robust multi-view procedures based on aligning several high-resolution 3D measurements. In this paper, the integration of a 3D structured light scanner and a stereo photogrammetric sensor is proposed with the aim of reliably reconstructing large free-form artworks. The structured light scanner provides high-resolution range maps captured from different views. The stereo photogrammetric sensor measures the spatial location of each view by tracking a marker frame integral to the optical scanner. This procedure allows the computation of the rotation-translation matrix that transforms the range maps from local view coordinate systems to a unique global reference system defined by the stereo photogrammetric sensor. The artwork reconstructions can be further augmented by linking metadata related to restoration processes. In this paper, a methodology has been developed to map metadata to 3D models by capturing spatial references using a passive stereo-photogrammetric sensor. The multi-sensor framework has been demonstrated through the 3D reconstruction of a Statue of Hope located at the English Cemetery in Florence. This sculptural artwork was a severe test due to the non-cooperative environment and the complex shape features distributed over a large surface.
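
    The view-merging step reduces to applying a rigid rotation-translation, p_global = R·p_local + t, to each range map. A minimal sketch with a made-up pose standing in for the photogrammetric tracking output:

```python
# Transform a range map from its local view frame into the global frame
# defined by the tracked marker frame. R and t are illustrative stand-ins.
import numpy as np

def to_global(points_local, R, t):
    """Apply a rotation-translation to an (N, 3) point cloud."""
    return points_local @ R.T + t

theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])            # assumed pose of this scan view
t = np.array([0.5, -0.2, 1.0])             # assumed translation (metres)

scan = np.random.default_rng(1).uniform(size=(1000, 3))  # one range map
merged = to_global(scan, R, t)             # ready to fuse with other views
```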

  3. Stereo particle image velocimetry set up for measurements in the wake of scaled wind turbines

    Science.gov (United States)

    Campanardi, Gabriele; Grassi, Donato; Zanotti, Alex; Nanos, Emmanouil M.; Campagnolo, Filippo; Croce, Alessandro; Bottasso, Carlo L.

    2017-08-01

    Stereo particle image velocimetry measurements were carried out in the boundary layer test section of the Politecnico di Milano large wind tunnel to survey the wake of a scaled wind turbine model designed and developed by Technische Universität München. The stereo PIV instrumentation was set up to survey the three velocity components on cross-flow planes at different longitudinal locations. The area of investigation covered the entire extent of the wind turbine's wake, which was scanned using two separate traversing systems for the laser and the cameras. This instrumentation setup made it possible to rapidly obtain high-quality results suitable for characterising the behaviour of the flow field in the wake of the scaled wind turbine. This is very useful for evaluating the performance of wind farm control methodologies based on wake redirection and for validating CFD tools.

  4. Determining the orientation of the observed object in three-dimensional space using stereo vision methods

    International Nuclear Information System (INIS)

    Ponomarev, S

    2014-01-01

    The task of matching an image of an object with its template is central to many optoelectronic systems. Solving the matching problem in three-dimensional space, in contrast to structural alignment in the image plane, allows a larger amount of information about the object to be used in determining its orientation, which may increase the probability of correct matching. When stereo vision methods are used to construct a three-dimensional image of the object, it becomes possible to achieve invariance with respect to the background and to the distance to the observed object; only the three orientation angles of the object relative to the camera remain uncertain and require measurement. This paper proposes a method for determining the orientation angles of the observed object in three-dimensional space based on the processing of stereo image sequences. A disparity-map segmentation method that ensures invariance to the background is presented. Quantitative estimates of the effectiveness of the proposed method are presented and discussed.
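
    For context, here is a minimal sketch of producing the disparity map such a method segments, using OpenCV's semi-global matcher on a rectified pair; the file names and matcher parameters are illustrative assumptions.

```python
# Compute a disparity map from a rectified stereo pair with OpenCV SGBM.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(minDisparity=0,
                             numDisparities=128,       # must be multiple of 16
                             blockSize=5,
                             P1=8 * 5 * 5,             # smoothness penalties
                             P2=32 * 5 * 5)
disp = sgbm.compute(left, right).astype(float) / 16.0  # fixed-point -> pixels

# Pixels with similar disparity lie at similar depth, so segmenting this
# map separates the observed object from the distant background.
```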

  5. Visual Servoing of Mobile Microrobot with Centralized Camera

    Directory of Open Access Journals (Sweden)

    Kiswanto Gandjar

    2018-01-01

    Full Text Available In this paper, a mechanism for visual servoing of mobile microrobots with a centralized camera is developed, especially for swarm AI applications. In microrobotics the robots are very small and their movements are correspondingly small. By replacing the various sensors that would otherwise be needed with a single centralized vision sensor, we can eliminate many components and the need for calibration on every robot. A study and design for a visual-servoing mobile microrobot has been developed. The system uses multi-object tracking and the Hough transform to identify the positions of the robots, and it can control multiple robots at once with an accuracy of 5-6 pixels from the desired target.

  6. Stereo 3D spatial phase diagrams

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Jinwu, E-mail: kangjw@tsinghua.edu.cn; Liu, Baicheng, E-mail: liubc@tsinghua.edu.cn

    2016-07-15

    Phase diagrams serve as fundamental guidance in materials science and engineering. Binary P-T-X (pressure–temperature–composition) and multi-component phase diagrams have complex spatial geometry, which makes them difficult to understand. The authors constructed 3D stereo binary P-T-X, typical ternary, and some quaternary phase diagrams. A phase diagram construction algorithm based on the phase reaction data calculated in PandaT was developed, and the 3D stereo phase diagram of the Al-Cu-Mg ternary system is presented. These phase diagrams can be illustrated by wireframe, surface, solid, or their mixture, and isotherms and isopleths can be generated. All of these can be displayed in the three typical ways: electronic shutter, polarization, and anaglyph (for example, red-cyan glasses). In particular, they can be printed on paper with a 3D stereo effect and viewed with the aid of anaglyph glasses, which makes a 3D stereo book of phase diagrams a reality. Compared with traditional illustration, under 3D stereo display the front of the phase diagram protrudes from the screen and the back stretches far behind it, so the spatial structure can be clearly and immediately perceived. These 3D stereo phase diagrams are useful in teaching and research. - Highlights: • A stereo 3D phase diagram database was constructed, including binary P-T-X, ternary, some quaternary, and real ternary systems. • The phase diagrams can be viewed with active-shutter, polarized, or anaglyph glasses. • Printed phase diagrams retain the 3D stereo effect when viewed with anaglyph glasses.
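
    The anaglyph display path mentioned above reduces to a simple channel merge: the left view supplies the red channel and the right view the green/blue channels, so red-cyan glasses route one view to each eye. A minimal sketch with assumed file names:

```python
# Build a red-cyan anaglyph from a rendered stereo pair (file names are
# hypothetical; OpenCV loads images in BGR channel order).
import cv2

left = cv2.imread("diagram_left.png")    # rendered left-eye view
right = cv2.imread("diagram_right.png")  # rendered right-eye view

anaglyph = right.copy()                  # green/blue come from the right eye
anaglyph[:, :, 2] = left[:, :, 2]        # red channel comes from the left eye
cv2.imwrite("diagram_anaglyph.png", anaglyph)
```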

  8. Defocus Deblurring and Superresolution for Time-of-Flight Depth Cameras

    KAUST Repository

    Xiao, Lei; Heide, Felix; O'Toole, Matthew; Kolb, Andreas; Hullin, Matthias B.; Kutulakos, Kyros; Heidrich, Wolfgang

    2015-01-01

    Continuous-wave time-of-flight (ToF) cameras show great promise as low-cost depth image sensors in mobile applications. However, they also suffer from several challenges, including limited illumination intensity, which mandates the use of large numerical aperture lenses, and thus results in a shallow depth of field, making it difficult to capture scenes with large variations in depth. Another shortcoming is the limited spatial resolution of currently available ToF sensors. In this paper we analyze the image formation model for blurred ToF images. By directly working with raw sensor measurements but regularizing the recovered depth and amplitude images, we are able to simultaneously deblur and super-resolve the output of ToF cameras. Our method outperforms existing methods on both synthetic and real datasets. In the future our algorithm should extend easily to cameras that do not follow the cosine model of continuous-wave sensors, as well as to multi-frequency or multi-phase imaging employed in more recent ToF cameras.

  9. A three-step vehicle detection framework for range estimation using a single camera

    CSIR Research Space (South Africa)

    Kanjee, R

    2015-12-01

    Full Text Available This paper proposes and validates a real-time on-road vehicle detection system which uses a single camera for the purpose of intelligent driver assistance. A three-step vehicle detection framework is presented to detect and track the target vehicle...

  10. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    Science.gov (United States)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.

  11. Creation Greenhouse Environment Map Using Localization of Edge of Cultivation Platforms Based on Stereo Vision

    Directory of Open Access Journals (Sweden)

    A Nasiri

    2017-10-01

    Full Text Available Introduction Stereo vision means the capability of extracting depth information from the analysis of two images of one scene taken from different angles. The result of stereo vision is a collection of three-dimensional points which describes the details of the scene with a density proportional to the resolution of the obtained images. Vehicle automatic steering and crop growth monitoring are two important operations in precision agriculture. The essential aspects of automated steering are the position and orientation of the agricultural equipment in relation to the crop row, detection of obstacles, and design of path planning between the crop rows. The developed map can provide this information in real time, and machine vision has the capabilities to perform these tasks in order to execute operations such as cultivation, spraying, and harvesting. In a greenhouse environment, it is possible to develop a map and perform automatic control by detecting and localizing the cultivation platforms as the main moving obstacle. The current work develops a stereo-vision-based method for detecting and localizing platforms and then providing a two-dimensional map of the cultivation platforms in the greenhouse environment. Materials and Methods In this research, two webcams made by Microsoft Corporation, with a resolution of 960×544, were connected to the computer via USB2 to form a parallel stereo camera. Due to the structure of the cultivation platforms, the number of points in the point cloud is decreased by extracting only the upper and lower edges of the platform. The proposed method extracts the edges based on depth-discontinuity features in the region of the platform edge. By obtaining the disparity image of the platform edges from the rectified stereo images and translating its data to 3D space, the point cloud model of the environment is constructed. Then by projecting the points to the XZ plane and putting local maps together
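
    A minimal sketch of that projection step under assumed camera parameters: edge-pixel disparities become 3-D points via Z = f·B/d, which are then flattened onto an XZ occupancy grid; the focal length, baseline, and sample pixels are illustrative values, not the paper's.

```python
# Turn edge-pixel disparities into 3-D points and flatten them onto the
# XZ ground plane as a 2-D occupancy map. All parameters are assumed.
import numpy as np

f, B, cx = 700.0, 0.12, 480.0            # focal (px), baseline (m), principal x
u = np.array([300, 310, 640, 650])       # edge pixel columns
d = np.array([24.0, 23.5, 12.0, 11.8])   # matched disparities (pixels)

Z = f * B / d                            # depth along the optical axis (m)
X = (u - cx) * Z / f                     # lateral offset (m)

grid = np.zeros((60, 60), dtype=int)     # 6 m x 6 m map with 0.1 m cells
ix = np.clip((X / 0.1 + 30).astype(int), 0, 59)
iz = np.clip((Z / 0.1).astype(int), 0, 59)
grid[iz, ix] += 1                        # occupancy counts for platform edges
```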

  12. RFID sensors as the common sensing platform for single-use biopharmaceutical manufacturing

    International Nuclear Information System (INIS)

    Potyrailo, Radislav A; Surman, Cheryl; Monk, David; Morris, William G; Wortley, Timothy; Vincent, Mark; Diana, Rafael; Pizzi, Vincent; Carter, Jeffrey; Gach, Gerard; Klensmeden, Staffan; Ehring, Hanno

    2011-01-01

    The lack of reliable single-use sensors prevents the biopharmaceutical industry from fully embracing single-use biomanufacturing processes. Sensors based on the same detection platform for all critical parameters in single-use bioprocess components would be highly desirable, as they would significantly simplify installation, calibration, and operation. We review here our approach to passive radio-frequency identification (RFID)-based sensing, which does not rely on costly proprietary RFID memory chips with an analog input but rather implements ubiquitous passive 13.56 MHz RFID tags as inductively coupled sensors with at least 16-bit resolution provided by a sensor reader. The developed RFID sensors combine several measured parameters from the resonant sensor antenna with multivariate data analysis and deliver the unique capability of multiparameter sensing and rejection of environmental interferences with a single sensor. This general sensing approach provides an elegant solution for both analytical measurement and identification and documentation of the measured location. (topical review)

  13. Nuclear Radiation Degradation Study on HD Camera Based on CMOS Image Sensor at Different Dose Rates.

    Science.gov (United States)

    Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang

    2018-02-08

    In this work, we irradiated a high-definition (HD) industrial camera based on a commercial-off-the-shelf (COTS) CMOS image sensor (CIS) with Cobalt-60 gamma-rays. All components of the camera under test were fabricated without radiation hardening, except for the lens. The irradiation experiments on the HD camera under biased conditions were carried out at 1.0, 10.0, 20.0, 50.0 and 100.0 Gy/h. During the experiments, we found that the tested camera showed remarkable degradation after irradiation, and that the degradation differed with dose rate. As the dose rate increased, images of the same target became brighter. At a given dose rate, the radiation effect in bright areas is lower than that in dark areas; across dose rates, the higher the dose rate, the worse the radiation effect in both bright and dark areas, and the greater the standard deviations of both. Furthermore, progressive degradation analysis of the captured images demonstrates that the attenuation of the signal-to-noise ratio (SNR) with radiation time is not obvious at a fixed dose rate, while the degradation becomes more serious with increasing dose rate; the rate of decrease of SNR at 20.0, 50.0 and 100.0 Gy/h is far greater than that at 1.0 and 10.0 Gy/h. Even so, we confirm that the HD industrial camera still works at 10.0 Gy/h over 8 h of measurements, with a moderate SNR decrease of 5 dB. This work can provide guidance for camera users in radiation fields.
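
    As a sketch of the region-wise metric behind such curves, the snippet below computes SNR = 20·log10(mean/std) over a region of interest; this definition and the stand-in frame are assumptions, since the paper does not spell out its exact estimator.

```python
# Per-region SNR estimate for a captured frame (assumed definition).
import numpy as np

def region_snr_db(img, rows, cols):
    """SNR in dB over an ROI, taking the std as the noise estimate."""
    roi = img[rows, cols].astype(float)
    return 20.0 * np.log10(roi.mean() / (roi.std() + 1e-12))

# Synthetic stand-in for one HD frame (mean level 120, noise sigma 5).
frame = np.random.default_rng(2).normal(120.0, 5.0, (1080, 1920))
print(region_snr_db(frame, slice(100, 300), slice(100, 300)))  # ~27.6 dB
```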

  14. Impact of New Camera Technologies on Discoveries in Cell Biology.

    Science.gov (United States)

    Stuurman, Nico; Vale, Ronald D

    2016-08-01

    New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, when he figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day by a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts in light microscopy. © 2016 Marine Biological Laboratory.

  15. Improved Stereo Matching With Boosting Method

    Directory of Open Access Journals (Sweden)

    Shiny B

    2015-06-01

    Full Text Available Abstract This paper presents a classification-based approach for improving the accuracy of stereo matching methods, proposed with occlusion handling in mind. The work employs classification of pixels to find erroneous disparity values. Because disparity maps are widely used in 3D television, medical imaging, and other applications, their accuracy is highly significant. An initial disparity map is obtained from the input stereo image pair using local or global stereo matching methods. Various features for classification are computed from the input stereo image pair and the obtained disparity map, and the resulting feature vectors are used to classify pixels with GentleBoost as the classification method. The erroneous disparity values found by classification are corrected through a completion (filling) stage. A performance evaluation of stereo matching using AdaBoostM1, RUSBoost, neural networks, and GentleBoost is performed.
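
    A toy sketch of the classification stage: per-pixel feature vectors are fed to a boosted classifier that flags disparities as erroneous. GentleBoost is not available in scikit-learn, so AdaBoost stands in here, and the features and labels are synthetic placeholders rather than the paper's feature set.

```python
# Boosted per-pixel classification of erroneous disparities (toy data).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(3)
# e.g. matching cost, cost margin, image gradient, left-right check flag, ...
features = rng.standard_normal((5000, 6))
labels = (features[:, 0] + 0.5 * rng.standard_normal(5000) > 0).astype(int)

clf = AdaBoostClassifier(n_estimators=100).fit(features, labels)
erroneous = clf.predict(features)   # 1 = disparity to be refilled downstream
print("flagged pixels:", int(erroneous.sum()))
```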

  16. Merged Shape from Shading and Shape from Stereo for Planetary Topographic Mapping

    Science.gov (United States)

    Tyler, Laurence; Cook, Tony; Barnes, Dave; Parr, Gerhard; Kirk, Randolph

    2014-05-01

    Digital Elevation Models (DEMs) of the Moon and Mars have traditionally been produced from stereo imagery taken from orbit, or from surface landers or rovers. One core component of image-based DEM generation is stereo matching to find correspondences between images taken from different viewpoints. Stereo matchers that rely mostly on textural features in the images can fail to find enough matched points in areas lacking contrast or surface texture, which can lead to blank or topographically noisy areas in the resulting DEMs; fine depth detail may also be lacking due to the limited precision and quantisation of the pixel matching process. Shape from shading (SFS), a two-dimensional version of photoclinometry, utilizes the properties of light reflecting off surfaces to build up localised slope maps, which can subsequently be combined to extract topography. This works especially well on homogeneous surfaces and can recover fine detail. However, the cartographic accuracy can be affected by changes in brightness due to differences in surface material, albedo, and light scattering properties, and also by the presence of shadows. We describe here experimental research for the Planetary Robotics Vision Data Exploitation EU FP7 project (PRoViDE) into using stereo-generated depth maps in conjunction with SFS to recover both coarse and fine detail of planetary surface DEMs. Our Large Deformation Optimisation Shape From Shading (LDOSFS) algorithm uses image data, illumination, viewing geometry, and camera parameters to produce a DEM; a stereo-derived depth map can be used as an initial seed if available. The software uses separate Bidirectional Reflectance Distribution Function (BRDF) and SFS modules for iterative processing and to make the code more portable for future development. Three BRDF models are currently implemented: Lambertian, Blinn-Phong, and Oren-Nayar. A version of the Hapke reflectance function, which is more appropriate for planetary surfaces, is under development.

  17. Stereo-tomography in triangulated models

    Science.gov (United States)

    Yang, Kai; Shao, Wei-Dong; Xing, Feng-yuan; Xiong, Kai

    2018-04-01

    Stereo-tomography is a distinctive tomographic method capable of estimating the scatterer position, the local dip of the scatterer, and the background velocity simultaneously. Building a geologically consistent velocity model is always appealing to applied and earthquake seismologists. Differing from previous work, which incorporated various regularization techniques into the cost function of stereo-tomography, we consider extending stereo-tomography to a triangulated model to be the most straightforward way to achieve this goal. In this paper, we provide all the Fréchet derivatives of the stereo-tomographic data components with respect to the model components for a slowness-squared triangulated model (or sloth model) in 2D Cartesian coordinates, based on ray perturbation theory for interfaces. A sloth model representation is sparser than a conventional B-spline representation, and a sparser model representation leads to a smaller stereo-tomographic (Fréchet) matrix, a higher-accuracy solution when solving the linear equations, a faster convergence rate, and a lower demand on the quantity of data. Moreover, a quantitative representation of the interface strengthens the relationships among different model components, which makes cross regularizations among these components, such as node coordinates, scatterer coordinates, and scattering angles, more straightforward and easier to implement. Sensitivity analysis, model resolution matrix analysis, and a series of synthetic data examples demonstrate the correctness of the Fréchet derivatives, the applicability of the regularization terms, and the robustness of stereo-tomography in a triangulated model. This provides a solid theoretical foundation for real applications in the future.

  18. A single sensor and single actuator approach to performance tailoring over a prescribed frequency band.

    Science.gov (United States)

    Wang, Jiqiang

    2016-03-01

    Restricted sensing and actuation control represents an important area of research that has been overlooked in most design methodologies. In many practical control engineering problems, the design must be implemented through a single sensor and a single actuator for multivariate performance variables. In this paper, a novel approach is proposed for solving the single-sensor, single-actuator control problem in which performance over any prescribed frequency band can also be tailored. The results are obtained for broad-band control design based on the formulation for discrete frequency control. It is shown that the single-sensor, single-actuator control problem over a frequency band can be cast as a Nevanlinna-Pick interpolation problem, and an optimal controller can then be obtained via convex optimization over LMIs. Remarkably, robustness issues can also be tackled in this framework. A numerical example of broad-band attenuation of rotor blade vibration is provided to illustrate the proposed design procedure. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
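
    To illustrate what "convex optimization over LMIs" looks like in practice, here is a small semidefinite feasibility problem (a Lyapunov stability certificate) posed with CVXPY; the system matrix is a made-up stable example, not the paper's controller synthesis.

```python
# LMI feasibility sketch: find P > 0 with A^T P + P A < 0 (Lyapunov).
import cvxpy as cp
import numpy as np

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])                 # assumed stable system matrix
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),        # P positive definite
               A.T @ P + P @ A << -eps * np.eye(2)]
cp.Problem(cp.Minimize(cp.trace(P)), constraints).solve()
print(P.value)                              # a valid Lyapunov certificate
```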

  19. Interactive stereo electron microscopy enhanced with virtual reality

    International Nuclear Information System (INIS)

    Bethel, E.Wes; Bastacky, S.Jacob; Schwartz, Kenneth S.

    2001-01-01

    An analytical system is presented that is used to take measurements of objects perceived in stereo image pairs obtained from a scanning electron microscope (SEM). Our system operates by presenting a single stereo view that contains stereo image data obtained from the SEM, along with geometric representations of two types of virtual measurement instruments, a "protractor" and a "caliper". The measurements obtained from this system are an integral part of a medical study evaluating surfactant, a liquid coating the inner surface of the lung which makes possible the process of breathing. Measurements of the curvature and contact angle of submicron diameter droplets of a fluorocarbon deposited on the surface of airways are performed in order to determine the surface tension of the air/liquid interface. This approach has been extended to a microscopic level from the techniques of traditional surface science by measuring submicrometer rather than millimeter diameter droplets, as well as the lengths and curvature of cilia responsible for movement of the surfactant, the airway's protective liquid blanket. An earlier implementation of this approach for taking angle measurements from objects perceived in stereo image pairs using a virtual protractor is extended in this paper to include distance measurements and to use a unified view model. The system is built around a unified view model that is derived from microscope-specific parameters, such as focal length, visible area and magnification. The unified view model ensures that the underlying view models and resultant binocular parallax cues are consistent between synthetic and acquired imagery. When the view models are consistent, it is possible to take measurements of features that are not constrained to lie within the projection plane. The system is first calibrated using non-clinical data of known size and resolution. Using the SEM, stereo image pairs of grids and spheres of known resolution are created to calibrate the

  20. Demonstration of the CDMA-mode CAOS smart camera.

    Science.gov (United States)

    Riza, Nabeel A; Mazhar, Mohsin A

    2017-12-11

    Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, the image data provided by this CMOS sensor are used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, a proof-of-concept visible-band CAOS smart camera is successfully demonstrated operating in the CDMA mode using Walsh-design CAOS pixel codes of up to 4096 bits at a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one micro-mirror, a square pixel 13.68 μm on a side. The CDMA mode of the CAOS smart camera is suited for applications where robust high-dynamic-range (DR) imaging is needed for un-attenuated, unspoiled, bright, spectrally diverse targets.
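
    A toy sketch of the CDMA readout principle: each CAOS pixel is time-modulated with a distinct Walsh code, the point detector records the summed time signal, and correlating with each code demultiplexes the per-pixel irradiances. The code length and pixel values below are small stand-ins for the demonstrated 4096-bit codes.

```python
# Walsh-code multiplexing and correlation recovery (toy dimensions).
import numpy as np
from scipy.linalg import hadamard

n_codes = 64                      # toy code length (paper: up to 4096 bits)
H = hadamard(n_codes)             # rows are mutually orthogonal Walsh codes
pixels = np.array([5.0, 1.0, 8.0, 3.0])   # true irradiance of 4 CAOS pixels
codes = H[1:1 + pixels.size]      # skip the all-ones row

detector = codes.T @ pixels       # point-detector signal: one sample per bit
recovered = (codes @ detector) / n_codes   # time-domain correlation DSP
print(recovered)                  # ~= pixels, by row orthogonality
```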

  1. APPLICATIONS OF ACTION CAM SENSORS IN THE ARCHAEOLOGICAL YARD

    Directory of Open Access Journals (Sweden)

    M. Pepe

    2018-05-01

    Full Text Available In recent years, special digital cameras called “action cameras” or “action cams” have become popular due to their low price, small size, light weight, robustness, and capacity to record videos and photos even in extreme environmental conditions. Indeed, these cameras have been designed mainly to capture sports action and to work in the presence of dirt and bumps, underwater, and at different external temperatures. High-resolution digital single-lens reflex (DSLR) cameras are usually preferred in the photogrammetric field. Indeed, beyond sensor resolution, the combination of such cameras with fixed, low-distortion lenses is preferred for accurate 3D measurement; in contrast, action cameras have small wide-angle lenses, with lower performance in terms of sensor resolution, lens quality, and distortion. However, given their ability to acquire images under conditions that may prove difficult for standard DSLR cameras, and their lower price, action cameras can be considered an interesting option for documenting the state of an archaeological excavation. In this paper, the influence of lens radial distortion and chromatic aberration on this type of camera in self-calibration mode is investigated and discussed, and their application in the field of Cultural Heritage is evaluated. Using a suitable technique, it has been possible to improve the accuracy of the 3D model obtained from action cam images. Case studies show the quality and utility of this type of sensor in the survey of archaeological artefacts.

  2. Robust stereo matching with trinary cross color census and triple image-based refinements

    Science.gov (United States)

    Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr

    2017-12-01

    For future 3D TV broadcasting systems and navigation applications, accurate stereo matching that can precisely estimate the depth map from two spaced cameras is necessary. In this paper, we first propose a trinary cross color (TCC) census transform, which helps achieve an accurate raw disparity matching cost at low computational cost. A two-pass cost aggregation (TPCA) is formed to compute the aggregated cost, and the disparity map is then obtained by a range winner-take-all (RWTA) process and a white-hole-filling procedure. To further enhance accuracy, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Image-based refinements for the mismatched and occluded pixels are then applied to correct the classified errors. Finally, image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves considerably accurate disparity maps with reasonable computation cost.
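
    For background, the sketch below implements the classic binary census transform and its Hamming-distance matching cost; the trinary cross color variant above extends this idea with a three-level comparison applied per color channel, which is not reproduced here.

```python
# Classic binary census transform and Hamming matching cost (grayscale).
import numpy as np

def census(img, r=2):
    """Binary census signature of each pixel over a (2r+1)^2 window."""
    h, w = img.shape
    sig = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            sig = (sig << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return sig

def hamming_cost(sig_l, sig_r):
    """Per-pixel matching cost between two census-transformed images."""
    x = np.bitwise_xor(sig_l, sig_r)
    return np.array([bin(v).count("1") for v in x.ravel()]).reshape(x.shape)

# Sanity check: a pair related by a pure 3-px shift matches perfectly
# once compensated (np.roll wraps, so the match is exact here).
L = np.random.default_rng(4).integers(0, 255, (32, 32))
R = np.roll(L, 3, axis=1)
print(hamming_cost(census(L), census(np.roll(R, -3, axis=1))).max())  # 0
```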

  3. Development of a teaching system for an industrial robot using stereo vision

    Science.gov (United States)

    Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki

    1997-12-01

    The teach-and-playback method is the main teaching technique for industrial robots; however, it takes time and effort. In this study, a new teaching algorithm using stereo vision, based on human demonstrations in front of two cameras, is proposed. In the proposed algorithm, a robot is controlled repetitively according to angles determined by fuzzy set theory until it reaches an instructed teaching point, which is relayed through the cameras by an operator. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibration is needed, because fuzzy set theory, which can express the control commands to the robot qualitatively, is used instead of conventional kinematic equations. Thus, a simple and easy teaching operation is realized with this algorithm. Simulations and experiments have been performed on the proposed teaching system, and test data have confirmed the usefulness of our design.

  4. Physical assessment of the GE/CGR Neurocam and comparison with a single rotating gamma-camera

    International Nuclear Information System (INIS)

    Kouris, K.; Jarritt, P.H.; Costa, D.C.; Ell, P.J.

    1992-01-01

    The GE/CGR Neurocam is a triple-headed single photon emission tomography (SPET) system dedicated to multi-slice brain tomography. We have assessed its physical performance in terms of sensitivity and resolution, and its clinical efficacy in comparison with a modern, single, rotating gamma-camera (GE 400XCT). Using a water-filled cylinder containing Tc-99m, the tomographic volume sensitivity of the Neurocam was 30.0 and 50.7 kcps/MBq.ml.cm for the high-resolution and general-purpose collimators, respectively; the corresponding values for the single rotating camera were 7.6 and 12.8 kcps/MBq.ml.cm. Tomographic resolution was measured in air and in water. In air, the Neurocam resolution at the centre of the field-of-view is 9.0 and 10.7 mm full width at half-maximum (FWHM) with the two collimators, respectively, and is isotropic in the three orthogonal planes; the resolution of the GE 400XCT with its 13-cm radius of rotation is 10.3 and 11.7 mm, respectively. For the Neurocam with the HR collimator, the transaxial FWHM values in water were 9.7 mm at the centre and 9.5 mm radial (6.6 mm tangential) at 8 cm from the centre. The physical characteristics of the Neurocam enable the routine acquisition of brain perfusion data with Tc-99m hexamethyl-propylene amine oxime in about 14 min, yielding better image quality than with a single rotating camera in 40 min. (orig./HP)

  5. Precise measurement of a subpicosecond electron single bunch by the femtosecond streak camera

    International Nuclear Information System (INIS)

    Uesaka, M.; Ueda, T.; Kozawa, T.; Kobayashi, T.

    1998-01-01

    Precise measurement of a subpicosecond electron single bunch by the femtosecond streak camera is presented. The subpicosecond electron single bunch of energy 35 MeV was generated by the achromatic magnetic pulse compressor at the S-band linear accelerator of the Nuclear Engineering Research Laboratory (NERL), University of Tokyo. The electric charge per bunch is 0.5 nC, and the horizontal and vertical beam sizes are 3.3 and 5.5 mm (full width at half maximum; FWHM), respectively. The pulse shape of the electron single bunch is measured via Cherenkov radiation emitted in air by the femtosecond streak camera. Optical parameters of the optical measurement system were optimized based on extensive experiments and numerical analysis in order to achieve a subpicosecond time resolution. Using the optimized optical measurement system, the subpicosecond pulse shape, its variation with rf phase in the accelerating tube, the jitter of the total system and the correlation between measured streak images and calculated longitudinal phase space distributions were precisely evaluated. This measurement system is going to be utilized in several subpicosecond analyses in radiation physics and chemistry. (orig.)

  6. Age is highly associated with stereo blindness among surgeons

    DEFF Research Database (Denmark)

    Fergo, Charlotte; Burcharth, Jakob; Pommergaard, Hans-Christian

    2016-01-01

    BACKGROUND: The prevalence of stereo blindness in the general population varies greatly within a range of 1-30 %. Stereo vision adds an extra dimension to aid depth perception and gives a binocular advantage in task completion. Lack of depth perception may lower surgical performance, potentially...... and stereo tested by the use of the Random Dot E stereo test. Upon stereo testing, a demographic questionnaire was completed. Multivariate logistic regression analysis was employed to assess the association between stereo blindness and the variables resulting from the univariate analysis. RESULTS: Three...

  7. Single nucleotide polymorphism (SNP) detection on a magnetoresistive sensor

    DEFF Research Database (Denmark)

    Rizzi, Giovanni; Østerberg, Frederik Westergaard; Dufva, Martin

    2013-01-01

    We present a magnetoresistive sensor platform for hybridization assays and demonstrate its applicability on single nucleotide polymorphism (SNP) genotyping. The sensor relies on anisotropic magnetoresistance in a new geometry with a local negative reference and uses the magnetic field from...... the sensor bias current to magnetize magnetic beads in the vicinity of the sensor. The method allows for real-time measurements of the specific bead binding to the sensor surface during DNA hybridization and washing. Compared to other magnetic biosensing platforms, our approach eliminates the need...... for external electromagnets and thus allows for miniaturization of the sensor platform....

  8. Nuclear Radiation Degradation Study on HD Camera Based on CMOS Image Sensor at Different Dose Rates

    Directory of Open Access Journals (Sweden)

    Congzheng Wang

    2018-02-01

    Full Text Available In this work, we irradiated a high-definition (HD) industrial camera based on a commercial-off-the-shelf (COTS) CMOS image sensor (CIS) with Cobalt-60 gamma-rays. All components of the camera under test were fabricated without radiation hardening, except for the lens. The irradiation experiments on the biased HD camera were carried out at 1.0, 10.0, 20.0, 50.0 and 100.0 Gy/h. During the experiments, we found that the tested camera showed remarkable degradation after irradiation, and that the degradation differed with dose rate. As the dose rate increases, images of the same target become brighter. At a given dose rate, the radiation effect in bright areas is weaker than in dark areas, and the higher the dose rate, the stronger the radiation effect in both bright and dark areas and the larger their standard deviations. Furthermore, progressive degradation analysis of the captured images demonstrates that the attenuation of the signal-to-noise ratio (SNR) with radiation time is not obvious at a fixed dose rate, while the degradation becomes more and more serious with increasing dose rate. Additionally, the decrease rate of the SNR at 20.0, 50.0 and 100.0 Gy/h is far greater than that at 1.0 and 10.0 Gy/h. Even so, we confirm that the HD industrial camera still works at 10.0 Gy/h during 8 h of measurements, with a moderate decrease of the SNR (5 dB). The work is valuable and can provide guidance for camera users in radiation fields.
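
    The abstract does not state the exact SNR estimator used; the snippet below is one common choice, the patch mean over patch standard deviation in dB, which supports the kind of bright-area/dark-area tracking described.

      import numpy as np

      def patch_snr_db(frame, y0, y1, x0, x1):
          """SNR of a nominally uniform patch: 20*log10(mean/std), in dB."""
          patch = frame[y0:y1, x0:x1].astype(np.float64)
          return 20.0 * np.log10(patch.mean() / patch.std())

      # Degradation tracking: evaluate the same bright and dark patches on
      # frames captured at increasing accumulated dose and plot SNR vs. time.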

  9. THE EXAMPLE OF USING THE XIAOMI CAMERAS IN INVENTORY OF MONUMENTAL OBJECTS - FIRST RESULTS

    Directory of Open Access Journals (Sweden)

    J. S. Markiewicz

    2017-11-01

    Full Text Available At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. Today, photogrammetry is becoming more and more popular and is becoming the standard of documentation in many projects involving the recording of all possible spatial data on landscape, architecture, or even single objects. Low-cost sensors allow for the creation of reliable and accurate three-dimensional models of investigated objects. This paper presents the results of a comparison between the outcomes obtained using three image sources: low-cost Xiaomi cameras, a full-frame camera (Canon 5D Mark II) and a middle-frame camera (Hasselblad-Hd4). In order to check how the results obtained from the sensors differ, the following parameters were analysed: the accuracy of the orientation of the ground-level photos on the control and check points, the distribution of the estimated distortion in the self-calibration process, the flatness of the walls, and the discrepancies between point clouds from the low-cost cameras and reference data. The results presented below are the outcome of co-operation between researchers from three institutions: the Systems Research Institute PAS, the Department of Geodesy and Cartography at the Warsaw University of Technology and the National Museum in Warsaw.

  10. The Example of Using the Xiaomi Cameras in Inventory of Monumental Objects - First Results

    Science.gov (United States)

    Markiewicz, J. S.; Łapiński, S.; Bienkowski, R.; Kaliszewska, A.

    2017-11-01

    At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. Today, photogrammetry is becoming more and more popular and is becoming the standard of documentation in many projects involving the recording of all possible spatial data on landscape, architecture, or even single objects. Low-cost sensors allow for the creation of reliable and accurate three-dimensional models of investigated objects. This paper presents the results of a comparison between the outcomes obtained using three image sources: low-cost Xiaomi cameras, a full-frame camera (Canon 5D Mark II) and a middle-frame camera (Hasselblad-Hd4). In order to check how the results obtained from the sensors differ, the following parameters were analysed: the accuracy of the orientation of the ground-level photos on the control and check points, the distribution of the estimated distortion in the self-calibration process, the flatness of the walls, and the discrepancies between point clouds from the low-cost cameras and reference data. The results presented below are the outcome of co-operation between researchers from three institutions: the Systems Research Institute PAS, the Department of Geodesy and Cartography at the Warsaw University of Technology and the National Museum in Warsaw.

  11. Range-Measuring Video Sensors

    Science.gov (United States)

    Howard, Richard T.; Briscoe, Jeri M.; Corder, Eric L.; Broderick, David

    2006-01-01

    Optoelectronic sensors of a proposed type would perform the functions of both electronic cameras and triangulation-type laser range finders. That is to say, these sensors would both (1) generate ordinary video or snapshot digital images and (2) measure the distances to selected spots in the images. These sensors would be well suited to use on robots that are required to measure distances to targets in their work spaces. In addition, these sensors could be used for all the purposes for which electronic cameras have been used heretofore. The simplest sensor of this type, illustrated schematically in the upper part of the figure, would include a laser, an electronic camera (either video or snapshot), a frame-grabber/image-capturing circuit, an image-data-storage memory circuit, and an image-data processor. There would be no moving parts. The laser would be positioned at a lateral distance d to one side of the camera and would be aimed parallel to the optical axis of the camera. When the range of a target in the field of view of the camera was required, the laser would be turned on and an image of the target would be stored and preprocessed to locate the angle (a) between the optical axis and the line of sight to the centroid of the laser spot.
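
    From this geometry the range follows directly as range = d / tan(a). A minimal sketch, assuming a pinhole model in which the angle is recovered from the spot centroid's pixel offset and the focal length in pixels (all values illustrative):

      import math

      def laser_range(d_m, spot_px, cx_px, focal_px):
          """Triangulation range finder: the laser is offset a lateral
          distance d from the camera and aimed parallel to the optical axis,
          so range = d / tan(a), where a is the angle between the optical
          axis and the line of sight to the laser-spot centroid."""
          a = math.atan((spot_px - cx_px) / focal_px)   # pinhole model
          return d_m / math.tan(a)

      # Example: 10 cm baseline, centroid 40 px off-axis, 800 px focal length
      print(laser_range(0.10, 840.0, 800.0, 800.0))     # -> 2.0 m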

  12. Remote control video cameras on a suborbital rocket

    International Nuclear Information System (INIS)

    Wessling, Francis C.

    1997-01-01

    Three video cameras were controlled in real time from the ground during a fifteen-minute suborbital rocket flight from White Sands Missile Range in New Mexico. Telemetry communications with the rocket allowed control of the cameras. The pan, tilt, zoom, focus, and iris of two of the camera lenses, the power and record functions of the three cameras, and the selection of the analog video signal to be sent to the ground were controlled by separate microprocessors. A microprocessor was also used to record data from three miniature accelerometers, temperature sensors and a differential pressure sensor. In addition to the selected video signal sent to the ground and recorded there, the video signals from the three cameras were also recorded on board the rocket. These recorders were mounted inside the pressurized segment of the rocket payload. The lenses, lens control mechanisms, and the three small television cameras were located in a portion of the rocket payload that was exposed to the vacuum of space. The accelerometers were also exposed to the vacuum of space.

  13. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    International Nuclear Information System (INIS)

    Anderson, Robert J.

    2014-01-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  14. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  15. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Robotic and Security Systems Dept.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  16. A phase-based stereo vision system-on-a-chip.

    Science.gov (United States)

    Díaz, Javier; Ros, Eduardo; Sabatini, Silvio P; Solari, Fabio; Mota, Sonia

    2007-02-01

    A simple and fast technique for depth estimation based on phase measurement has been adopted for the implementation of a real-time stereo system with sub-pixel resolution on an FPGA device. The technique avoids the attendant problem of phase wrapping. The designed system takes full advantage of the inherent processing parallelism and segmentation capabilities of FPGA devices to achieve a computation speed of 65 megapixels/s, which can be arranged with a customized frame-grabber module to process 211 frames/s at a size of 640×480 pixels. The processing speed achieved is higher than conventional camera frame rates, thus allowing the system to extract multiple estimations and be used as a platform to evaluate integration schemes of a population of neurons without increasing hardware resource demands.
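
    The FPGA pipeline itself is not reproduced here; as a point of reference, the sketch below shows the classical phase-difference disparity estimate that such systems build on: filter each scanline with a complex Gabor filter and divide the wrapped left-right phase difference by the filter's peak frequency. The values of omega and sigma are illustrative.

      import numpy as np

      def gabor_phase(row, omega=0.25, sigma=8.0):
          """Per-pixel phase of the complex Gabor response of one scanline."""
          x = np.arange(-3 * sigma, 3 * sigma + 1)
          kern = np.exp(1j * omega * x) * np.exp(-x**2 / (2 * sigma**2))
          resp = np.convolve(row.astype(np.float64), kern, mode='same')
          return np.angle(resp)

      def phase_disparity(left_row, right_row, omega=0.25):
          """Disparity ~ phase difference / peak frequency, valid while the
          difference stays inside (-pi, pi], i.e. below the wrapping limit."""
          dphi = gabor_phase(left_row, omega) - gabor_phase(right_row, omega)
          dphi = np.angle(np.exp(1j * dphi))       # wrap into (-pi, pi]
          return dphi / omega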

  17. When Dijkstra Meets Vanishing Point: A Stereo Vision Approach for Road Detection.

    Science.gov (United States)

    Zhang, Yigong; Su, Yingna; Yang, Jian; Ponce, Jean; Kong, Hui

    2018-05-01

    In this paper, we propose a vanishing-point constrained Dijkstra road model for road detection in a stereo-vision paradigm. First, the stereo camera is used to generate the u- and v-disparity maps of the road image, from which the horizon can be extracted. With the horizon and ground region constraints, we can robustly locate the vanishing point of the road region. Second, a weighted graph is constructed using all pixels of the image, and the detected vanishing point is treated as the source node of the graph. By computing a vanishing-point constrained Dijkstra minimum-cost map, where both the disparity and the gradient of the gray image are used to calculate the cost between two neighboring pixels, the problem of detecting road borders in the image is transformed into that of finding two shortest paths that originate from the vanishing point and end at two pixels in the last row of the image. The proposed approach has been implemented and tested on 2600 grayscale images of different road scenes in the KITTI data set. The experimental results demonstrate that this training-free approach can detect the horizon, vanishing point, and road regions very accurately and robustly, and it achieves promising performance.
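
    A minimal sketch of the seeded minimum-cost map at the heart of this approach; the disparity-plus-gradient edge weights are abstracted into a supplied nonnegative per-pixel cost image, and 4-connectivity is assumed.

      import heapq
      import numpy as np

      def dijkstra_cost_map(cost, source):
          """Minimum cumulative cost from one source pixel (the vanishing
          point) to every pixel. cost[y, x] >= 0 is the per-pixel transition
          cost (e.g. a blend of disparity and gray-gradient terms)."""
          h, w = cost.shape
          dist = np.full((h, w), np.inf)
          dist[source] = 0.0
          heap = [(0.0, source)]
          while heap:
              d, (y, x) = heapq.heappop(heap)
              if d > dist[y, x]:
                  continue                      # stale heap entry
              for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                  if 0 <= ny < h and 0 <= nx < w:
                      nd = d + cost[ny, nx]
                      if nd < dist[ny, nx]:
                          dist[ny, nx] = nd
                          heapq.heappush(heap, (nd, (ny, nx)))
          return dist   # road borders: backtrack from two last-row pixels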

  18. Stereo vision enhances the learning of a catching skill

    NARCIS (Netherlands)

    Mazyn, L.; Lenoir, M.; Montagne, G.; Delaey, C; Savelsbergh, G.J.P.

    2007-01-01

    The aim of this study was to investigate the contribution of stereo vision to the acquisition of a natural interception task. Poor catchers with good (N = 8; Stereo+) and weak (N = 6; Stereo-) stereo vision participated in an intensive training program spread over 2 weeks, during which they caught

  19. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    Science.gov (United States)

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  20. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    Science.gov (United States)

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-01-01

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments. PMID:27649178

  1. Bathymetric Structure from Motion Photogrammetry: Extracting stream bathymetry from multi-view stereo photogrammetry

    Science.gov (United States)

    Dietrich, J. T.

    2016-12-01

    Stream bathymetry is a critical variable in a number of river science applications. In larger rivers, bathymetry can be measured with instruments such as sonar (single or multi-beam), bathymetric airborne LiDAR, or acoustic Doppler current profilers. However, in smaller streams with depths less than 2 meters, bathymetry is one of the more difficult variables to map at high resolution. Optical remote sensing techniques offer several potential solutions for collecting high-resolution bathymetry. In this research, I focus on direct photogrammetric measurements of bathymetry using multi-view stereo photogrammetry, specifically Structure from Motion (SfM). The main barrier to accurate bathymetric mapping with any photogrammetric technique is correcting for the refraction of light as it passes between the two different media (air and water), which causes water depths to appear shallower than they are. I propose and test an iterative approach that calculates a series of refraction correction equations for every point/camera combination in an SfM point cloud. This new method is meant to address shortcomings of other correction techniques and works within the current preferred method for SfM data collection, oblique and highly convergent photographs. The multi-camera refraction correction presented here produces bathymetric datasets with accuracies of 0.02% of the flying height and precisions of 0.1% of the flying height. This methodology, like many fluvial remote sensing methods, will only work under ideal conditions (e.g. clear water), but it provides an additional tool for collecting high-resolution bathymetric datasets for a variety of river, coastal, and estuary systems.
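
    The iterative per-point, per-camera correction is not spelled out in the abstract; the sketch below shows only the underlying physics it generalizes, apparent depth understated by refraction at the air-water interface, in both small-angle and single-ray Snell forms. The refractive index is an assumed value for clear water.

      import math

      N_WATER = 1.337   # refractive index of clear water (assumed)

      def depth_small_angle(apparent_depth):
          """Near-nadir approximation: true depth = n * apparent depth."""
          return N_WATER * apparent_depth

      def depth_per_ray(apparent_depth, incidence_deg):
          """Per-ray correction for one camera via Snell's law at the
          air-water interface: h_true = h_app * tan(t_air) / tan(t_water)."""
          t_air = math.radians(incidence_deg)
          if t_air < 1e-6:                       # avoid 0/0 at nadir
              return depth_small_angle(apparent_depth)
          t_water = math.asin(math.sin(t_air) / N_WATER)
          return apparent_depth * math.tan(t_air) / math.tan(t_water)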

  2. Exploring Digital Surface Models from Nine Different Sensors for Forest Monitoring and Change Detection

    Directory of Open Access Journals (Sweden)

    Jiaojiao Tian

    2017-03-01

    Full Text Available Digital surface models (DSMs derived from spaceborne and airborne sensors enable the monitoring of the vertical structures for forests in large areas. Nevertheless, due to the lack of an objective performance assessment for this task, it is difficult to select the most appropriate data source for DSM generation. In order to fill this gap, this paper performs change detection analysis including forest decrease and tree growth. The accuracy of the DSMs is evaluated by comparison with measured tree heights from inventory plots (field data. In addition, the DSMs are compared with LiDAR data to perform a pixel-wise quality assessment. DSMs from four different satellite stereo sensors (ALOS/PRISM, Cartosat-1, RapidEye and WorldView-2, one satellite InSAR sensor (TanDEM-X, two aerial stereo camera systems (HRSC and UltraCam and two airborne laser scanning datasets with different point densities are adopted for the comparison. The case study is a complex central European temperate forest close to Traunstein in Bavaria, Germany. As a major experimental result, the quality of the DSM is found to be robust to variations in image resolution, especially when the forest density is high. The forest decrease results confirm that besides aerial photogrammetry data, very high resolution satellite data, such as WorldView-2, can deliver results with comparable quality as the ones derived from LiDAR, followed by TanDEM-X and Cartosat DSMs. The quality of the DSMs derived from ALOS and Rapid-Eye data is lower, but the main changes are still correctly highlighted. Moreover, the vertical tree growth and their relationship with tree height are analyzed. The major tree height in the study site is between 15 and 30 m and the periodic annual increments (PAIs are in the range of 0.30–0.50 m.

  3. Laying the foundation to use Raspberry Pi 3 V2 camera module imagery for scientific and engineering purposes

    Science.gov (United States)

    Pagnutti, Mary; Ryan, Robert E.; Cazenavette, George; Gold, Maxwell; Harlan, Ryan; Leggett, Edward; Pagnutti, James

    2017-01-01

    A comprehensive radiometric characterization of raw-data format imagery acquired with the Raspberry Pi 3 and V2.1 camera module is presented. The Raspberry Pi is a high-performance single-board computer designed to educate and solve real-world problems. This small computer supports a camera module that uses a Sony IMX219 8 megapixel CMOS sensor. This paper shows that scientific and engineering-grade imagery can be produced with the Raspberry Pi 3 and its V2.1 camera module. Raw imagery is shown to be linear with exposure and gain (ISO), which is essential for scientific and engineering applications. Dark frame, noise, and exposure stability assessments along with flat fielding results, spectral response measurements, and absolute radiometric calibration results are described. This low-cost imaging sensor, when calibrated to produce scientific quality data, can be used in computer vision, biophotonics, remote sensing, astronomy, high dynamic range imaging, and security applications, to name a few.
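
    As a hedged illustration of the linearity assessment described, the snippet below fits the mean dark-subtracted raw signal from a uniform patch against exposure time; the numbers are made-up stand-ins, not the paper's data.

      import numpy as np

      # Hypothetical: mean raw DN of a uniform patch at each exposure time (s),
      # dark frame already subtracted, fixed gain (ISO).
      exposure_s = np.array([0.001, 0.002, 0.004, 0.008, 0.016])
      mean_dn    = np.array([31.5,  63.2,  126.8, 252.9, 506.1])

      slope, intercept = np.polyfit(exposure_s, mean_dn, 1)
      residual = mean_dn - (slope * exposure_s + intercept)
      print(f"gain {slope:.1f} DN/s, offset {intercept:.2f} DN")
      print("nonlinearity, % of full scale:",
            100 * np.abs(residual).max() / mean_dn.max())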

  4. INVESTIGATING THE SUITABILITY OF MIRRORLESS CAMERAS IN TERRESTRIAL PHOTOGRAMMETRIC APPLICATIONS

    Directory of Open Access Journals (Sweden)

    A. H. Incekara

    2017-11-01

    Full Text Available Digital single-lens reflex (DSLR) cameras, commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between these two camera types is the presence of the mirror mechanism, which means that the incoming beam towards the lens reaches the sensor in a different way. In this study, two different digital cameras, one with a mirror (Nikon D700) and the other without a mirror (Sony a6000), were used for a close-range photogrammetric application on the rock surface at Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with both cameras was compared using the differences between field and model coordinates obtained after the alignment of the photographs. In addition, cross sections were created on the 3D models for both data sources, and the maximum area difference between them is quite small because they are almost overlapping. The mirrored camera proved more self-consistent with respect to the change of model coordinates for models created from photographs taken at different times with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and point clouds produced using photographs obtained from these cameras, can be used for terrestrial photogrammetric studies.

  5. Investigating the Suitability of Mirrorless Cameras in Terrestrial Photogrammetric Applications

    Science.gov (United States)

    Incekara, A. H.; Seker, D. Z.; Delen, A.; Acar, A.

    2017-11-01

    Digital single-lens reflex (DSLR) cameras, commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between these two camera types is the presence of the mirror mechanism, which means that the incoming beam towards the lens reaches the sensor in a different way. In this study, two different digital cameras, one with a mirror (Nikon D700) and the other without a mirror (Sony a6000), were used for a close-range photogrammetric application on the rock surface at Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with both cameras was compared using the differences between field and model coordinates obtained after the alignment of the photographs. In addition, cross sections were created on the 3D models for both data sources, and the maximum area difference between them is quite small because they are almost overlapping. The mirrored camera proved more self-consistent with respect to the change of model coordinates for models created from photographs taken at different times with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and point clouds produced using photographs obtained from these cameras, can be used for terrestrial photogrammetric studies.

  6. Multi-Level Wavelet Shannon Entropy-Based Method for Single-Sensor Fault Location

    Directory of Open Access Journals (Sweden)

    Qiaoning Yang

    2015-10-01

    Full Text Available In actual applications, sensors are prone to failure because of harsh environments, battery drain, and sensor aging. Sensor fault location is an important step for follow-up sensor fault detection. In this paper, two new multi-level wavelet Shannon entropies (multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy) are defined. They take full advantage of the sensor fault frequency distribution and energy distribution across multiple subbands in the wavelet domain. Based on the multi-level wavelet Shannon entropy, a method is proposed for single-sensor fault location. The method first uses a criterion of maximum energy-to-Shannon-entropy ratio to select the appropriate wavelet base for signal analysis. Then the multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy are used to locate the fault. The method is validated using practical chemical gas concentration data from a gas sensor array. Compared with wavelet time Shannon entropy and wavelet energy Shannon entropy, the experimental results demonstrate that the proposed method can achieve accurate location of a single sensor fault and has good anti-noise ability. The proposed method is feasible and effective for single-sensor fault location.
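
    The multi-level time and time-energy entropies are the paper's own definitions; the sketch below shows the standard building blocks they extend, subband Shannon entropy and the energy-to-Shannon-entropy ratio used for wavelet-base selection, using the PyWavelets package. Wavelet names and the decomposition level are illustrative.

      import numpy as np
      import pywt   # PyWavelets

      def wavelet_shannon_entropy(signal, wavelet='db4', level=4):
          """Shannon entropy of the energy distribution across subbands."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          energies = np.array([np.sum(c**2) for c in coeffs])
          p = energies / energies.sum()
          return -np.sum(p * np.log(p + 1e-12))

      def energy_entropy_ratio(signal, wavelet='db4', level=4):
          """Base-selection criterion: total detail energy divided by the
          Shannon entropy of the detail coefficients (larger is better)."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          detail = np.concatenate(coeffs[1:])
          energy = np.sum(detail**2)
          p = detail**2 / (energy + 1e-12)
          return energy / (-np.sum(p * np.log(p + 1e-12)))

      # best_base = max(['db2', 'db4', 'sym5', 'coif3'],
      #                 key=lambda wname: energy_entropy_ratio(x, wname))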

  7. Video content analysis on body-worn cameras for retrospective investigation

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Haar, F.B. ter; Eendebak, P.T.; Hollander, R.J.M. den; Burghouts, G.J.; Wijn, R.; Broek, S.P. van den; Rest, J.H.C. van

    2015-01-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications.

  8. Camera processing with chromatic aberration.

    Science.gov (United States)

    Korneliussen, Jan Tore; Hirakawa, Keigo

    2014-10-01

    Since the refractive index of the materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on the image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive, and they do not offer a perfect correction. In this paper, we propose a new postcapture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.
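
    The aberration-tolerant demosaicking is not reproduced here; the sketch below illustrates only the simplest ingredient of lateral (magnification) correction: radially rescaling the red and blue channels about the image centre to match the green channel. The scale factors are placeholders that would come from a per-lens calibration, and defocus blur is left untouched.

      import cv2
      import numpy as np

      def correct_lateral_ca(bgr, scale_b=1.002, scale_r=0.998):
          """First-order lateral chromatic aberration correction: rescale
          the blue and red channels about the image centre so that their
          magnification matches the green channel."""
          h, w = bgr.shape[:2]
          centre = (w / 2.0, h / 2.0)
          out = bgr.copy()
          for ch, s in ((0, scale_b), (2, scale_r)):        # OpenCV is BGR
              M = cv2.getRotationMatrix2D(centre, 0.0, s)   # pure scaling
              out[:, :, ch] = cv2.warpAffine(bgr[:, :, ch], M, (w, h),
                                             flags=cv2.INTER_LINEAR)
          return out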

  9. Temperature measurement with industrial color camera devices

    Science.gov (United States)

    Schmidradler, Dieter J.; Berndorfer, Thomas; van Dyck, Walter; Pretschuh, Juergen

    1999-05-01

    This paper discusses color-camera based temperature measurement. Usually, visual imaging and infrared image sensing are treated as two separate disciplines. We will show that a well-selected color camera device can be a cheaper, more robust and more sophisticated solution for optical temperature measurement in several cases. Herein, only implementation fragments and important restrictions for the sensing element will be discussed. Our aim is to draw the reader's attention to the use of visual image sensors for measuring thermal radiation and temperature, and to give reasons for the need for improved technologies for infrared camera devices. With our industrial partner AVL List, we successfully used the proposed sensor to perform temperature measurements of flames inside the combustion chamber of diesel engines, which finally led to the presented insights.
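
    The paper gives only implementation fragments; as a hedged illustration of how a color camera can act as a pyrometer at all, the sketch below implements classical two-color ratio pyrometry under the Wien approximation, where a gray-body emissivity cancels out of the channel ratio. The effective wavelengths stand in for properly calibrated channel responses.

      import numpy as np

      C2 = 1.4388e-2   # second radiation constant, m*K

      def two_color_temperature(i1, i2, lam1=600e-9, lam2=450e-9):
          """Ratio pyrometry (Wien approximation): temperature from the
          intensity ratio of two channels with effective wavelengths
          lam1 > lam2, assuming the emissivity is the same in both."""
          ratio = i1 / i2
          return (C2 * (1.0/lam1 - 1.0/lam2)
                  / (5.0 * np.log(lam2/lam1) - np.log(ratio)))

      # e.g. two_color_temperature(12.9, 1.0) -> ~2000 K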

  10. Parallel Computer System for 3D Visualization Stereo on GPU

    Science.gov (United States)

    Al-Oraiqat, Anas M.; Zori, Sergii A.

    2018-03-01

    This paper proposes the organization of a parallel computer system based on Graphics Processing Units (GPUs) for 3D stereo image synthesis. The development is based on the modified ray tracing method developed by the authors for fast search of tracing-ray intersections with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. A generalized procedure for 3D stereo image synthesis on Graphics Processing Units/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed GPU implementation is compared with single-threaded and multithreaded implementations on the CPU. The achieved average acceleration over the single-threaded implementation is about 7.5 times on the test GPU and 1.6 times for the multithreaded CPU implementation. Studying the influence of the size and configuration of the computational Compute Unified Device Architecture (CUDA) grid on the computational speed shows the importance of their correct selection. The reported experimental estimates can be significantly improved by new GPUs with a larger number of processing cores and multiprocessors, as well as an optimized configuration of the CUDA computing grid.

  11. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    Science.gov (United States)

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM, Flash, etc., through IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes, up to a maximum of 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency in real time. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385
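
    The core computation is easy to state in software terms. Below is a minimal NumPy/SciPy sketch of SAD block matching with winner-take-all selection, using the 5 × 5 window and 64-disparity search range quoted above; the FPGA version would replace the 2D filtering with running row and column sums.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def sad_disparity(left, right, window=5, max_disp=64):
          """Dense disparity by SAD block matching: for every pixel, pick
          the disparity whose window-summed absolute difference is smallest
          (winner-take-all)."""
          h, w = left.shape
          cost = np.full((max_disp, h, w), np.inf)
          for d in range(max_disp):
              ad = np.abs(left[:, d:].astype(np.float64) -
                          right[:, :w - d].astype(np.float64))
              # uniform_filter gives the windowed mean; scaling to a sum
              # would not change the argmin below
              cost[d, :, d:] = uniform_filter(ad, size=window, mode='nearest')
          return np.argmin(cost, axis=0).astype(np.uint8)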

  12. Beagle 2: Seeking the signatures of life on Mars

    OpenAIRE

    Gibson Jr., E. K.; Pillinger, Colin T.; Wright, Ian P.; Morse, Andy; Stewart, Jenny; Morgan, G.; Praine, Ian; Leigh, Dennis; Sims, Mark R.; Pullan, Derek

    2003-01-01

    ESA's Beagle 2 lander will land on Mars to search for signatures of present and past life. A Gas Analysis Package (GAP) with a mass spectrometer, an XRF spectrometer, a Mössbauer spectrometer, stereo cameras, a microscope, environmental sensors, a rock corer/grinder, and a Mole attachment are on the lander.

  13. Predicting Long-Range Traversability from Short-Range Stereo-Derived Geometry

    Science.gov (United States)

    Turmon, Michael; Tang, Benyang; Howard, Andrew; Bajracharya, Max

    2010-01-01

    This program uses close-range 3D terrain analysis to produce training data sufficient to estimate, based only on appearance in imagery, the traversability of terrain beyond 3D sensing range. This approach is called learning from stereo (LFS). In effect, the software transfers knowledge from middle distances, where 3D geometry provides training cues, into the far field, where only appearance is available. This is a viable approach because the same obstacle classes, and sometimes the same obstacles, are typically present in the mid field and the far field. Learning thus extends the effective look-ahead distance of the sensors.
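
    Neither the features nor the classifier are specified in this summary; the toy sketch below, with placeholder random data and a scikit-learn logistic regression, illustrates only the self-supervised pattern: geometry labels the near field, and appearance extrapolates to the far field.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical arrays: appearance features (e.g. colour/texture) for
      # image patches, with traversability labels derived from stereo
      # geometry -- available only inside the mid-field stereo range.
      X_near = np.random.rand(500, 8)             # patches with 3D support
      y_near = (X_near[:, 0] > 0.5).astype(int)   # stand-in geometric labels
      X_far  = np.random.rand(200, 8)             # patches beyond stereo range

      clf = LogisticRegression().fit(X_near, y_near)   # train on self-labels
      far_traversable = clf.predict(X_far)       # extrapolate by appearance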

  14. The iQID camera: An ionizing-radiation quantum imaging detector

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Brian W., E-mail: brian.miller@pnnl.gov [Pacific Northwest National Laboratory, Richland, WA 99352 (United States); College of Optical Sciences, The University of Arizona, Tucson, AZ 85719 (United States); Gregory, Stephanie J.; Fuller, Erin S. [Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Barrett, Harrison H.; Bradford Barber, H.; Furenlid, Lars R. [Center for Gamma-Ray Imaging, The University of Arizona, Tucson, AZ 85719 (United States); College of Optical Sciences, The University of Arizona, Tucson, AZ 85719 (United States)

    2014-12-11

    We have developed and tested a novel ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation, including alpha, neutron, beta, and fission fragment particles. The confirmed response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. The spatial location and energy of individual particles are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, excellent detection efficiency for charged particles, and high spatial resolution (tens of microns). Although modest, the iQID's energy resolution is sufficient to discriminate between particle types. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is real-time, single-particle digital autoradiography. We present the latest results and discuss potential applications.

  15. A new omni-directional multi-camera system for high resolution surveillance

    Science.gov (United States)

    Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2014-05-01

    Omni-directional high-resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose were based on a parabolic mirror or a fisheye lens, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to that of a single image sensor. Recently, the Panoptic camera approach, which mimics the eyes of flying insects using multiple imagers, has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high-resolution visible spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over 17,700×4,650 pixels (82.3 MP). Real-time video capturing capability is also verified at 30 fps for a resolution over 9,000×2,400 pixels (21.6 MP). The next-generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large-perimeter object tracking, very-high-resolution depth map estimation and high dynamic-range imaging, which are beyond standard stitching and panorama generation methods.

  16. Automatic Detection and Reproduction of Natural Head Position in Stereo-Photogrammetry.

    Science.gov (United States)

    Hsung, Tai-Chiu; Lo, John; Li, Tik-Shun; Cheung, Lim-Kwong

    2015-01-01

    The aim of this study was to develop an automatic orientation calibration and reproduction method for recording the natural head position (NHP) in stereo-photogrammetry (SP). A board was used as the physical reference carrier for true verticals and for the orientation of the NHP alignment mirror. Orientation axes were detected and saved from the digital mesh model of the board. They were used for correcting the pitch, roll and yaw angles of subsequent captures of patients' facial surfaces, which were obtained without any markings or sensors attached to the patient. We tested the proposed method on two commercial SP devices, one active (3dMD) and one passive (DI3D). The reliability of the pitch, roll and yaw of the board placement was within ±0.039904°, ±0.081623°, and ±0.062320°, with standard deviations of 0.020234°, 0.045645° and 0.027211°, respectively. Orientation-calibrated stereo-photogrammetry is the most accurate method (angulation deviation within ±0.1°) reported for complete NHP recording, with clinically insignificant error.

  17. Rover's Wheel Churns Up Bright Martian Soil (Stereo)

    Science.gov (United States)

    2009-01-01

    NASA's Mars Exploration Rover Spirit acquired this mosaic on the mission's 1,202nd Martian day, or sol (May 21, 2007), while investigating the area east of the elevated plateau known as 'Home Plate' in the 'Columbia Hills.' The mosaic shows an area of disturbed soil, nicknamed 'Gertrude Weise' by scientists, made by Spirit's stuck right front wheel. The trench exposed a patch of nearly pure silica, with the composition of opal. It could have come from either a hot-spring environment or an environment called a fumarole, in which acidic, volcanic steam rises through cracks. Either way, its formation involved water, and on Earth, both of these types of settings teem with microbial life. Multiple images taken with Spirit's panoramic camera are combined here into a stereo view that appears three-dimensional when seen through red-blue glasses, with the red lens on the left.

  18. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    Science.gov (United States)

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light-mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data that were recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera with that of traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.

  19. Performance benefits and limitations of a camera network

    Science.gov (United States)

    Carr, Peter; Thomas, Paul J.; Hornsey, Richard

    2005-06-01

    Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.

  20. SINGLE IMAGE CAMERA CALIBRATION IN CLOSE RANGE PHOTOGRAMMETRY FOR SOLDER JOINT ANALYSIS

    Directory of Open Access Journals (Sweden)

    D. Heinemann

    2016-06-01

    Full Text Available Printed Circuit Boards (PCBs) play an important role in the manufacturing of electronic devices. To ensure correct functioning of the PCBs, a certain amount of solder paste is needed during the placement of components. The aim of the current research is to develop a real-time, closed-loop solution for the analysis of the printing process in which solder is printed onto PCBs. Close-range photogrammetry allows for the determination of the solder volume and a subsequent correction if necessary. Photogrammetry is an image-based method for three-dimensional reconstruction from two-dimensional image data of an object. A precise camera calibration is indispensable for an accurate reconstruction. In our particular application it is not possible to use calibration methods with two-dimensional calibration targets. Therefore a special calibration target was developed and manufactured, which allows for single-image camera calibration.
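
    The special target's geometry is not described in the abstract; as a hedged illustration of why a 3D target enables single-image calibration at all, the sketch below estimates the full 3x4 projection matrix from one view of known non-coplanar points via the classical Direct Linear Transform. Intrinsics and distortion would then be recovered by decomposing P, a step omitted here.

      import numpy as np

      def dlt_projection(X3d, x2d):
          """Estimate the 3x4 projection matrix P from a single image of a
          known 3D (non-coplanar) target. X3d: (n, 3) target coordinates,
          x2d: (n, 2) measured image points, n >= 6."""
          rows = []
          for (X, Y, Z), (u, v) in zip(X3d, x2d):
              rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
              rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
          _, _, vt = np.linalg.svd(np.asarray(rows, dtype=np.float64))
          P = vt[-1].reshape(3, 4)           # null-space solution
          # Reprojection check on the calibration points themselves
          Xh = np.hstack([X3d, np.ones((len(X3d), 1))])
          proj = (P @ Xh.T).T
          proj = proj[:, :2] / proj[:, 2:3]
          return P, np.linalg.norm(proj - x2d, axis=1).mean()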

  1. Hearing damage by personal stereo

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez, Rodrigo Pizarro; Reuter, Karen

    2006-01-01

    The technological development within personal stereo systems, such as MP3 players, iPods etc., has changed music listening habits from home entertainment to everyday and everywhere use. The technology has developed considerably since the introduction of CD walkmen, and high-level low-distortion music is produced by minimal devices. In this paper, the existing literature on effects of personal stereo systems is reviewed, incl. studies of exposure levels and effects on hearing. Generally, it is found that the levels being used are of concern, which one study [Acustica/Acta Acustica, 82 (1996) 885-894] demonstrates to relate to the specific use in situations with high levels of background noise. Another study [Med. J. Austr., 1998; 169: 588-592] demonstrates that the effect of personal stereo is comparable to that of being exposed to noise in industry. The results are discussed in view...

  2. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities, including a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment; none of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This sensor will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  3. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W7-X stellarator; it consists of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented for continuous and for triggered readout. Hardware-level trigger signals will be generated by real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit sampled 1280 x 1024 pixel frames at 444 fps, which amounts to about 1.43 terabytes over half an hour. Analysing such a huge amount of data is time consuming and has a high computational complexity. We plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates all the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three different modules with two interfaces: a sensor module (SM) with reduced hardware and functional elements to reach a small and compact size and robust operation in a harmful environment; an image processing and control unit (IPCU) module that handles all the user-predefined events and runs image processing algorithms to generate trigger signals; and, finally, a 10 Gigabit Ethernet compatible image readout card that functions as the network interface for the PC. In this contribution all the concepts of EDICAM and the functions of the distinct modules are described.
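
    The quoted data volume follows from simple arithmetic, as the short check below confirms (the 1.43 figure corresponds to binary terabytes, TiB):

      # Sanity check of the quoted data volume for one EDICAM channel
      pixels  = 1280 * 1024          # full resolution
      bits    = 12                   # per sample
      fps     = 444
      seconds = 30 * 60              # half an hour
      total_B = pixels * bits / 8 * fps * seconds
      print(total_B / 2**40)         # ~1.43 TiB, matching the abstract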

  4. Multi-MGy Radiation Hardened Camera for Nuclear Facilities

    International Nuclear Information System (INIS)

    Girard, Sylvain; Boukenter, Aziz; Ouerdane, Youcef; Goiffon, Vincent; Corbiere, Franck; Rolando, Sebastien; Molina, Romain; Estribeau, Magali; Avon, Barbara; Magnan, Pierre; Paillet, Philippe; Duhamel, Olivier; Gaillardin, Marc; Raine, Melanie

    2015-01-01

    There is an increasing interest in developing cameras for surveillance systems to monitor nuclear facilities or nuclear waste storages. In particular, for today's and next-generation nuclear facilities, the increased safety requirements following the Fukushima Daiichi disaster have to be considered. For some applications, the radiation tolerance needs to reach doses in the MGy(SiO2) range, whereas the most tolerant commercial or prototype products based on solid-state image sensors withstand doses up to a few kGy. The objective of this work is to present the radiation-hardening strategy developed by our research groups to enhance the tolerance to ionizing radiation of the various subparts of these imaging systems by working simultaneously at the component and system design levels. Developing a radiation-hardened camera requires combining several radiation-hardening strategies. In our case, we decided not to use the simplest one, the shielding approach. This approach is efficient but limits the camera miniaturization and is not compatible with its future integration in remote-handling or robotic systems. The hardening-by-component strategy therefore appears mandatory to avoid the failure of one of the camera subparts at doses lower than the MGy. Concerning the image sensor itself, the chosen technology is a CMOS Image Sensor (CIS) designed by the ISAE team with custom pixel designs used to mitigate the total ionizing dose (TID) effects that occur well below the MGy range in classical image sensors (e.g. Charge Coupled Devices (CCD), Charge Injection Devices (CID) and classical Active Pixel Sensors (APS)), such as the complete loss of functionality, the dark current increase and the gain drop. We will present at the conference a comparative study of the radiation responses of these radiation-hardened pixels and conventional ones, demonstrating the efficiency of the choices made. The targeted strategy to develop the complete radiation hard camera

  5. Multi-MGy Radiation Hardened Camera for Nuclear Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Girard, Sylvain; Boukenter, Aziz; Ouerdane, Youcef [Universite de Saint-Etienne, Lab. Hubert Curien, UMR-CNRS 5516, F-42000 Saint-Etienne (France); Goiffon, Vincent; Corbiere, Franck; Rolando, Sebastien; Molina, Romain; Estribeau, Magali; Avon, Barbara; Magnan, Pierre [ISAE, Universite de Toulouse, F-31055 Toulouse (France); Paillet, Philippe; Duhamel, Olivier; Gaillardin, Marc; Raine, Melanie [CEA, DAM, DIF, F-91297 Arpajon (France)

    2015-07-01

    There is an increasing interest in developing cameras for surveillance systems to monitor nuclear facilities or nuclear waste storage sites. In particular, for today's and the next generation of nuclear facilities, the increased safety requirements following the Fukushima Daiichi disaster have to be considered. For some applications, radiation tolerance needs to exceed doses in the MGy(SiO2) range, whereas the most tolerant commercial or prototype products based on solid-state image sensors withstand doses of up to a few kGy. The objective of this work is to present the radiation hardening strategy developed by our research groups to enhance the tolerance to ionizing radiation of the various subparts of these imaging systems by working simultaneously at the component and system design levels. Developing a radiation-hardened camera implies combining several radiation-hardening strategies. In our case, we decided not to use the simplest one, the shielding approach. This approach is efficient but limits camera miniaturization and is not compatible with future integration into remote-handling or robotic systems. The hardening-by-component strategy therefore appears mandatory to avoid the failure of one of the camera subparts at doses lower than the MGy. Concerning the image sensor itself, the chosen technology is a CMOS Image Sensor (CIS) designed by the ISAE team, with custom pixel designs used to mitigate the total ionizing dose (TID) effects that occur well below the MGy range in classical image sensors (e.g. Charge Coupled Devices (CCD), Charge Injection Devices (CID) and classical Active Pixel Sensors (APS)), such as the complete loss of functionality, the dark current increase and the gain drop. We will present at the conference a comparative study of the radiation responses of these radiation-hardened pixels with respect to conventional ones, demonstrating the efficiency of the choices made. The targeted strategy to develop the complete radiation hard camera

  6. Distributed Detection with Collisions in a Random, Single-Hop Wireless Sensor Network

    Science.gov (United States)

    2013-05-26

    Approved for public release; distribution is unlimited. Distributed detection with collisions in a random, single-hop wireless sensor network. Gene T. Whipps (U.S. Army Research Laboratory, Adelphi, MD 20783, and The Ohio State University), Emre Ertin and Randolph L. Moses (The Ohio State University). We consider the problem of...

  7. Single-Crystal Sapphire Optical Fiber Sensor Instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Pickrell, Gary [Virginia Polytechnic Inst. & State Univ., Blacksburg, VA (United States); Scott, Brian [Virginia Polytechnic Inst. & State Univ., Blacksburg, VA (United States); Wang, Anbo [Virginia Polytechnic Inst. & State Univ., Blacksburg, VA (United States); Yu, Zhihao [Virginia Polytechnic Inst. & State Univ., Blacksburg, VA (United States)

    2013-12-31

    This report summarizes technical progress on the program “Single-Crystal Sapphire Optical Fiber Sensor Instrumentation,” funded by the National Energy Technology Laboratory of the U.S. Department of Energy and performed by the Center for Photonics Technology of the Bradley Department of Electrical and Computer Engineering at Virginia Tech. This project was completed in three phases, each with a separate focus. Phase I of the program, from October 1999 to April 2002, was devoted to the development of sensing schemes for use in high-temperature, harsh environments. Different sensing designs were proposed and tested in the laboratory. Phase II of the program, from April 2002 to April 2009, focused on bringing the sensor technologies, which had already been successfully demonstrated in the laboratory, to a level where the sensors could be deployed in harsh industrial environments and eventually become commercially viable, through a series of field tests. In Phase II a new sensing scheme was also developed and tested, with numerous advantages over all previous ones. Phase III of the program, September 2009 to December 2013, focused on developing the new sensing scheme for field testing, in conjunction with materials engineering to improve sensor packaging lifetimes. In Phase I, three different sensing principles were studied: sapphire air-gap extrinsic Fabry-Perot sensors; intensity-based polarimetric sensors; and broadband polarimetric sensors. Black-body radiation tests and corrosion tests were also performed in this phase. The outcome of the first phase of this program was the selection of broadband polarimetric differential interferometry (BPDI) for further prototype instrumentation development. This approach is based on the measurement of the optical path difference (OPD) between two orthogonally polarized light beams in a single-crystal sapphire disk. At the beginning of Phase II, in June 2004, the BPDI sensor was tested at the Wabash River coal gasifier

  8. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors.

    Science.gov (United States)

    Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung

    2017-05-08

    Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours, when there is outside light; human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in an indoor environment or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk nighttime human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.

  9. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Jong Hyun Kim

    2017-05-01

    Full Text Available Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours, when there is outside light; human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in an indoor environment or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk nighttime human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.

  10. An Inexpensive Method for Kinematic Calibration of a Parallel Robot by Using One Hand-Held Camera as Main Sensor

    Directory of Open Access Journals (Sweden)

    Ricardo Carelli

    2013-08-01

    Full Text Available This paper presents a novel method for the calibration of a parallel robot, which allows a more accurate configuration than one based on nominal parameters. A camera installed in the robot hand is used as the main sensor; it determines the relative position of the robot with respect to a spherical object fixed in the working area of the robot. The positions of the end effector are related to the incremental positions of the resolvers of the robot motors. A kinematic model of the robot is used to find a new group of parameters which minimizes the errors in the kinematic equations, as illustrated in the sketch below. Additionally, properties of the spherical object and intrinsic camera parameters are utilized to model the projection of the object in the image and thereby improve spatial measurements. Finally, several working tests, both static and tracking, are executed in order to verify how the behaviour of the robotic system improves by using calibrated parameters instead of nominal parameters. It should be emphasized that the proposed method uses neither external nor expensive sensors, which makes it useful in teaching and research activities.
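
    To illustrate the calibration idea, the sketch below fits kinematic parameters by nonlinear least squares so that the model's predicted end-effector positions match camera-measured ones. The two-link model and all numbers are toy stand-ins, not the paper's actual parallel-robot equations:

    ```python
    # Toy illustration only: fit kinematic parameters so the model's predicted
    # end-effector positions match positions measured by the hand-mounted camera.
    import numpy as np
    from scipy.optimize import least_squares

    def forward_kinematics(params, q):
        # Hypothetical 2-DOF stand-in for the robot's kinematic model.
        l1, l2 = params
        return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                         l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

    def residuals(params, joints, measured):
        # Errors in the kinematic equations over all calibration poses.
        return np.concatenate([forward_kinematics(params, q) - m
                               for q, m in zip(joints, measured)])

    rng = np.random.default_rng(0)
    true_params = np.array([0.52, 0.33])      # "real" robot
    nominal_params = np.array([0.50, 0.30])   # nominal design values
    joints = rng.uniform(0.0, np.pi / 2, size=(20, 2))
    measured = [forward_kinematics(true_params, q) + rng.normal(0.0, 1e-4, 2)
                for q in joints]              # camera-derived positions

    fit = least_squares(residuals, nominal_params, args=(joints, measured))
    print(fit.x)  # close to [0.52, 0.33]: calibrated beats nominal
    ```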

  11. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    Science.gov (United States)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to carry out lunar surface sampling and to return the samples to the Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems which are installed on a camera rotating platform. Optical images of the sampling area can be obtained by PCAM in the form of two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images. The lunar terrain can then be reconstructed based on photogrammetry. Installation parameters of PCAM with respect to the CE-5 lander are critical for the calculation of the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied for this work. Research contents such as the observation program and specific solution methods for the installation parameters are then introduced. The parametric solution accuracy is analyzed according to observations obtained in the PCAM scientific validation experiment, which is used to test the authenticity of the PCAM detection process, ground data processing methods, product quality and so on. Analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images within 1 pixel. The measurement methods and parameter accuracy studied in this paper therefore meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  12. Large area CMOS image sensors

    International Nuclear Information System (INIS)

    Turchetta, R; Guerrini, N; Sedgwick, I

    2011-01-01

    CMOS image sensors, also known as CMOS Active Pixel Sensors (APS) or Monolithic Active Pixel Sensors (MAPS), are today the dominant imaging devices. They are omnipresent in our daily life, as image sensors in cellular phones, web cams, digital cameras, ... In these applications, the pixels can be very small, in the micron range, and the sensors themselves tend to be limited in size. However, many scientific applications, like particle or X-ray detection, require large format, often with large pixels, as well as other specific performance, like low noise, radiation hardness or very fast readout. The sensors are also required to be sensitive to a broad spectrum of radiation: photons from the silicon cut-off in the IR down to UV and X- and gamma-rays through the visible spectrum as well as charged particles. This requirement calls for modifications to the substrate to be introduced to provide optimized sensitivity. This paper will review existing CMOS image sensors, whose size can be as large as a single CMOS wafer, and analyse the technical requirements and specific challenges of large format CMOS image sensors.

  13. Stereo Matching Based On Election Campaign Algorithm

    Directory of Open Access Journals (Sweden)

    Xie Qing Hua

    2016-01-01

    Full Text Available Stereo matching is one of the significant problems in the study of computer vision. By obtaining distance information through pixels, it is possible to reproduce a three-dimensional stereo scene. In this paper, edges are the primitives for matching; the grey values of the edges and the magnitude and direction of the edge gradient are taken as the properties of the edge feature points. According to the constraints for stereo matching, an energy function is built, and the election campaign optimization algorithm is applied to find the route that minimizes the energy function during the stereo matching process. Experimental results show that this algorithm is more stable and can obtain matching results with better accuracy.

  14. MAPPING THE SURROUNDINGS AS A REQUIREMENT FOR AUTONOMOUS DRIVING

    Directory of Open Access Journals (Sweden)

    M. Steininger

    2016-11-01

    Full Text Available Motivated by the hype around driverless cars and the challenges of sensor integration and data processing, this paper presents a model for using an Xbox One Microsoft Kinect stereo camera as the sensor for mapping the surroundings. Today, the recognition of the environment of the car is mostly done by a mix of sensors like LiDAR, RADAR and cameras. In the case of the outdoor delivery challenge Robotour 2016, with model cars at scale 1:5, it is our goal to solve the task with one camera only. To this end, a three-stage approach was developed. The test results show that our approach can detect and locate objects at a range of up to eight meters in order to incorporate them as barriers in the navigation process.

  15. Theory and applications of smart cameras

    CERN Document Server

    2016-01-01

    This book presents an overview of smart camera systems, considering practical applications but also reviewing fundamental aspects of the underlying technology.  It introduces in a tutorial style the principles of sensing and signal processing, and also describes topics such as wireless connection to the Internet of Things (IoT), which is expected to be the biggest market for smart cameras. It is an excellent guide to the fundamentals of smart camera technology, and the chapters complement each other well, as the authors have worked as a team under the auspices of the GFP (Global Frontier Project), the largest-scale funded research in Korea.  This is the third of three books based on the Integrated Smart Sensors research project, which describe the development of innovative devices, circuits, and system-level enabling technologies.  The aim of the project was to develop common platforms on which various devices and sensors can be loaded, and to create systems offering significant improvements in information processi...

  16. Kinder, gentler stereo

    Science.gov (United States)

    Siegel, Mel; Tobinaga, Yoshikazu; Akiya, Takeo

    1999-05-01

    Not only binocular perspective disparity, but also many secondary binocular and monocular sensory phenomena, contribute to the human sensation of depth. Binocular perspective disparity is notable as the strongest depth perception factor. However, means for creating it artificially from flat image pairs are notorious for inducing physical and mental stresses, e.g., 'virtual reality sickness'. Aiming to deliver a less stressful 'kinder, gentler stereo' (KGS), we systematically examine the secondary phenomena and their synergistic combination with each other and with binocular perspective disparity. By KGS we mean a stereo capture, rendering, and display paradigm without cue conflicts, without eyewear, without viewing zones, with negligible 'lock-in' time to perceive the image in depth, and with a normal appearance for stereo-deficient viewers. To achieve KGS we employ optical and digital image processing steps that introduce distortions contrary to the strict 'geometrical correctness' of binocular perspective but which nevertheless result in increased stereoscopic viewing comfort. We particularly exploit the lower limits of interocular separation, showing that unexpectedly small disparities stimulate accurate and pleasant depth sensations. Under these circumstances crosstalk is perceived as depth-of-focus rather than as ghosting. This suggests the possibility of radically new approaches to stereoview multiplexing that enable zoneless autostereoscopic display.

  17. Global stereo matching algorithm based on disparity range estimation

    Science.gov (United States)

    Li, Jing; Zhao, Hong; Gu, Feifei

    2017-09-01

    The global stereo matching algorithms are of high accuracy for the estimation of disparity maps, but the time consumed in the optimization process remains a challenge, especially for image pairs with high resolution and a large baseline setting. To improve the computational efficiency of the global algorithms, a disparity range estimation scheme for global stereo matching is proposed to estimate the disparity map of rectified stereo images in this paper. The projective geometry of a parallel binocular stereo setup is investigated to reveal a relationship between the two disparities at each pixel of rectified stereo images with different baselines; this can be used to quickly predict the disparity map for a long-baseline setting from the one estimated in a short-baseline setting. The drastically reduced disparity range at each pixel under the long-baseline setting can then be determined from the predicted disparity map, as in the sketch below. Furthermore, the disparity range estimation scheme is introduced into graph cuts with expansion moves to estimate the precise disparity map, which greatly reduces the cost of computation without loss of accuracy in the stereo matching, especially for dense global stereo matching, compared to the traditional algorithm. Experimental results with the Middlebury stereo datasets are presented to demonstrate the validity and efficiency of the proposed algorithm.
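
    The baseline relationship the scheme relies on can be stated compactly: for rectified parallel stereo, disparity d = f·B/Z, so a disparity map estimated under a short baseline predicts the disparity under a longer baseline by simple scaling. A minimal sketch (the names and the search margin are illustrative, not from the paper):

    ```python
    # Rectified parallel stereo: d = f * B / Z, so scaling the baseline scales
    # the disparity. Predict per-pixel search ranges for the long baseline.
    import numpy as np

    def predicted_range(d_short, b_short, b_long, margin=2.0):
        d_long = d_short * (b_long / b_short)      # predicted disparity map
        return d_long - margin, d_long + margin    # reduced search interval

    d_short = np.array([[4.0, 5.0],                # short-baseline disparities (px)
                        [6.0, 7.5]])
    d_min, d_max = predicted_range(d_short, b_short=0.05, b_long=0.20)
    # The global matcher then searches only d in [d_min, d_max] at each pixel.
    print(d_min, d_max)
    ```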

  18. Single-Camera-Based Method for Step Length Symmetry Measurement in Unconstrained Elderly Home Monitoring.

    Science.gov (United States)

    Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan

    2017-11-01

    Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along specified routes, which limits application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry, without a significant relationship with age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. According to projective geometry theory, we first develop a calculation formula of the step length ratio for the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints, and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). The performance is much better than a well-established monocular gait measurement system suitable only for side-view gaits, which has a MAPE of 3.5538%. Independently of walking directions, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable to elders' daily gait monitoring, providing valuable information for elderly health care, such as abnormal gait recognition, fall risk assessment, etc.
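
    For reference, the error metric quoted above is the mean absolute percentage error. A minimal sketch with illustrative numbers (not the paper's data):

    ```python
    # MAPE between estimated and ground-truth step length ratios.
    import numpy as np

    def mape(estimated, truth):
        estimated, truth = np.asarray(estimated), np.asarray(truth)
        return 100.0 * np.mean(np.abs((estimated - truth) / truth))

    # Illustrative numbers only, not the paper's data:
    print(f"{mape([0.98, 1.03, 1.01], [1.00, 1.00, 1.00]):.4f}%")  # 2.0000%
    ```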

  19. Stereo-vision and 3D reconstruction for nuclear mobile robots

    International Nuclear Information System (INIS)

    Lecoeur-Taibi, I.; Vacherand, F.; Rivallin, P.

    1991-01-01

    In order to perceive the geometric structure of the surrounding environment of a mobile robot, a 3D reconstruction system has been developed. Its main purpose is to provide geometric information to an operator who has to telepilot the vehicle in a nuclear power plant. The perception system is split into two parts: the vision part and the map building part. Vision is enhanced with a fusion process that rejects bad samples over space and time. The vision is based on trinocular stereo-vision, which provides a range image of the image contours. It performs line contour correlation on horizontal image pairs and vertical image pairs. The results are then spatially fused in order to obtain one distance image, with a quality independent of the orientation of the contour. The 3D reconstruction is based on grid-based sensor fusion. As the robot moves and perceives its environment, distance data is accumulated onto a regular square grid, taking into account the uncertainty of the sensor through a sensor measurement statistical model. This approach allows both spatial and temporal fusion. Uncertainty due to sensor position and robot position is also integrated into the absolute local map. This system is modular and generic and can integrate a 2D laser range finder and active vision. (author)
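
    A minimal sketch of the grid-based fusion idea described above, using the common log-odds occupancy formulation (the sensor-model increments are assumed for illustration; the paper's statistical model may differ):

    ```python
    # Log-odds occupancy grid: each range measurement makes traversed cells
    # more likely free and the cell at the measured range more likely occupied.
    import numpy as np

    grid = np.zeros((100, 100))       # log-odds, 0 = unknown
    L_OCC, L_FREE = 0.85, -0.4        # assumed sensor-model increments

    def integrate_range(grid, cells_on_ray, hit_cell):
        for c in cells_on_ray:        # cells the ray passed through
            grid[c] += L_FREE
        grid[hit_cell] += L_OCC       # cell where the range reading landed

    integrate_range(grid, [(50, j) for j in range(10, 40)], hit_cell=(50, 40))
    prob = 1.0 - 1.0 / (1.0 + np.exp(grid))   # back to occupancy probabilities
    ```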

  20. Application of stereo-imaging technology to medical field.

    Science.gov (United States)

    Nam, Kyoung Won; Park, Jeongyun; Kim, In Young; Kim, Kwang Gi

    2012-09-01

    There has been continuous development in the area of stereoscopic medical imaging devices, and many stereoscopic imaging devices have been realized and applied in the medical field. In this article, we review past and current trends pertaining to the application of stereo-imaging technologies in the medical field. We describe the basic principles of stereo vision and the visual issues related to it, including visual discomfort, binocular disparities, vergence-accommodation mismatch, and visual fatigue. We also present a brief history of medical applications of stereo-imaging techniques, examples of recently developed stereoscopic medical devices, and patent application trends as they pertain to stereo-imaging medical devices. Three-dimensional (3D) stereo-imaging technology can provide more realistic depth perception to the viewer than conventional two-dimensional imaging technology. Therefore, it allows for a more accurate understanding and analysis of the morphology of an object. Based on these advantages, the significance of stereoscopic imaging in the medical field increases in accordance with the increase in the number of laparoscopic surgeries, and stereo-imaging technology plays a key role in the diagnosis of the detailed morphologies of small biological specimens. The application of 3D stereo-imaging technology to the medical field will help improve surgical accuracy, reduce operation times, and enhance patient safety. Therefore, it is important to develop more enhanced stereoscopic medical devices.

  1. GOOSE: Semantic search on Internet connected sensors

    NARCIS (Netherlands)

    Schutte, K.; Bomhof, F.W.; Burghouts, G.J.; Diggelen, J. van; Hiemstra, P.; Hof, J. van 't; Kraaij, W.; Pasman, K.H.W.; Smith, A.J.E.; Versloot, C.A.; Wit, J.J. de

    2013-01-01

    More and more sensors are getting Internet connected. Examples are cameras on cell phones, CCTV cameras for traffic control as well as dedicated security and defense sensor systems. Due to the steadily increasing data volume, human exploitation of all this sensor data is impossible for effective

  2. Real-time stop sign detection and distance estimation using a single camera

    Science.gov (United States)

    Wang, Wenpeng; Su, Yuxuan; Cheng, Ming

    2018-04-01

    In the modern world, the drastic development of driver assistance systems has made driving a lot easier than before. In order to increase safety onboard, a method is proposed to detect STOP signs and estimate their distance using a single camera. For STOP sign detection, an LBP-cascade classifier is applied to identify the sign in the image, and distance estimation is based on the principle of pinhole imaging. A road test was conducted using a detection system built with a CMOS camera and software developed in Python with the OpenCV library. Results show that the proposed system reaches a detection accuracy of at most 97.6% at 10 m and at least 95.00% at 20 m, with a maximum error of 5% in distance estimation. The results indicate that the system is effective and has the potential to be used in both autonomous driving and advanced driver assistance systems.
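
    The pinhole relation used for ranging is Z ≈ f·H/h, where f is the focal length in pixels, H the physical sign height and h the detected bounding-box height in pixels. A minimal sketch with assumed values (not the paper's calibration):

    ```python
    # Pinhole ranging: Z = f * H / h (all values below are assumptions).
    FOCAL_PX = 1400.0       # calibrated focal length in pixels
    SIGN_HEIGHT_M = 0.75    # physical STOP sign height in metres

    def distance_m(bbox_height_px):
        return FOCAL_PX * SIGN_HEIGHT_M / bbox_height_px

    print(f"{distance_m(105):.1f} m")  # a 105 px tall detection -> ~10 m
    ```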

  3. Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera

    Science.gov (United States)

    Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.

    2017-10-01

    Multispectral imaging is a widely used remote sensing technique, whose applications range from agriculture to environmental monitoring, from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera, called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near infrared parts of the electromagnetic spectrum. Two versions are available, characterised by different sets of band-pass filters, inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the developed procedures for the geometric calibration and radiometric correction are presented in the paper.

  4. Applications of iQID cameras

    Science.gov (United States)

    Han, Ling; Miller, Brian W.; Barrett, Harrison H.; Barber, H. Bradford; Furenlid, Lars R.

    2017-09-01

    iQID is an intensified quantum imaging detector developed in the Center for Gamma-Ray Imaging (CGRI). Originally called BazookaSPECT, iQID was designed for high-resolution gamma-ray imaging and preclinical gamma-ray single-photon emission computed tomography (SPECT). With the use of a columnar scintillator, an image intensifier and modern CCD/CMOS sensors, iQID cameras feature outstanding intrinsic spatial resolution. In recent years, many advances have been achieved that greatly boost the performance of iQID, broadening its applications to cover nuclear and particle imaging in preclinical, clinical and homeland security settings. This paper presents an overview of the recent advances of iQID technology and its applications in preclinical and clinical scintigraphy, preclinical SPECT, particle imaging (alpha, neutron, beta, and fission fragment), and digital autoradiography.

  5. Image quality testing of assembled IR camera modules

    Science.gov (United States)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 μm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are becoming more and more of a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements on the imaging performance of objectives and on the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information, in contrast to more traditional test methods like the minimum resolvable temperature difference (MRTD), which gives only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF), for broadband or with various bandpass filters, on- and off-axis, and optical parameters like effective focal length (EFL) and distortion. If the camera module allows for refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle, etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during the mechanical assembly of optics to high-resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, the suitability for fully automated measurements in mass production.

  6. Photographic zoom fisheye lens design for DSLR cameras

    Science.gov (United States)

    Yan, Yufeng; Sasian, Jose

    2017-09-01

    Photographic fisheye lenses with fixed focal length for cameras with different sensor formats have been well developed for decades. However, photographic fisheye lenses with variable focal length are rare on the market due in part to the greater design difficulty. This paper presents a large aperture zoom fisheye lens for DSLR cameras that produces both circular and diagonal fisheye imaging for 35-mm sensors and diagonal fisheye imaging for APS-C sensors. The history and optical characteristics of fisheye lenses are briefly reviewed. Then, a 9.2- to 16.1-mm F/2.8 to F/3.5 zoom fisheye lens design is presented, including the design approach and aberration control. Image quality and tolerance performance analysis for this lens are also presented.

  7. Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors

    Science.gov (United States)

    Dutton, Neale A. W.; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K.

    2016-01-01

    SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed. PMID:27447643

  8. Characteristics of a single photon emission tomography system with a wide field gamma camera

    International Nuclear Information System (INIS)

    Mathonnat, F.; Soussaline, F.; Todd-Pokropek, A.E.; Kellershohn, C.

    1979-01-01

    This text summarizes a study describing the imaging possibilities of a single photon emission tomography system composed of a conventional wide field gamma camera connected to a computer. The encouraging results achieved on the various phantoms studied suggest a significant development of this technique in clinical work in Nuclear Medicine Departments [fr]

  9. Influence of Digital Camera Errors on the Photogrammetric Image Processing

    Science.gov (United States)

    Sužiedelytė-Visockienė, Jūratė; Bručas, Domantas

    2009-01-01

    The paper deals with the calibration of the digital camera Canon EOS 350D, often used for the photogrammetric 3D digitalisation and measurement of industrial and construction site objects. During the calibration, data on the optical and electronic parameters influencing the distortion of images, such as the correction of the principal point, the focal length of the objective, and the radial symmetric and non-symmetric distortions, were obtained. The calibration was performed by means of the Tcc software, which implements Chebyshev polynomials, using a special test field with marks whose coordinates are precisely known. The main task of the research is to determine how the camera calibration parameters influence the processing of images, i.e. the creation of the geometric model, the results of the triangulation calculations and the stereo-digitalisation. Two photogrammetric projects were created for this task: in the first project non-corrected images were used, and in the second the images corrected for the optical errors of the camera obtained during the calibration. The results of the image processing analysis are shown in the figures and tables. Conclusions are given.
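
    For context, the symmetric radial distortion mentioned above is conventionally modelled with a polynomial in the radial distance. The sketch below applies that model to a measured point; the coefficients are illustrative, not the values obtained for the Canon EOS 350D (and in practice the exact inverse mapping is solved iteratively):

    ```python
    # First-order correction of symmetric radial distortion about the
    # principal point (cx, cy); k1, k2 are illustrative coefficients.
    def undistort_point(x, y, cx, cy, k1, k2):
        xn, yn = x - cx, y - cy
        r2 = xn * xn + yn * yn
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        return cx + xn * factor, cy + yn * factor

    print(undistort_point(1800.0, 1200.0, cx=1728.0, cy=1152.0, k1=-2e-9, k2=0.0))
    ```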

  10. Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera

    Science.gov (United States)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2012-10-01

    In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on the simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as low detection sensitivities, which result in a very low probability of the coincident triple gamma-ray detection that is necessary for source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence the identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to that of images reconstructed from the same data using a standard iterative method based on single cones. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from a standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces (“thick” conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.

  11. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread, due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras, we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure; this is a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens, which is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
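
    A self-calibration of this kind can be sketched with standard OpenCV calls. This is a generic chessboard workflow under assumed file names and board size, not the authors' software; for a lens this wide, a fisheye model may in practice be preferable:

    ```python
    # Generic chessboard self-calibration with OpenCV (assumed file layout).
    import glob
    import cv2
    import numpy as np

    PATTERN = (9, 6)                                   # inner-corner grid (assumed)
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

    obj_points, img_points, size = [], [], None
    for fname in glob.glob("gopro_frames/*.jpg"):      # assumed input frames
        gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size,
                                             None, None)
    undistorted = cv2.undistort(cv2.imread(fname), K, dist)  # corrected scene
    ```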

  12. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetry purposes is not widespread, due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras, we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure; this is a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens, which is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.

  13. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    Directory of Open Access Journals (Sweden)

    Basam Musleh

    2016-09-01

    Full Text Available Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system, because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  14. Acquisition of stereo panoramas for display in VR environments

    KAUST Repository

    Ainsworth, Richard A.

    2011-01-23

    Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.

  15. Stereo perception of reconstructions of digital holograms of real-world objects

    Energy Technology Data Exchange (ETDEWEB)

    Lehtimaeki, Taina M; Saeaeskilahti, Kirsti; Naesaenen, Risto [University of Oulu, Oulu Southern Institute, Ylivieska (Finland); Naughton, Thomas J, E-mail: taina.lehtimaki@oulu.f [Department of Computer Science, National University of Ireland Maynooth (Ireland)

    2010-02-01

    In digital holography a 3D scene is captured optically and often the perspectives are reconstructed numerically. In this study we digitally process the holograms to allow them to be displayed on autostereoscopic displays. This study is conducted by subjective visual perception experiments comparing single reconstructed images from left and right perspective to the resulting stereo image.

  16. Stereo perception of reconstructions of digital holograms of real-world objects

    International Nuclear Information System (INIS)

    Lehtimaeki, Taina M; Saeaeskilahti, Kirsti; Naesaenen, Risto; Naughton, Thomas J

    2010-01-01

    In digital holography a 3D scene is captured optically and often the perspectives are reconstructed numerically. In this study we digitally process the holograms to allow them to be displayed on autostereoscopic displays. This study is conducted by subjective visual perception experiments comparing single reconstructed images from left and right perspective to the resulting stereo image.

  17. Development of a solid-state multi-sensor array camera for real time imaging of magnetic fields

    International Nuclear Information System (INIS)

    Benitez, D; Gaydecki, P; Quek, S; Torres, V

    2007-01-01

    The development of a real-time magnetic field imaging camera based on solid-state sensors is described. The final laboratory version comprises a 2D array of 33 x 33 solid-state, tri-axial magneto-inductive sensors and is located within a large current-carrying coil. This may be excited to produce either a steady or a time-varying magnetic field. Outputs from several rows of sensors are routed to a sub-master controller, and all sub-masters route to a master controller responsible for data coordination and signal pre-processing. The data are finally streamed to a host computer via a USB interface, and the image is generated and displayed at a rate of several frames per second. Accurate image generation is predicated on a knowledge of the sensor response, magnetic field perturbations and the nature of the target with respect to permeability and conductivity. To this end, the development of the instrumentation has been complemented by extensive numerical modelling of field distribution patterns using boundary element methods. Although it was originally intended for deployment in the nondestructive evaluation (NDE) of reinforced concrete, it was soon realised during the course of the work that the magnetic field imaging system had many potential applications, for example, in medicine, security screening, quality assurance (such as in the food industry), other areas of nondestructive evaluation (NDE), designs associated with magnetic fields, teaching and research

  18. Development of a solid-state multi-sensor array camera for real time imaging of magnetic fields

    Science.gov (United States)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2007-07-01

    The development of a real-time magnetic field imaging camera based on solid-state sensors is described. The final laboratory version comprises a 2D array of 33 x 33 solid-state, tri-axial magneto-inductive sensors and is located within a large current-carrying coil. This may be excited to produce either a steady or a time-varying magnetic field. Outputs from several rows of sensors are routed to a sub-master controller, and all sub-masters route to a master controller responsible for data coordination and signal pre-processing. The data are finally streamed to a host computer via a USB interface, and the image is generated and displayed at a rate of several frames per second. Accurate image generation is predicated on a knowledge of the sensor response, magnetic field perturbations and the nature of the target with respect to permeability and conductivity. To this end, the development of the instrumentation has been complemented by extensive numerical modelling of field distribution patterns using boundary element methods. Although it was originally intended for deployment in the nondestructive evaluation (NDE) of reinforced concrete, it was soon realised during the course of the work that the magnetic field imaging system had many potential applications, for example, in medicine, security screening, quality assurance (such as in the food industry), other areas of nondestructive evaluation (NDE), designs associated with magnetic fields, teaching and research.

  19. A model for measurement of noise in CCD digital-video cameras

    International Nuclear Information System (INIS)

    Irie, K; Woodhead, I M; McKinnon, A E; Unsworth, K

    2008-01-01

    This study presents a comprehensive measurement of CCD digital-video camera noise. Knowledge of noise detail within images or video streams allows for the development of more sophisticated algorithms for separating true image content from the noise generated in an image sensor. The robustness and performance of an image-processing algorithm is fundamentally limited by sensor noise. The individual noise sources present in CCD sensors are well understood, but there has been little literature on the development of a complete noise model for CCD digital-video cameras, incorporating the effects of quantization and demosaicing

  20. Cheetah: A high frame rate, high resolution SWIR image camera

    Science.gov (United States)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high-resolution, high-frame-rate InGaAs based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640 x 512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has a maximum of 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Cameralink™ interface to stream the data directly to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
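
    As a rough check of what the on-board memory buys, the sketch below computes the buffering time at full resolution and frame rate, assuming each pixel is stored in 16 bits (the stored bit depth is an assumption, not stated above):

    ```python
    # Buffering time of the 16 GB memory at full resolution and frame rate,
    # assuming 16-bit pixel storage.
    WIDTH, HEIGHT, FPS, BYTES_PER_PIXEL = 640, 512, 1700, 2
    rate = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS          # ~1.1 GB/s
    print(f"{16e9 / rate:.1f} s of full-frame recording")  # ~14 s
    ```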

  1. The development of advanced robotics technology in high radiation environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Cho, Jaiwan; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Lee, Jong Min; Park, Jin Suk; Kim, Seung Ho; Kim, Byung Soo; Moon, Byung Soo

    1997-07-01

    In the tele-operation technology using tele-presence in high radiation environments, stereo vision target tracking by the centroid method, vergence control of a stereo camera by the moving vector method, a stereo observing system by the correlation method, a horizontal moving axis stereo camera, and 3-dimensional information acquisition from stereo images were developed. Also developed were gesture image acquisition by computer vision and the construction of a virtual environment for remote work in a nuclear power plant. In the development of intelligent control and monitoring technology for tele-robots in hazardous environments, the characteristics and principles of robot operation were studied, and robot end-effector tracking algorithms by the centroid method and the neural network method were developed for observation and survey in hazardous environments. A 3-dimensional information acquisition algorithm by structured light was also developed. In the development of radiation-hardened sensor technology, a radiation-hardened camera module was designed and tested, the radiation characteristics of the electric components in the robot system were evaluated, and a 2-dimensional radiation monitoring system was developed. These advanced critical robot technologies and telepresence techniques developed in this project can be applied to a nozzle-dam installation/removal robot system and used to realize unmanned remote operation of the nozzle-dam installation/removal task in the steam generator of a nuclear power plant, which can help people working in extremely hazardous, high-radioactivity areas to eliminate their exposure to radiation, enhance their task safety, and raise their working efficiency. (author). 75 refs., 21 tabs., 15 figs.

  2. The development of advanced robotics technology in high radiation environment

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Cho, Jaiwan; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Lee, Jong Min; Park, Jin Suk; Kim, Seung Ho; Kim, Byung Soo; Moon, Byung Soo.

    1997-07-01

    In the tele-operation technology using tele-presence in high radiation environments, stereo vision target tracking by the centroid method, vergence control of a stereo camera by the moving vector method, a stereo observing system by the correlation method, a horizontal moving axis stereo camera, and 3-dimensional information acquisition from stereo images were developed. Also developed were gesture image acquisition by computer vision and the construction of a virtual environment for remote work in a nuclear power plant. In the development of intelligent control and monitoring technology for tele-robots in hazardous environments, the characteristics and principles of robot operation were studied, and robot end-effector tracking algorithms by the centroid method and the neural network method were developed for observation and survey in hazardous environments. A 3-dimensional information acquisition algorithm by structured light was also developed. In the development of radiation-hardened sensor technology, a radiation-hardened camera module was designed and tested, the radiation characteristics of the electric components in the robot system were evaluated, and a 2-dimensional radiation monitoring system was developed. These advanced critical robot technologies and telepresence techniques developed in this project can be applied to a nozzle-dam installation/removal robot system and used to realize unmanned remote operation of the nozzle-dam installation/removal task in the steam generator of a nuclear power plant, which can help people working in extremely hazardous, high-radioactivity areas to eliminate their exposure to radiation, enhance their task safety, and raise their working efficiency. (author). 75 refs., 21 tabs., 15 figs

  3. Stereo Vision and 3D Reconstruction on a Processor Network

    NARCIS (Netherlands)

    Paar, G.; Kuijpers, N.H.L.; Gasser, C.

    1996-01-01

    Surface measurements during outdoor construction processes are very costly whenever the measurement process interferes with the construction activities, since machine and manpower resources are idle during the data acquisition procedure. Using frame cameras as sensors to provide a measurement data

  4. Planialtimetric Evaluation of a CARTOSAT-1 Stereo Pair - Case Study: SÃO SEBASTIÃO, SP, Brazil

    Science.gov (United States)

    Barros, R. S.; Cruz, C. B. M.; Rabaco, L. M. L.

    2012-07-01

    A significant increase is noticed in the development of orbital and airborne sensors that enable the extraction of three-dimensional data. It is therefore important to expand the studies on the quality of altimetric values derived from these sensors, to verify whether the improvements implemented in data acquisition influence the results. In this context, as part of a larger project that aims to evaluate the accuracy of various sensors, this work analyses the planialtimetric accuracy of a DEM generated from a Cartosat-1 stereo pair. The project was developed for an area near the city of São Sebastião, located in the basin of the North Coast of São Paulo state, in Brazil. The relief in this area is very steep, with a predominance of dense forest vegetation, typical of the Atlantic Forest. All points in this assessment were established in the field with the use of single-frequency (L1) GNSS receivers, through static relative positioning. In this work the Brazilian standard specifications (PEC, in Portuguese) for the classification of cartographic bases were considered. The results may be considered very good and showed that the Cartosat-1 orthoimage presents accuracy equivalent to class B at the 1:10,000 scale, while the DEM presents altimetric accuracy compatible with class A at the 1:25,000 scale. The results obtained are true for this specific study area but may vary if different scenes or other study areas are considered.

  5. Miniature photometric stereo system for textile surface structure reconstruction

    Science.gov (United States)

    Gorpas, Dimitris; Kampouris, Christos; Malassiotis, Sotiris

    2013-04-01

    In this work a miniature photometric stereo system is presented, targeting the three-dimensional structural reconstruction of various fabric types. This is a supportive module for a robot system attempting to solve the well-known "laundry problem". The miniature device has been designed for mounting onto the robot gripper. It is composed of a low-cost off-the-shelf camera, operating in macro mode, and eight light-emitting diodes. The synchronization between image acquisition and lighting direction is controlled by an Arduino Nano board and software triggering. Ambient light is excluded by a cylindrical enclosure. The direction of illumination is recovered by locating the reflection or the brightest point on a mirror sphere, while a flat-fielding process compensates for the non-uniform illumination. For the evaluation of this prototype, the classical photometric stereo methodology has been used (a sketch is given below). The preliminary results on a large number of textiles are very promising for the successful integration of the miniature module into the robot system. The required interaction with the robot is implemented through the estimation of Brenner's focus measure. This metric successfully assesses the focus quality with reduced time requirements in comparison to other well-accepted focus metrics. Besides the targeted application, the small size of the developed system makes it a very promising candidate for applications with space restrictions, like quality control in industrial production lines or object recognition based on structural information, and for applications where ease of operation and light weight are required, like those in the biomedical field, and especially in dermatology.
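
    The classical photometric stereo evaluation mentioned above solves, per pixel, I = L(ρn) for the albedo-scaled normal from the known LED directions. A minimal least-squares sketch (array shapes assumed for illustration):

    ```python
    # Classical photometric stereo: I = L @ (albedo * n), solved per pixel.
    import numpy as np

    def photometric_stereo(intensities, light_dirs):
        """intensities: (k, h, w) stack, one image per LED; light_dirs: (k, 3)
        unit vectors. Returns unit normals (h, w, 3) and albedo (h, w)."""
        k, h, w = intensities.shape
        I = intensities.reshape(k, -1)                        # (k, h*w)
        g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)    # (3, h*w)
        albedo = np.linalg.norm(g, axis=0) + 1e-12
        normals = (g / albedo).T.reshape(h, w, 3)
        return normals, albedo.reshape(h, w)
    ```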

  6. Laser Doppler perfusion imaging with a complementary metal oxide semiconductor image sensor

    NARCIS (Netherlands)

    Serov, Alexander; Steenbergen, Wiendelt; de Mul, F.F.M.

    2002-01-01

    We utilized a complementary metal oxide semiconductor video camera for fast flow imaging with the laser Doppler technique. A single sensor is used for both observation of the area of interest and measurements of the interference signal caused by dynamic light scattering from moving particles inside

  7. Hybrid-Based Dense Stereo Matching

    Science.gov (United States)

    Chuang, T. Y.; Ting, H. W.; Jaw, J. J.

    2016-06-01

    Stereo matching that generates accurate and dense disparity maps is an indispensable technique for the 3D exploitation of imagery in the fields of computer vision and photogrammetry. Although numerous solutions and advances have been proposed in the literature, occlusions, disparity discontinuities, sparse texture, image distortion, and illumination changes still lead to problematic issues and await better treatment. In this paper, a hybrid method based on semi-global matching (SGM) is presented to tackle the challenges of dense stereo matching. To ease the sensitivity of SGM cost aggregation to its penalty parameters, a formal way to provide proper penalty estimates is proposed. To this end, the study uses shape-adaptive cross-based matching with an edge constraint to generate an initial disparity map for penalty estimation. Image edges, indicating the potential locations of occlusions as well as disparity discontinuities, are identified by the edge drawing algorithm to ensure that the local support regions do not cover significant disparity changes. In addition, an extra penalty parameter Pe is imposed on the energy function of the SGM cost aggregation to specifically handle edge pixels. Furthermore, the final disparities of edge pixels are found by weighting values derived from both the SGM cost aggregation and U-SURF matching, providing more reliable estimates in disparity-discontinuity areas. Evaluations on the Middlebury stereo benchmarks demonstrate satisfactory performance and reveal the potential of the hybrid dense stereo matching method.
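
    The SGM recursion that such a penalty plugs into is compact enough to sketch. Below is one left-to-right aggregation pass in Python, with the usual P1/P2 smoothness penalties and the extra edge penalty modeled, as an assumption, as a surcharge on large disparity jumps at edge pixels (the paper's exact formulation may differ):

        import numpy as np

        def aggregate_left_to_right(cost, p1, p2, pe, edge_mask):
            # cost: (H, W, D) matching-cost volume; edge_mask: (H, W) bool
            h, w, d = cost.shape
            agg = cost.copy()
            for x in range(1, w):
                prev = agg[:, x - 1, :]                      # (H, D)
                best_prev = prev.min(axis=1, keepdims=True)  # (H, 1)
                jump = p2 + pe * edge_mask[:, x, None]       # dearer jumps on edges
                plus = np.roll(prev, 1, axis=1) + p1         # from disparity d-1
                minus = np.roll(prev, -1, axis=1) + p1       # from disparity d+1
                plus[:, 0] = np.inf
                minus[:, -1] = np.inf
                best = np.minimum(np.minimum(prev, plus),
                                  np.minimum(minus, best_prev + jump))
                agg[:, x, :] = cost[:, x, :] + best - best_prev
            return agg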

  8. Crater Morphometry and Crater Degradation on Mercury: Mercury Laser Altimeter (MLA) Measurements and Comparison to Stereo-DTM Derived Results

    Science.gov (United States)

    Leight, C.; Fassett, C. I.; Crowley, M. C.; Dyar, M. D.

    2017-01-01

    Two types of measurements of Mercury's surface topography were obtained by the MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) spacecraft: laser ranging data from the Mercury Laser Altimeter (MLA) [1], and stereo imagery from the Mercury Dual Imaging System (MDIS) camera [e.g., 2, 3]. MLA data provide precise and accurate elevation measurements, but with sparse spatial sampling except at the highest northern latitudes. Digital terrain models (DTMs) from MDIS have superior resolution but less vertical accuracy, limited approximately to the pixel resolution of the original images (in the case of [3], 15-75 m). Last year [4], we reported topographic measurements of craters in the D = 2.5 to 5 km diameter range from stereo images and suggested that craters on Mercury degrade more quickly than on the Moon (by a factor of up to approximately 10×). However, we listed several alternative explanations for this finding, including the hypothesis that the lower depth/diameter ratios we observe might be a result of the resolution and accuracy of the stereo DTMs. Thus, additional measurements were undertaken using MLA data to examine the morphometry of craters in this diameter range and assess whether the faster crater degradation rates proposed to occur on Mercury are robust.

  9. Design of the first full size ATLAS ITk Strip sensor for the endcap region

    CERN Document Server

    Lacasta, Carlos; The ATLAS collaboration

    2017-01-01

    The ATLAS collaboration is designing the full silicon tracker (ITk) that will operate at the HL-LHC, replacing the current design. The silicon microstrip sensors for the barrel and the endcap regions in the ITk are fabricated on 6-inch, p-type, float-zone wafers, where large-area strip sensor designs are laid out together with a number of miniature sensors. The radiation tolerance and specific system issues, like the need for a slim edge of 450 µm, have been tested with square-shaped sensors intended for the barrel part of the tracker. This work presents the design of the first full-size silicon microstrip sensor for the endcap region with a slim edge of 450 µm. The strip endcaps will consist of several wheels with two layers of silicon strip sensors each. The strips have to lie along the azimuthal direction, apart from a small stereo-angle rotation (20 mrad on each side, giving 40 mrad total) for measuring the second coordinate of tracks. This stereo angle is built into the strip layout of the sensor and, in or...

  10. Design of the first full size ATLAS ITk Strip sensor for the endcap region

    CERN Document Server

    Lacasta, Carlos; The ATLAS collaboration

    2018-01-01

    The ATLAS collaboration is designing the full silicon tracker (ITk) that will operate at the HL-LHC, replacing the current design. The silicon microstrip sensors for the barrel and the endcap regions in the ITk are fabricated on 6-inch, p-type, float-zone wafers, where large-area strip sensor designs are laid out together with a number of miniature sensors. The radiation tolerance and specific system issues, like the need for a slim edge of 450 μm, have been tested with square-shaped sensors intended for the barrel part of the tracker. This work presents the design of the first full-size silicon microstrip sensor for the endcap region with a slim edge of 450 μm. The strip endcaps will consist of several wheels with two layers of silicon strip sensors each. The strips have to lie along the azimuthal direction, apart from a small stereo-angle rotation (20 mrad on each side, giving 40 mrad total) for measuring the second coordinate of tracks. This stereo angle is built into the strip layout of the sensor and, in or...

  11. Refractive index sensor based on an abrupt taper Michelson interferometer in a single-mode fiber.

    Science.gov (United States)

    Tian, Zhaobing; Yam, Scott S-H; Loock, Hans-Peter

    2008-05-15

    A simple refractive index sensor based on a Michelson interferometer in a single-mode fiber is constructed and demonstrated. The sensor consists of a single symmetric abrupt-taper region in a short piece of single-mode fiber that is terminated by an approximately 500-nm-thick gold coating. The sensitivity of the new sensor is similar to that of a long-period-grating-type sensor, and its ease of fabrication offers a low-cost alternative for current sensing applications.

  12. Relative and Absolute Calibration of a Multihead Camera System with Oblique and Nadir Looking Cameras for a Uas

    Science.gov (United States)

    Niemeyer, F.; Schima, R.; Grenzdörffer, G.

    2013-08-01

    Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable and the payload capacities are sufficient for many imaging sensors. Currently a camera system with four oblique and one nadir-looking camera is under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as the carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences from the test flights.

  13. Realistic camera noise modeling with application to improved HDR synthesis

    Science.gov (United States)

    Goossens, Bart; Luong, Hiêp; Aelterman, Jan; Pižurica, Aleksandra; Philips, Wilfried

    2012-12-01

    Due to the ongoing miniaturization of digital camera sensors and the steady increase of the "number of megapixels", individual sensor elements of the camera become more sensitive to noise, deteriorating the final image quality. To work around this problem, sophisticated processing algorithms in the devices can help to maximally exploit the knowledge on the sensor characteristics (e.g., in terms of noise) and offer a better image reconstruction. Although a lot of research focuses on rather simplistic noise models, such as stationary additive white Gaussian noise, only limited attention has gone to more realistic digital camera noise models. In this article, we first present a digital camera noise model that takes several processing steps in the camera into account, such as sensor signal amplification, clipping, and post-processing. We then apply this noise model to the reconstruction problem of high dynamic range (HDR) images from a small set of low dynamic range (LDR) exposures of a static scene. In the literature, HDR reconstruction is mostly performed by computing a weighted average, in which the weights are directly related to the observed pixel intensities of the LDR image. In this work, we derive a Bayesian probabilistic formulation of a weighting function that is near-optimal in the MSE sense (or SNR sense) of the reconstructed HDR image, by assuming exponentially distributed irradiance values. We define the weighting function as the probability that the observed pixel intensity is approximately unbiased. The weighting function can be directly computed based on the noise model parameters, which gives rise to different symmetric and asymmetric shapes when electronic noise or photon noise is dominant. We also explain how to deal with the case that some of the noise model parameters are unknown and explain how the camera response function can be estimated using the presented noise model. Finally, experimental results are provided to support our findings.
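
    The weighting idea generalizes the familiar inverse-variance average. A simplified Python sketch (not the paper's exact Bayesian weights) that models per-pixel variance as read noise plus photon noise and discards clipped samples:

        import numpy as np

        def merge_hdr(ldr, exposures, sigma_read=2.0, gain=1.0, sat=255.0):
            # ldr: (n, H, W) linear-response exposures; exposures: (n,) times
            z = ldr.astype(np.float64)
            t = exposures[:, None, None]
            var = sigma_read ** 2 + gain * z      # read + shot noise model
            w = t ** 2 / var                      # inverse variance of z / t
            w[(z <= 0) | (z >= sat)] = 0.0        # drop clipped pixels
            return (w * z / t).sum(0) / np.maximum(w.sum(0), 1e-12)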

  14. INNOVATIVE AIRBORNE SENSORS FOR DISASTER MANAGEMENT

    Directory of Open Access Journals (Sweden)

    M. O. Altan

    2016-06-01

    Full Text Available Modern disaster management systems are based on three pillars: crisis preparedness, early warning, and the final crisis management. In all parts, special data are needed in order to analyze existing structures, to assist the early-warning system, and, in the updating after a disaster happens, to assist the crisis-management organizations. How can new and innovative sensors assist in these tasks? Aerial images have frequently been used in the past for generating spatial data; however, in urban structures not all information can be extracted easily. Modern oblique camera systems already assist in the evaluation of building structures to define rescue paths, analyze building structures, and also give information on the stability of the urban fabric. For this application there is no need for a highly geometrically accurate sensor; SLC-camera-based oblique camera systems such as the OI X5, which uses Nikon cameras, also do a proper job. Such a camera also delivers valuable information after a disaster happens, to validate the degree of deformation in order to estimate stability and usability for the population. Thermal data in combination with RGB give further information on the building structure, damage, and potential water intrusion. Under development is an oblique thermal sensor with 9 heads which enables nadir and oblique thermal data acquisition. Besides the application of searching for people, thermal anomalies can arise from humidity in constructions (transpiration effects), damaged power lines, burning gas tubes, and many other dangerous situations. A big task is the data analysis, which should be performed automatically and fast. This requires a good initial orientation and a proper relative adjustment of the single sensors. In this way, many modern software tools enable rapid data extraction. Automated analysis of the data before and after a disaster can highlight areas of significant change. Detecting anomalies is the way to focus on the priority area. Also

  15. Innovative Airborne Sensors for Disaster Management

    Science.gov (United States)

    Altan, M. O.; Kemper, G.

    2016-06-01

    Modern disaster management systems are based on three pillars: crisis preparedness, early warning, and the final crisis management. In all parts, special data are needed in order to analyze existing structures, to assist the early-warning system, and, in the updating after a disaster happens, to assist the crisis-management organizations. How can new and innovative sensors assist in these tasks? Aerial images have frequently been used in the past for generating spatial data; however, in urban structures not all information can be extracted easily. Modern oblique camera systems already assist in the evaluation of building structures to define rescue paths, analyze building structures, and also give information on the stability of the urban fabric. For this application there is no need for a highly geometrically accurate sensor; SLC-camera-based oblique camera systems such as the OI X5, which uses Nikon cameras, also do a proper job. Such a camera also delivers valuable information after a disaster happens, to validate the degree of deformation in order to estimate stability and usability for the population. Thermal data in combination with RGB give further information on the building structure, damage, and potential water intrusion. Under development is an oblique thermal sensor with 9 heads which enables nadir and oblique thermal data acquisition. Besides the application of searching for people, thermal anomalies can arise from humidity in constructions (transpiration effects), damaged power lines, burning gas tubes, and many other dangerous situations. A big task is the data analysis, which should be performed automatically and fast. This requires a good initial orientation and a proper relative adjustment of the single sensors. In this way, many modern software tools enable rapid data extraction. Automated analysis of the data before and after a disaster can highlight areas of significant change. Detecting anomalies is the way to focus on the priority area. Also Lidar supports

  16. Dynamic Shape Capture of Free-Swimming Aquatic Life using Multi-view Stereo

    Science.gov (United States)

    Daily, David

    2017-11-01

    The reconstruction and tracking of swimming fish has in the past been restricted to flumes, small volumes, or sparse point tracking in large tanks. The purpose of this research is to use an array of cameras to automatically track 50-100 points on the surface of a fish using the multi-view stereo computer vision technique. The method is non-invasive, allowing the fish to swim freely in a large volume and to perform more advanced maneuvers such as rolling, darting, stopping, and reversing, which have not been studied. The techniques for obtaining and processing the 3D kinematics and maneuvers of tuna, sharks, stingrays, and other species at the National Aquarium and the Naval Undersea Warfare Center will be presented and compared.

  17. Autonomous pedestrian localization technique using CMOS camera sensors

    Science.gov (United States)

    Chun, Chanwoo

    2014-09-01

    We present a pedestrian localization technique that does not need infrastructure. The proposed angle-only measurement method requires specially manufactured shoes. Each shoe has two CMOS cameras and two markers, such as LEDs, attached on the inward side. The line-of-sight (LOS) angles towards the two markers on the forward shoe are measured using the two cameras on the rear shoe. Our simulation results show that a pedestrian walking in a shopping mall wearing this device can be accurately guided to the front of a destination store located 100 m away, if the floor plan of the mall is available.
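
    Per marker, the angle-only scheme reduces to intersecting two bearing rays from the known camera baseline on the rear shoe. A 2-D Python sketch (geometry and names are illustrative, and the rays are assumed non-parallel):

        import numpy as np

        def intersect_bearings(p1, theta1, p2, theta2):
            # p1, p2: (2,) camera positions; theta1, theta2: LOS angles (rad)
            d1 = np.array([np.cos(theta1), np.sin(theta1)])
            d2 = np.array([np.cos(theta2), np.sin(theta2)])
            # Solve p1 + s*d1 = p2 + t*d2 for the scalar ray parameters
            s, _ = np.linalg.solve(np.column_stack([d1, -d2]),
                                   np.asarray(p2) - np.asarray(p1))
            return np.asarray(p1) + s * d1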

  18. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-07-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
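
    A Python sketch of the kinematic core: express the target in the camera-mount frame through the 4 x 4 transform chain, then read PAN and TILT off the resulting direction (the frame conventions here are assumptions, not the paper's):

        import numpy as np

        def pan_tilt_to_target(T_world_mount, target_world):
            # T_world_mount: 4x4 pose of the camera mount from position sensors
            p = np.linalg.inv(T_world_mount) @ np.append(target_world, 1.0)
            x, y, z = p[:3]
            pan = np.degrees(np.arctan2(y, x))                # about mount z-axis
            tilt = np.degrees(np.arctan2(z, np.hypot(x, y)))  # elevation angle
            return pan, tilt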

  19. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables

  20. A fully packaged micromachined single crystalline resonant force sensor

    Energy Technology Data Exchange (ETDEWEB)

    Cavalloni, C.; Gnielka, M.; Berg, J. von [Kistler Instrumente AG, Winterthur (Switzerland); Haueis, M.; Dual, J. [ETH Zuerich, Inst. of Mechanical Systems, Zuerich (Switzerland); Buser, R. [Interstate Univ. of Applied Science Buchs, Buchs (Switzerland)

    2001-07-01

    In this work a fully packaged resonant force sensor for static load measurements is presented. The working principle is based on the shift of the resonance frequency in response to the applied load. The heart of the sensor, the resonant structure, is fabricated by micromachining using single-crystalline silicon. To avoid creep and hysteresis and to minimize temperature-induced stress, the resonant structure is encapsulated using an all-in-silicon solution. This means that the load coupling, the excitation of the microresonator, and the detection of the oscillation signal are integrated in only one single-crystalline silicon chip. The chip is packaged into a specially designed steel housing intended for application in harsh environments. The unloaded sensor has an initial frequency of about 22.5 kHz. The sensitivity amounts to 26 Hz/N with a linearity error significantly less than 0.5% FSO. (orig.)

  1. Developing stereo image based robot control system

    Energy Technology Data Exchange (ETDEWEB)

    Suprijadi; Pambudi, I. R.; Woran, M.; Naa, C. F.; Srigutomo, W. [Department of Physics, FMIPA, Institut Teknologi Bandung, Jl. Ganesha No. 10, Bandung 40132, Indonesia, supri@fi.itb.ac.id (Indonesia)

    2015-04-16

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have advanced rapidly with the increasing performance of hardware and microprocessors. Many fields of science and technology have used these methods, especially medicine and instrumentation. New stereovision techniques that produce three-dimensional images or movies are very interesting, but there are not many applications in control systems. A stereo image contains pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The results show that the robot moves automatically based on stereovision captures.
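
    As an illustration of the disparity step such a system relies on (the article does not specify its implementation), OpenCV's block matcher produces a dense disparity map that a controller can steer from; the file names and parameters below are placeholders:

        import cv2

        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = stereo.compute(left, right)  # int16, scaled by 16

        # Nearer obstacles give larger disparity; a simple rule is to steer
        # toward the image half whose near-field disparity is smaller.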

  2. Dsm Based Orientation of Large Stereo Satellite Image Blocks

    Science.gov (United States)

    d'Angelo, P.; Reinartz, P.

    2012-07-01

    High-resolution stereo satellite imagery is well suited to the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on CARTOSAT-1 imagery is presented, with emphasis on fully automated georeferencing. The proposed system processes level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The RPC are derived from orbit and attitude information and have a much lower accuracy than the ground resolution of approximately 2.5 m. In order to use the images for orthorectification or DSM generation, an affine RPC correction is required. In this paper, GCP are automatically derived from lower-resolution reference datasets (Landsat ETM+ Geocover and the SRTM DSM). The traditional method of collecting the lateral position from a reference image and interpolating the corresponding height from the DEM ignores the higher lateral accuracy of the SRTM dataset. Our method avoids this drawback by using an RPC correction based on DSM alignment, resulting in improved geolocation of both DSM and ortho images. A scene-based method and a bundle-block-adjustment-based correction are developed and evaluated for a test site covering the northern part of Italy, for which 405 CARTOSAT-1 stereo pairs are available. Both methods are tested against independent ground truth; checks against this ground truth indicate a lateral error of 10 meters.

  3. Phase camera experiment for Advanced Virgo

    International Nuclear Information System (INIS)

    Agatsuma, Kazuhiro; Beuzekom, Martin van; Schaaf, Laura van der; Brand, Jo van den

    2016-01-01

    We report on a study of the phase camera, which is a frequency-selective wavefront sensor for a laser beam. This sensor is used for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. In the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and for position controls. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which is a great benefit for the manipulation of these delicate controls. Also, overcoming mirror aberrations will be an essential part of Advanced Virgo (AdV), a GW detector close to Pisa. Low-frequency sidebands in particular can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking of such changes, because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost complete and the installation of the optics at the AdV site has started. After installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • The scanner performance limits the operation speed and layout design of the system. • An operating range was found by measuring the frequency response of the scanner.

  4. Phase camera experiment for Advanced Virgo

    Energy Technology Data Exchange (ETDEWEB)

    Agatsuma, Kazuhiro, E-mail: agatsuma@nikhef.nl [National Institute for Subatomic Physics, Amsterdam (Netherlands); Beuzekom, Martin van; Schaaf, Laura van der [National Institute for Subatomic Physics, Amsterdam (Netherlands); Brand, Jo van den [National Institute for Subatomic Physics, Amsterdam (Netherlands); VU University, Amsterdam (Netherlands)

    2016-07-11

    We report on a study of the phase camera, which is a frequency-selective wavefront sensor for a laser beam. This sensor is used for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. In the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and for position controls. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which is a great benefit for the manipulation of these delicate controls. Also, overcoming mirror aberrations will be an essential part of Advanced Virgo (AdV), a GW detector close to Pisa. Low-frequency sidebands in particular can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking of such changes, because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost complete and the installation of the optics at the AdV site has started. After installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • The scanner performance limits the operation speed and layout design of the system. • An operating range was found by measuring the frequency response of the scanner.

  5. STEREO PHOTO HYDROGEL, A PROCESS OF MAKING SAID STEREO PHOTO HYDROGEL, POLYMERS FOR USE IN MAKING SUCH HYDROGEL AND A PHARMACEUTICAL COMPRISING SAID POLYMERS

    NARCIS (Netherlands)

    Hiemstra, C.; Zhong, Zhiyuan; Feijen, Jan

    2008-01-01

    The invention relates to a stereo photo hydrogel formed by stereo-complexed and photo-cross-linked polymers, which polymers comprise at least two types of polymers having at least one hydrophilic component, at least one hydrophobic mutually stereo-complexing component, and at least one of the types

  6. A survey of camera error sources in machine vision systems

    Science.gov (United States)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  7. Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors.

    Science.gov (United States)

    Lee, Kwan Woo; Yoon, Hyo Sik; Song, Jong Min; Park, Kang Ryoung

    2018-03-23

    Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.

  8. Bring your own camera to the trap: An inexpensive, versatile, and portable triggering system tested on wild hummingbirds.

    Science.gov (United States)

    Rico-Guevara, Alejandro; Mickley, James

    2017-07-01

    The study of animals in the wild offers opportunities to collect relevant information on their natural behavior and abilities to perform ecologically relevant tasks. However, it also poses challenges such as accounting for observer effects, human sensory limitations, and the time intensiveness of this type of research. To meet these challenges, field biologists have deployed camera traps to remotely record animal behavior in the wild. Despite their ubiquity in research, many commercial camera traps have limitations, and the species and behavior of interest may present unique challenges. For example, no camera traps support high-speed video recording. We present a new and inexpensive camera trap system that increases versatility by separating the camera from the triggering mechanism. Our system design can pair with virtually any camera and allows for independent positioning of a variety of sensors, all while being low-cost, lightweight, weatherproof, and energy efficient. By using our specialized trigger and customized sensor configurations, many limitations of commercial camera traps can be overcome. We use this system to study hummingbird feeding behavior using high-speed video cameras to capture fast movements and multiple sensors placed away from the camera to detect small body sizes. While designed for hummingbirds, our application can be extended to any system where specialized camera or sensor features are required, or commercial camera traps are cost-prohibitive, allowing camera trap use in more research avenues and by more researchers.

  9. Winter precipitation particle size distribution measurement by Multi-Angle Snowflake Camera

    Science.gov (United States)

    Huang, Gwo-Jong; Kleinkort, Cameron; Bringi, V. N.; Notaroš, Branislav M.

    2017-12-01

    From the radar meteorology viewpoint, the most important properties for quantitative precipitation estimation of winter events are the 3D shape, size, and mass of precipitation particles, as well as the particle size distribution (PSD). In order to measure these properties precisely, optical instruments may be the best choice. The Multi-Angle Snowflake Camera (MASC) is a relatively new instrument equipped with three high-resolution cameras that capture winter precipitation particle images from three non-parallel angles, in addition to measuring the particle fall speed using two pairs of infrared motion sensors. However, the results from the MASC so far are usually presented as monthly or seasonal statistics, and particle sizes are given only as histograms; no previous studies have used the MASC for a single-storm study, and none have used the MASC to measure the PSD. We propose a methodology for obtaining the winter precipitation PSD measured by the MASC, and present and discuss the development, implementation, and application of the new technique for PSD computation based on MASC images. Overall, this is the first study of the MASC-based PSD. We present PSD MASC experiments and results for segments of two snow events to demonstrate the performance of our PSD algorithm. The results show that the self-consistency of the MASC-measured single-camera PSDs is good. To cross-validate the PSD measurements, we compare the MASC mean PSD (averaged over the three cameras) with a collocated 2D Video Disdrometer and observe good agreement between the two sets of results.
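
    Once per-particle maximum dimensions and an effective sampling volume are in hand, the PSD itself is a normalized histogram. A Python sketch (the sampling-volume estimate is assumed given; deriving it from the MASC geometry and fall speeds is the harder part):

        import numpy as np

        def particle_size_distribution(diam_mm, bin_edges_mm, volume_m3):
            # diam_mm: particle sizes in one time interval
            # volume_m3: effective sampled volume for that interval
            counts, edges = np.histogram(diam_mm, bins=bin_edges_mm)
            widths = np.diff(edges)
            return counts / (volume_m3 * widths)   # N(D) in m^-3 mm^-1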

  10. Experimental demonstration of a simple displacement sensor based on a bent single-mode–multimode–single-mode fiber structure

    International Nuclear Information System (INIS)

    Wu, Qiang; Semenova, Yuliya; Wang, Pengfei; Hatta, Agus Muhamad; Farrell, Gerald

    2011-01-01

    A simple displacement sensor based on a bent single-mode–multimode–single-mode (SMS) fiber structure is proposed and experimentally investigated. The sensor offers a wider displacement range, not limited by the risk of fiber breakage, as well as a three-fold increase in displacement sensitivity by comparison with a straight SMS structure sensor. This sensor can be interrogated by either an optical spectrum analyzer (OSA) or a ratiometric interrogation system: (1) if interrogated by an OSA, assuming a resolution of 1 pm, it has a sensitivity of 28.2 nm for a displacement measurement range from 0 to 280 µm; (2) if interrogated by a ratiometric interrogation system, it has worst- and best-case resolutions of 556 and 38 nm, respectively, for a displacement measurement range from 0 to 520 µm.

  11. The research of autonomous obstacle avoidance of mobile robot based on multi-sensor integration

    Science.gov (United States)

    Zhao, Ming; Han, Baoling

    2016-11-01

    The object of this study is a bionic quadruped mobile robot. The study proposes a system design for mobile-robot obstacle avoidance that integrates a binocular stereo vision sensor and a self-controlled 3D lidar with modified ant colony optimization path planning to realize reconstruction of the environment map. Because the working conditions of a mobile robot are complex, 3D reconstruction with a single binocular sensor is unsatisfactory when feature points are few and lighting is poor. Therefore, this system integrates the Bumblebee2 stereo vision sensor and the lidar sensor to detect the 3D point clouds of environmental obstacles. This paper proposes sensor information fusion technology to rebuild the environment map: first, obstacles are detected separately from the lidar data and from the visual data; then the two obstacle distributions are fused to obtain a more complete and more accurate distribution of obstacles in the scene. The thesis then introduces the ant colony algorithm, analyzes in depth the advantages and disadvantages of ant colony optimization and their causes, and improves the algorithm to increase its convergence rate and precision in robot path planning. These improvements and integrations overcome shortcomings of ant colony optimization such as easily falling into local optima, slow search speed, and poor search results. The experiment processes images and programs the motor drive under the compilation environments of Matlab and Visual Studio and establishes a visual 2.5D grid map. Finally, a global path is planned for the mobile robot according to the ant colony algorithm. The feasibility and effectiveness of the system are confirmed with ROS and a Linux simulation platform.
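
    To fix ideas, here is a toy version of an ant colony planner on an occupancy grid in Python (generic ACO, not the paper's modified variant):

        import numpy as np

        rng = np.random.default_rng(0)

        def aco_plan(costs, start, goal, n_ants=30, n_iter=50,
                     alpha=1.0, beta=2.0, rho=0.1):
            # costs: (H, W) cell costs, np.inf marks obstacles
            h, w = costs.shape
            tau = np.ones((h, w))                      # pheromone field
            best_path, best_len = None, np.inf
            for _ in range(n_iter):
                for _ant in range(n_ants):
                    pos, path, seen = start, [start], {start}
                    while pos != goal and len(path) < h * w:
                        y, x = pos
                        nbrs = [(y + dy, x + dx)
                                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                                if 0 <= y + dy < h and 0 <= x + dx < w
                                and (y + dy, x + dx) not in seen
                                and np.isfinite(costs[y + dy, x + dx])]
                        if not nbrs:
                            break
                        # transition rule: pheromone^alpha * heuristic^beta
                        heur = [1.0 / (1 + abs(n[0] - goal[0]) + abs(n[1] - goal[1]))
                                for n in nbrs]
                        p = np.array([tau[n] ** alpha * hh ** beta
                                      for n, hh in zip(nbrs, heur)])
                        pos = nbrs[rng.choice(len(nbrs), p=p / p.sum())]
                        path.append(pos)
                        seen.add(pos)
                    if pos == goal and len(path) < best_len:
                        best_path, best_len = path, len(path)
                tau *= 1.0 - rho                       # evaporation
                if best_path is not None:
                    for cell in best_path:             # reinforce best path
                        tau[cell] += 1.0 / best_len
            return best_path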

  12. USING COMBINATION OF PLANAR AND HEIGHT FEATURES FOR DETECTING BUILT-UP AREAS FROM HIGH-RESOLUTION STEREO IMAGERY

    Directory of Open Access Journals (Sweden)

    F. Peng

    2017-09-01

    Full Text Available Within-class spectral variation and between-class spectral confusion in remotely sensed imagery degrade the performance of built-up area detection when using planar texture, shape, and spectral features. Terrain slope and building height are often used to optimize the results, but are extracted from auxiliary data (e.g. LIDAR data, DSM). Moreover, the auxiliary data must be acquired around the same time as image acquisition. Otherwise, built-up area detection accuracy is affected. Stereo imagery incorporates both planar and height information unlike single remotely sensed images. Stereo imagery acquired by many satellites (e.g. Worldview-4, Pleiades-HR, ALOS-PRISM, and ZY-3) can be used as data source of identifying built-up areas. A new method of identifying high-accuracy built-up areas from stereo imagery is achieved by using a combination of planar and height features. The digital surface model (DSM) and digital orthophoto map (DOM) are first generated from stereo images. Then, height values of above-ground objects (e.g. buildings) are calculated from the DSM, and used to obtain raw built-up areas. Other raw built-up areas are obtained from the DOM using Pantex and Gabor, respectively. Final high-accuracy built-up area results are achieved from these raw built-up areas using the decision level fusion. Experimental results show that accurate built-up areas can be achieved from stereo imagery. The height information used in the proposed method is derived from stereo imagery itself, with no need to require auxiliary height data (e.g. LIDAR data). The proposed method is suitable for spaceborne and airborne stereo pairs and triplets.

  13. Using Combination of Planar and Height Features for Detecting Built-Up Areas from High-Resolution Stereo Imagery

    Science.gov (United States)

    Peng, F.; Cai, X.; Tan, W.

    2017-09-01

    Within-class spectral variation and between-class spectral confusion in remotely sensed imagery degrades the performance of built-up area detection when using planar texture, shape, and spectral features. Terrain slope and building height are often used to optimize the results, but extracted from auxiliary data (e.g. LIDAR data, DSM). Moreover, the auxiliary data must be acquired around the same time as image acquisition. Otherwise, built-up area detection accuracy is affected. Stereo imagery incorporates both planar and height information unlike single remotely sensed images. Stereo imagery acquired by many satellites (e.g. Worldview-4, Pleiades-HR, ALOS-PRISM, and ZY-3) can be used as data source of identifying built-up areas. A new method of identifying high-accuracy built-up areas from stereo imagery is achieved by using a combination of planar and height features. The digital surface model (DSM) and digital orthophoto map (DOM) are first generated from stereo images. Then, height values of above-ground objects (e.g. buildings) are calculated from the DSM, and used to obtain raw built-up areas. Other raw built-up areas are obtained from the DOM using Pantex and Gabor, respectively. Final high-accuracy built-up area results are achieved from these raw built-up areas using the decision level fusion. Experimental results show that accurate built-up areas can be achieved from stereo imagery. The height information used in the proposed method is derived from stereo imagery itself, with no need to require auxiliary height data (e.g. LIDAR data). The proposed method is suitable for spaceborne and airborne stereo pairs and triplets.

  14. Quantum sensors based on single diamond defects

    International Nuclear Information System (INIS)

    Jelezko Fedor

    2014-01-01

    NV centers in diamond are promising sensors able to detect electric and magnetic fields at the nanoscale. Here we report on the detection of biomolecules using the magnetic noise induced by their electron and nuclear spins. The presented results show first steps towards establishing a novel sensing technology for visualizing single proteins and studying their dynamics. (author)

  15. RELATIVE AND ABSOLUTE CALIBRATION OF A MULTIHEAD CAMERA SYSTEM WITH OBLIQUE AND NADIR LOOKING CAMERAS FOR A UAS

    Directory of Open Access Journals (Sweden)

    F. Niemeyer

    2013-08-01

    Full Text Available Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable and the payload capacities are sufficient for many imaging sensors. Currently a camera system with four oblique and one nadir-looking camera is under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as the carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences from the test flights.

  16. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor.

    Science.gov (United States)

    Hoang, Toan Minh; Baek, Na Rae; Cho, Se Woon; Kim, Ki Wan; Park, Kang Ryoung

    2017-10-28

    Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods.
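
    A far simpler baseline than the paper's fuzzy-system plus line-segment-detector pipeline, useful for comparison, is Canny edges plus a probabilistic Hough transform inside a road-shaped region of interest. A Python/OpenCV sketch (file name, ROI shape, and thresholds are placeholders):

        import cv2
        import numpy as np

        frame = cv2.imread("road.png")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)

        # Keep only a trapezoid in front of the vehicle
        h, w = edges.shape
        roi = np.zeros_like(edges)
        trapezoid = np.array([[(0, h), (w, h),
                               (w // 2 + 60, h // 2), (w // 2 - 60, h // 2)]],
                             dtype=np.int32)
        cv2.fillPoly(roi, trapezoid, 255)
        edges = cv2.bitwise_and(edges, roi)

        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                                minLineLength=30, maxLineGap=80)
        if lines is not None:
            for x1, y1, x2, y2 in lines.reshape(-1, 4):
                cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)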

  17. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?

    Science.gov (United States)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  18. Two-Phase Algorithm for Optimal Camera Placement

    Directory of Open Access Journals (Sweden)

    Jun-Woo Ahn

    2016-01-01

    Full Text Available As markets for visual sensor networks have become larger, interest in the optimal camera placement problem has continued to increase. The most featured solution for the optimal camera placement problem is based on binary integer programming (BIP). Due to the NP-hard character of the optimal camera placement problem, however, it is difficult to find a solution for a complex, real-world problem using BIP. Many approximation algorithms have been developed to solve this problem. In this paper, a two-phase algorithm is proposed as an approximation algorithm based on BIP that can solve the optimal camera placement problem for a placement space larger than in current studies. This study solves the problem in three-dimensional space for a real-world structure.
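
    The standard cheap alternative to solving the BIP exactly is a greedy set-cover heuristic, which approximation schemes like the two-phase idea refine. A Python sketch with a toy coverage model (poses and targets are illustrative):

        def greedy_camera_placement(coverage, targets):
            # coverage: {candidate pose: set of target points it sees}
            uncovered, chosen = set(targets), []
            while uncovered:
                pose, seen = max(coverage.items(),
                                 key=lambda kv: len(kv[1] & uncovered))
                if not seen & uncovered:
                    raise ValueError("remaining targets cannot be covered")
                chosen.append(pose)
                uncovered -= seen
            return chosen

        cams = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
        print(greedy_camera_placement(cams, targets={1, 2, 3, 4, 5, 6}))
        # -> ['A', 'C']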

  19. A TV camera system for digitizing single shot oscillograms at sweep rate of 0.1 ns/cm

    International Nuclear Information System (INIS)

    Kienlen, M.; Knispel, G.; Miehe, J.A.; Sipp, B.

    1976-01-01

    A TV camera digitizing system associated with a 5 GHz photocell-oscilloscope apparatus allows the digitizing of single-shot oscillograms; with an oscilloscope sweep rate of 0.1 ns/cm, an accuracy of 4 ps on time measurements is obtained.

  20. Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors.

    Science.gov (United States)

    Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin

    2018-04-03

    Disparity calculation is crucial for binocular sensor ranging. Edge-based disparity estimation is an important branch in the research of sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which improves the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, several dense stereo matching methods, and an advanced edge-based method. Experiments show that our method provides superior performance in these comparisons.
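
    The left-right consistency test behind this kind of disparity validation is easy to state in code. A Python sketch (dense maps are assumed for brevity, though the paper works with sparse edge pixels):

        import numpy as np

        def lr_consistency_mask(disp_left, disp_right, tol=1.0):
            # A left pixel (y, x) with disparity d matches right pixel (y, x - d);
            # keep it only if the right-image disparity roughly agrees there.
            h, w = disp_left.shape
            xs = np.arange(w)[None, :].repeat(h, axis=0)
            x_match = np.clip((xs - disp_left).astype(int), 0, w - 1)
            d_right = np.take_along_axis(disp_right, x_match, axis=1)
            return np.abs(disp_left - d_right) <= tol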

  1. A direct-view customer-oriented digital holographic camera

    Science.gov (United States)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units, such as a camera sensor and objective, and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As proof of system operability, we present reconstructed amplitude and phase information of a test sample.

  2. Convolutional Neural Network-Based Classification of Driver’s Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors

    Directory of Open Access Journals (Sweden)

    Kwan Woo Lee

    2018-03-01

    Full Text Available Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver’s body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver’s emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver’s face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.

  3. Multi-sensors multi-baseline mapping system for mobile robot using stereovision camera and laser-range device

    Directory of Open Access Journals (Sweden)

    Mohammed Faisal

    2016-06-01

    Full Text Available Countless applications today use mobile robots, including autonomous navigation, security patrolling, housework, search-and-rescue operations, material handling, manufacturing, and automated transportation systems. Regardless of the application, a mobile robot must use a robust autonomous navigation system. Autonomous navigation remains one of the primary challenges in the mobile-robot industry; many control algorithms and techniques have recently been developed that aim to overcome this challenge. Among autonomous navigation methods, vision-based systems have been growing in recent years due to rapid gains in computational power and the reliability of visual sensors. The primary focus of research into vision-based navigation is to allow a mobile robot to navigate in an unstructured environment without collision. In recent years, several researchers have looked at methods for setting up autonomous mobile robots for navigational tasks. Among these methods, stereovision-based navigation is a promising approach for reliable and efficient navigation. In this article, we create and develop a novel mapping system for a robust autonomous navigation system. The main contribution of this article is the fusion of multi-baseline stereovision (narrow and wide baselines) and laser-range readings to enhance the accuracy of the point cloud, to reduce the ambiguity of correspondence matching, and to extend the field of view of the proposed mapping system to 180°. Another contribution is pruning the region of interest of the three-dimensional point clouds to reduce the computational burden of the stereo process. We therefore call the proposed system a multi-sensor multi-baseline mapping system. The experimental results illustrate the robustness and accuracy of the proposed system.

  4. An Omnidirectional Stereo Vision-Based Smart Wheelchair

    Directory of Open Access Journals (Sweden)

    Yutaka Satoh

    2007-06-01

    Full Text Available To support the safe self-movement of the disabled and the aged, we developed an electric wheelchair that realizes the functions of detecting both the potential hazards in a moving environment and the postures and gestures of a user, by equipping the electric wheelchair with the stereo omnidirectional system (SOS), which is capable of acquiring omnidirectional color image sequences and range data simultaneously in real time. The first half of this paper introduces the SOS and the basic technology behind it. To use the multicamera SOS on an electric wheelchair, we developed a high-speed, high-quality image-synthesizing method; a method of recovering SOS attitude changes by using attitude sensors is also introduced. This method allows the SOS to be used without being affected by its mounting attitude. The second half of this paper introduces the prototype electric wheelchair that was actually manufactured and experiments conducted using the prototype. The usability of the electric wheelchair is also discussed.

  5. Airborne multispectral identification of individual cotton plants using consumer-grade cameras

    Science.gov (United States)

    Although multispectral remote sensing using consumer-grade cameras has successfully identified fields of small cotton plants, improvements to detection sensitivity are needed to identify individual or small clusters of plants. The imaging sensor of consumer-grade cameras are based on a Bayer patter...

  6. Multi-Functional Measurement Using a Single FBG Sensor

    NARCIS (Netherlands)

    Mizutani, Y.; Groves, R.M.

    2011-01-01

    This paper describes the measurement of average strain, strain distribution and vibration of a cantilever beam made of Carbon Fiber Reinforced Plastics (CFRP), using a single Fibre Bragg Grating (FBG) sensor mounted on the beam surface. Average strain is determined from the displacement of the peak

  7. Target-Tracking Camera for a Metrology System

    Science.gov (United States)

    Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David

    2009-01-01

    An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrological system is used to determine the varying relative positions of radiating elements of an airborne synthetic aperture-radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: Because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances. PSDs are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
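
    The PSD readout itself is a one-line centroid: the photocurrents at the two end contacts split linearly with spot position, so for a strip of length L the position follows directly. A Python sketch (real devices add calibration and bias corrections):

        def psd_position(i1, i2, length):
            # i1, i2: end-contact photocurrents; returns offset from the center
            return 0.5 * length * (i2 - i1) / (i1 + i2)

        print(psd_position(1.0, 1.0, 10.0))  # 0.0 -> spot at the strip center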

  8. An optical, electrical and ultrasonic layered single sensor for ingredient measurement in liquid

    International Nuclear Information System (INIS)

    Kimoto, A; Kitajima, T

    2010-01-01

    In this paper, an optical, electrical and ultrasonic layered single sensor is proposed as a new, non-invasive sensing method for measuring ingredients in liquid, particularly in the food industry. In the proposed sensor, photo sensors and PVDF films with transparent conductive electrodes are layered: the optical properties of the liquid are measured by a light-emitting diode (LED) and a phototransistor (PT); the electrical properties are measured by indium tin oxide (ITO) film electrodes, which serve as the transparent conductive electrodes of the PVDF films arranged on the surfaces of the LED and PT; and the ultrasonic properties are measured by the PVDF films themselves. Thus the optical, electrical and ultrasonic properties of the same region of the liquid can be measured simultaneously by a single sensor. To test the sensor experimentally, three parameters of the liquid (the concentrations of yellow color, sodium chloride (NaCl) and ethanol in distilled water) were estimated from the optical, electrical and ultrasonic measurements obtained with the proposed sensor. The results suggest that it is possible to estimate the three ingredient concentrations in the same region of the liquid from these properties, although some problems, such as measurement accuracy, remain to be solved.
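
    The record does not state the estimation model, but if each of the three measured properties responds approximately linearly to the three ingredient concentrations, the estimate reduces to a least-squares inversion of a calibration matrix. A hedged sketch with purely illustrative numbers:

        import numpy as np

        # Hypothetical linear calibration: column j holds the sensitivity of the
        # (optical, electrical, ultrasonic) readings to ingredient j
        # (yellow color, NaCl, ethanol). Values are illustrative only.
        A = np.array([[0.90, 0.05, 0.02],
                      [0.03, 0.80, 0.10],
                      [0.01, 0.15, 0.70]])

        measured = np.array([0.45, 0.30, 0.25])   # one optical/electrical/ultrasonic reading

        concentrations, *_ = np.linalg.lstsq(A, measured, rcond=None)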

  9. An electrically tunable plenoptic camera using a liquid crystal microlens array

    International Nuclear Information System (INIS)

    Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng

    2015-01-01

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional radiation of a target in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited, which restricts their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and its focusing performance is presented experimentally. The fabricated LCMLA is directly integrated with an image sensor to construct a prototype LCMLA-based plenoptic camera for acquiring the raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently by electrically tuning the LCMLA, which is equivalent to an extension of the DOF.
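
    For context, plenoptic raw data is commonly rendered by shift-and-add refocusing over sub-aperture views; the LCMLA's electrically tuned focal length changes which depths are captured sharply, while this digital step selects the rendered focal plane. A minimal sketch with an assumed data layout:

        import numpy as np

        def refocus(subviews, alpha):
            # subviews: dict mapping angular indices (u, v) to H x W images.
            # alpha sets the synthetic refocus depth.
            acc, n = None, 0
            for (u, v), img in subviews.items():
                shifted = np.roll(np.roll(img, int(round(alpha * u)), axis=0),
                                  int(round(alpha * v)), axis=1)
                acc = shifted if acc is None else acc + shifted
                n += 1
            return acc / n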

  10. Video Surveillance using a Multi-Camera Tracking and Fusion System

    OpenAIRE

    Zhang , Zhong; Scanlon , Andrew; Yin , Weihong; Yu , Li; Venetianer , Péter L.

    2008-01-01

    Usage of intelligent video surveillance (IVS) systems is spreading rapidly. These systems are being utilized in a wide range of applications. In most cases, even in multi-camera installations, the video is processed independently in each feed. This paper describes a system that fuses tracking information from multiple cameras, thus vastly expanding its capabilities. The fusion relies on all cameras being calibrated to a site map, while the individual sensors remain lar...
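
    A minimal sketch of map-based fusion under the calibration assumption stated above: each camera's detections are projected onto the site map with a per-camera homography, and detections that coincide on the map are merged. The gating rule and all names are illustrative, not the paper's algorithm:

        import numpy as np

        def to_map(H, p):
            # Project an image point onto the site map with homography H.
            q = H @ np.array([p[0], p[1], 1.0])
            return q[:2] / q[2]

        def fuse(tracks_by_cam, homographies, gate=1.5):
            # Greedily merge detections that land within `gate` map units
            # of an existing fused target's running mean.
            pts = [to_map(homographies[c], p)
                   for c, track in tracks_by_cam.items() for p in track]
            fused = []                      # entries: [coordinate sum, count]
            for p in pts:
                for f in fused:
                    if np.linalg.norm(p - f[0] / f[1]) < gate:
                        f[0] += p
                        f[1] += 1
                        break
                else:
                    fused.append([p.copy(), 1])
            return [s / n for s, n in fused]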

  11. A Single Camera Motion Capture System for Human-Computer Interaction

    Science.gov (United States)

    Okada, Ryuzo; Stenger, Björn

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when a body part reappears. We present two applications of our method that work in real time on a Cell Broadband Engine™: a computer game and a virtual clothing application.
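
    A generic sketch of the coarse-to-fine idea behind such tree-based filtering: the likelihood is evaluated only at cluster representatives, and only the most promising branches are expanded, so most pose hypotheses are never touched. This illustrates the principle, not the paper's exact algorithm:

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Node:
            pose: object                                   # cluster representative
            children: List["Node"] = field(default_factory=list)

        def tree_filter(root, likelihood, beam=5):
            # Descend the pose tree level by level, keeping the `beam`
            # most likely clusters; leaves are carried along unexpanded.
            frontier = [root]
            while any(n.children for n in frontier):
                candidates = [c for n in frontier for c in (n.children or [n])]
                candidates.sort(key=lambda n: likelihood(n.pose), reverse=True)
                frontier = candidates[:beam]
            return max(frontier, key=lambda n: likelihood(n.pose)).pose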

  12. COMPARISON OF DIGITAL SURFACE MODELS FOR SNOW DEPTH MAPPING WITH UAV AND AERIAL CAMERAS

    Directory of Open Access Journals (Sweden)

    R. Boesch

    2016-06-01

    Full Text Available Photogrammetric workflows for aerial images have improved over the last years in a typically black-box fashion. Most parameters for building dense point clouds are either excessive in number or not explained, and the progress between software releases is often poorly documented. On the other hand, the improvement of camera sensors and of the positional accuracy of image acquisition is evident from product specifications. This study shows that hardware evolution over the last years has had a much stronger impact on height measurements than photogrammetric software releases. Snow height measurements with airborne sensors such as the ADS100 and UAV-based DSLR cameras can achieve accuracies close to GSD * 2 in comparison with ground-based GNSS reference measurements. Using a custom notch filter on the UAV camera sensor during image acquisition does not yield better height accuracies. UAV-based digital surface models are very robust: different workflow parameter variations for the ADS100 and UAV camera workflows seem to have only random effects.

  13. Patient positioning in radiotherapy based on surface imaging using time of flight cameras

    Energy Technology Data Exchange (ETDEWEB)

    Gilles, M., E-mail: marlene.gilles@univ-brest.fr; Fayad, H.; Clement, J. F.; Bert, J.; Visvikis, D. [INSERM, UMR 1101, LaTIM, Brest 29609 (France); Miglierini, P. [Academic Radiotherapy Department, CHRU Morvan, Brest 29200 (France); Scheib, S. [Varian Medical Systems Imaging Laboratory GmbH, Baden-Daettwil 5405 (Switzerland); Cozzi, L. [Radiotherapy and Radiosurgery Department, Instituto Clinico Humanitas, Rozzano 20089 (Italy); Boussion, N.; Schick, U.; Pradier, O. [INSERM, UMR 1101, LaTIM, Brest 29609, France and Academic Radiotherapy Department, CHRU Morvan, Brest 29200 (France)

    2016-08-15

    Purpose: To evaluate patient positioning accuracy in radiotherapy using a stereo time-of-flight (ToF) camera system. Methods: A system of two ToF cameras was used to scan the surface of patients in order to position them daily on the treatment couch. The acquired point clouds were registered to (a) detect translations applied to the table (intrafraction motion) and (b) predict the displacement to be applied in order to return the patient to the reference position (interfraction motion). The measurements provided by this system were compared to the actually applied translations. The authors analyzed 150 fractions, including lung, pelvis/prostate, and head and neck cancer patients. Results: The authors obtained small absolute errors for displacement detection: 0.8 ± 0.7, 0.8 ± 0.7, and 0.7 ± 0.6 mm along the vertical, longitudinal, and lateral axes, respectively, and 0.8 ± 0.7 mm for the norm of the total displacement. Lung cancer patients presented the largest errors, with respective means of 1.1 ± 0.9, 0.9 ± 0.9, and 0.8 ± 0.7 mm. Conclusions: The proposed stereo-ToF system allows sufficient accuracy and faster patient repositioning in radiotherapy. Its capability to track the complete patient surface in real time could in the future allow not only accurate positioning but also real-time tracking of intrafraction patient motion (translational, involuntary, and respiratory).
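
    Since the registration reported here targets couch translations, the displacement between two surface scans can be illustrated by a simple centroid difference; rotations would call for a full rigid registration such as ICP. A minimal sketch, not the authors' implementation:

        import numpy as np

        def table_translation(reference_cloud, current_cloud):
            # Pure-translation estimate between two N x 3 point clouds.
            return current_cloud.mean(axis=0) - reference_cloud.mean(axis=0)

        # Displacement to apply to return the patient to the reference position:
        # shift = -table_translation(reference_cloud, current_cloud)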

  14. Ultra-fast Sensor for Single-photon Detection in a Wide Range of the Electromagnetic Spectrum

    Directory of Open Access Journals (Sweden)

    Astghik KUZANYAN

    2016-12-01

    Full Text Available The results of computer simulation of the heat distribution processes taking place after absorption of single photons of 1 eV to 1 keV energy in the three-layer sensor of a thermoelectric detector are analyzed. Different geometries of the sensor, with a tungsten absorber, a thermoelectric layer of cerium hexaboride and a tungsten heat sink, are considered. It is shown that by changing the sizes of the sensor layers it is possible to obtain transducers for the registration of photons within a given spectral range with the required energy resolution and count rate. It is concluded that, compared to a single-layer sensor, the three-layer sensor has a number of advantages and demonstrates characteristics that make it possible to consider the thermoelectric detector a real alternative to superconducting single-photon detectors.
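
    For intuition, heat propagation through such a layer stack can be sketched with an explicit one-dimensional finite-difference scheme; the cited work simulates the full three-dimensional problem, so this is illustrative only:

        import numpy as np

        def simulate(T0, alpha, dx, dt, steps):
            # Explicit 1-D heat diffusion across the absorber / CeB6 /
            # heat-sink stack. T0: initial temperature profile with the
            # photon energy deposited in the absorber cells; alpha: per-cell
            # thermal diffusivity. Stability requires dt <= dx**2 / (2 * alpha.max()).
            T = T0.copy()
            for _ in range(steps):
                lap = np.zeros_like(T)
                lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
                T += dt * alpha * lap
            return T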

  15. Unscented Kalman filtering for articulated human tracking

    DEFF Research Database (Denmark)

    Boesen Lindbo Larsen, Anders; Hauberg, Søren; Pedersen, Kim Steenstrup

    2011-01-01

    We present an articulated tracking system working with data from a single narrow-baseline stereo camera. The use of stereo data allows for some depth disambiguation, a common issue in articulated tracking, which in turn yields likelihoods that are practically unimodal. While current state... with superior results. Tracking quality is measured by comparison with ground-truth data from a marker-based motion capture system...
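
    The unscented Kalman filter named in the title propagates a fixed set of sigma points through the nonlinear motion and observation models instead of linearizing them. A minimal sketch of sigma-point generation per the basic unscented transform (kappa = 3 - n is a common heuristic):

        import numpy as np

        def sigma_points(mu, P, kappa=1.0):
            # 2n+1 sigma points and weights for state mean mu (n,) and
            # covariance P (n, n).
            n = mu.size
            S = np.linalg.cholesky((n + kappa) * P)      # matrix square root
            pts = np.vstack([mu, mu + S.T, mu - S.T])    # rows are sigma points
            weights = np.r_[kappa / (n + kappa),
                            np.full(2 * n, 0.5 / (n + kappa))]
            return pts, weights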

  16. A mobile device-based imaging spectrometer for environmental monitoring by attaching a lightweight small module to a commercial digital camera.

    Science.gov (United States)

    Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing

    2017-11-15

    Spatially-explicit data are essential for remote sensing of ecological phenomena, and recent innovations in mobile device platforms have led to an upsurge in rapid on-site detection. For instance, the CMOS chips in smartphones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Using this lightweight module together with commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. From these experiments we obtain 3-D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.
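
    Once a spectral image cube is in hand, simple band arithmetic supports the plant-reflectance use case; for example, an NDVI-style index (band positions are illustrative, not taken from the paper):

        import numpy as np

        def ndvi(cube, wavelengths_nm):
            # cube: (H, W, bands) spectral image cube; pick the bands
            # nearest the conventional red (~660 nm) and NIR (~850 nm).
            wl = np.asarray(wavelengths_nm)
            red = cube[..., np.argmin(np.abs(wl - 660))]
            nir = cube[..., np.argmin(np.abs(wl - 850))]
            return (nir - red) / (nir + red + 1e-9)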

  17. Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

    Science.gov (United States)

    Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta

    2013-09-01

    A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high-resolution images and dense laser point clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common prerequisite on today's mobile platforms. The calibration methods, nevertheless, are often poorly documented, almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is explored to provide a high-quality external calibration of a pinhole camera to a laser scanner that is automatic, easy to perform, robust and foolproof. The method presented here uses a portable, standard ranging pole that needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are each calibrated in relation to the INS, so the transformation from camera to laser contains the accumulated error of each sensor with respect to the INS. Here, the camera is calibrated directly in relation to the laser frame, using the time synchronization between the sensors for data association. In this study, the inertial relative movement is also exploited to collect more useful calibration data. This results in a better intersensor calibration, allowing better colorization of the point clouds and a more accurate depth mask for images, especially on the edges of objects in the scene.
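
    The absolute orientation problem mentioned above has a closed-form least-squares solution (Horn's method, or equivalently the Kabsch algorithm): the rigid transform aligning two sets of matched 3-D points is recovered via SVD. A minimal sketch:

        import numpy as np

        def absolute_orientation(P, Q):
            # Least-squares rigid transform (R, t) mapping point set P onto Q
            # (both n x 3 with row-wise correspondences).
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cp).T @ (Q - cq)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            t = cq - R @ cp
            return R, t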

  18. Single camera analyses in studying pattern forming dynamics of player interactions in team sports.

    OpenAIRE

    Duarte, Ricardo; Fernandes, Orlando; Folgado, Hugo; Araújo, Duarte

    2013-01-01

    A network of patterned interactions between players characterises team ball sports, making interpersonal coordination patterns an important topic in the study of performance in such sports. A very useful method has been the study of inter-individual interactions captured by a single camera filming an extended performance area. The appropriate collection of positional data allows investigation of the pattern-forming dynamics emerging in different performance sub-phases of team ball sports. Thi...

  19. 'McMurdo' Panorama from Spirit's 'Winter Haven' (Color Stereo)

    Science.gov (United States)

    2006-01-01

    [Figures removed for brevity, see original site: left-eye and right-eye views of the stereo pair for PIA01905.] This 360-degree view, called the 'McMurdo' panorama, comes from the panoramic camera (Pancam) on NASA's Mars Exploration Rover Spirit. From April through October 2006, Spirit has stayed on a small hill known as 'Low Ridge.' There, the rover's solar panels are tilted toward the sun to maintain enough solar power for Spirit to keep making scientific observations throughout the winter on southern Mars. This view of the surroundings from Spirit's 'Winter Haven' is presented as a stereo anaglyph to show the scene three-dimensionally when viewed through red-blue glasses (with the red lens on the left). Oct. 26, 2006, marks Spirit's 1,000th sol of what was planned as a 90-sol mission. (A sol is a Martian day, which lasts 24 hours, 39 minutes, 35 seconds). The rover has lived through the most challenging part of its second Martian winter. Its solar power levels are rising again. Spring in the southern hemisphere of Mars will begin in early 2007. Before that, the rover team hopes to start driving Spirit again toward scientifically interesting places in the 'Inner Basin' and 'Columbia Hills' inside Gusev crater. The McMurdo panorama is providing team members with key pieces of scientific and topographic information for choosing where to continue Spirit's exploration adventure. The Pancam began shooting component images of this panorama during Spirit's sol 814 (April 18, 2006) and completed the part shown here on sol 932 (Aug. 17, 2006). The panorama was acquired using all 13 of the Pancam's color filters, using lossless compression for the red and blue stereo filters, and only modest levels of compression on the remaining filters. The overall panorama consists of 1,449 Pancam images and represents a raw data volume of nearly 500 megabytes. It is thus the largest, highest-fidelity view of Mars

  20. An Evaluation of the Effectiveness of Stereo Slides in Teaching Geomorphology.

    Science.gov (United States)

    Giardino, John R.; Thornhill, Ashton G.

    1984-01-01

    Provides information about producing stereo slides and their use in the classroom. Describes an evaluation of the teaching effectiveness of stereo slides using two groups of 30 randomly selected students from an introductory geomorphology course. Results from a pretest/posttest measure show that stereo slides significantly improved understanding. (JM)