WorldWideScience

Sample records for single-sensor stereo camera

  1. Stereo Pinhole Camera: Assembly and experimental activities

    OpenAIRE

    Santos, Gilmário Barbosa; Departamento de Ciência da Computação, Universidade do Estado de Santa Catarina, Joinville; Cunha, Sidney Pinto; Centro de Tecnologia da Informação Renato Archer, Campinas

    2015-01-01

This work describes the assembly of a stereo pinhole camera for capturing stereo pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as desired, or simple enough to be handcrafted from recyclable materials. This paper describes the practical use of the pinhole camera throughout history and in the present day. Aspects of the optics and geometry involved in building the stereo pinhole camera are presented with illustrations. Furthermore, experiments are proposed using the images obtained by the camera for 3D visualization through a pair of anaglyph glasses, and the estimation of relative depth by triangulation is discussed.

  2. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

Romps, David [Univ. of California, Berkeley, CA (United States)]; Oktem, Rusen [Univ. of California, Berkeley, CA (United States)]

    2017-10-31

The three pairs of stereo camera setups aim to provide synchronized, stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned approximately 120 degrees from the other pairs, with a 17°–19° pitch angle from the ground, at a distance of 5–6 km from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory, covering the region from northeastern, northwestern, and southern viewpoints. Images from the two cameras of the same stereo setup can be paired to obtain a 3D reconstruction by triangulation, and the 3D reconstructions from the ring of three stereo pairs can be combined to generate a 3D mask from the surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
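The triangulation step the handbook refers to can be sketched in a few lines. This is a generic two-ray midpoint triangulation, not the handbook's actual calibration model: each camera contributes a ray from its center through the observed cloud feature, and the 3D point is taken as the midpoint of the shortest segment joining the two rays. All geometry below is illustrative.

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation of two rays p = c + t*d (3-tuples).

    c1, c2: camera centers; d1, d2: ray directions (need not be unit).
    """
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))

    r = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b              # approaches 0 for parallel rays
    s = (b * e - c * d) / denom        # parameter along ray 1
    t = (a * e - b * d) / denom        # parameter along ray 2
    p1 = tuple(ci + s * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + t * di for ci, di in zip(c2, d2))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))
```

With calibrated camera poses, each matched cloud pixel pair yields one such 3D point; the cloud mask is the aggregate of these points.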

  3. Stereo Pinhole Camera: Assembly and experimental activities

    Directory of Open Access Journals (Sweden)

    Gilmário Barbosa Santos

    2015-05-01

Full Text Available This work describes the assembly of a stereo pinhole camera for capturing stereo pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as desired, or simple enough to be handcrafted from recyclable materials. This paper describes the practical use of the pinhole camera throughout history and in the present day. Aspects of the optics and geometry involved in building the stereo pinhole camera are presented with illustrations. Furthermore, experiments are proposed using the images obtained by the camera for 3D visualization through a pair of anaglyph glasses, and the estimation of relative depth by triangulation is discussed.
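The red/cyan anaglyph viewing the abstract mentions amounts to channel mixing: take the red channel from the left image and the green and blue channels from the right. A minimal sketch, assuming images are stored as row lists of (r, g, b) tuples rather than any particular image library format:

```python
def make_anaglyph(left, right):
    """Red/cyan anaglyph: red from the left view, green+blue from the right.

    left, right: same-sized 2D lists of (r, g, b) tuples.
    """
    return [[(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]
```

Viewed through red/cyan glasses, each eye then sees only its own camera's image, producing the depth impression.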

  4. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Gagliano, L.; Bryan, T.; MacLeod, T.

On-orbit small debris tracking and characterization is a technical gap in current national Space Situational Awareness that must be closed to safeguard orbital assets and crew, since small debris poses a major risk of MOD damage to the ISS and to exploration vehicles. In 2015 this technology was added to the NASA Office of the Chief Technologist's roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be quantified in order to design the proper level of MOD impact shielding and to set appropriate mission design restrictions; the debris flux and size population also need to be verified against ground radar tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations imposed on a secondary payload aboard a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly, and the Small Orbital Stereo Tracking Camera is one such emerging technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity to the International Space Station, demonstrating on-orbit optical (in-situ) tracking of variously sized objects for comparison against ground radar tracking and small-OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and on military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars, which move upward through the background.

  5. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

Full Text Available This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision community. However, little or no stereo calibration has been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, a general stereo calibration algorithm cannot be used directly. In this paper, we develop a hybrid-type stereo system equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured by the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed through distance measurements to both visible-light and gamma sources. The experimental results show that the measurement error is about 3%.
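The planar-homography machinery behind such a calibration can be sketched as follows. This is the textbook four-point DLT estimate with h33 fixed to 1, not the paper's actual vision-to-radiation mapping; the point values are illustrative only.

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def homography_from_points(src, dst):
    """Estimate a 3x3 homography from four (x, y) -> (u, v) pairs (DLT)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = gauss_solve(A, b) + [1.0]   # h33 normalized to 1
    return [h[0:3], h[3:6], h[6:9]]

def apply_homography(H, pt):
    """Map a 2D point through H in homogeneous coordinates."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Once H is known for each camera pair, any pattern point seen by a vision camera can be transferred into the radiation camera's image plane, which is what makes the indirect calibration possible.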

  6. Adaptive control of camera position for stereo vision

    Science.gov (United States)

    Crisman, Jill D.; Cleary, Michael E.

    1994-03-01

    A major problem in using two-camera stereo machine vision to perform real-world tasks, such as visual object tracking, is deciding where to position the cameras. Humans accomplish the analogous task by positioning their heads and eyes for optimal stereo effects. This paper describes recent work toward developing automated control strategies for camera motion in stereo machine vision systems for mobile robot navigation. Our goal is to achieve fast, reliable pursuit of a target while avoiding obstacles. Our strategy results in smooth, stable camera motion despite robot and target motion. Our algorithm has been shown to be successful at navigating a mobile robot, mediating visual target tracking and ultrasonic obstacle detection. The architecture, hardware, and simulation results are discussed.

  7. Stereo Calibration and Rectification for Omnidirectional Multi-Camera Systems

    Directory of Open Access Journals (Sweden)

    Yanchang Wang

    2012-10-01

Full Text Available Stereo vision has been studied for decades as a fundamental problem in the field of computer vision. In recent years, computer vision and image processing with a large field of view, especially using omnidirectional vision and panoramic images, has been receiving increasing attention. An important problem in stereo vision is calibration. Although various calibration methods for omnidirectional cameras have been proposed, most of them are limited to calibrating catadioptric or fish-eye cameras and cannot be applied directly to multi-camera systems. In this work, we propose an easy calibration method with closed-form initialization and iterative optimization for omnidirectional multi-camera systems. The method only requires image pairs of a 2D target plane in a few different views. A method based on the spherical camera model is also proposed for rectifying omnidirectional stereo pairs. Using real data captured by a Ladybug3, we carry out experiments including stereo calibration, rectification, and 3D reconstruction. Statistical analyses and comparisons of the experimental results are also presented. As the experimental results show, the calibration results are precise and the effect of rectification is promising.

  8. Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras

    Directory of Open Access Journals (Sweden)

    Hector Santoyo-Garcia

    2017-01-01

Full Text Available In this paper we propose a visible watermarking algorithm in which a visible watermark is embedded into the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw image representation for images captured by the single-sensor digital cameras equipped in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored in the storage system. This method then enforces the rightful ownership of the watermarked image, since no version of the image exists other than the watermarked one. We also take the Human Visual System (HVS) into consideration, so that the proposed technique provides the desired characteristics of a visible watermarking scheme: the embedded watermark is sufficiently perceptible yet not obtrusive in colour and grey-scale images. Unlike other Bayer CFA domain visible watermarking algorithms, in which only binary watermark patterns are supported, the proposed watermarking algorithm allows grey-scale and colour images as watermark patterns, making it suitable for advertisement purposes, such as digital libraries and e-commerce, besides copyright protection.
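The core embedding operation can be illustrated as alpha blending applied directly to raw CFA samples. This is a minimal sketch: the fixed `alpha` below stands in for the paper's HVS-derived, locally adaptive embedding strength, which is not reproduced here.

```python
def embed_visible(cfa, wm, alpha=0.25):
    """Blend watermark samples wm into raw CFA samples cfa.

    cfa, wm: same-sized 2D lists of sample values (already laid out in
    the same Bayer mosaic); alpha: visible-watermark strength (assumed
    constant here, unlike the HVS-adaptive strength in the paper).
    """
    return [[round((1 - alpha) * c + alpha * w) for c, w in zip(crow, wrow)]
            for crow, wrow in zip(cfa, wm)]
```

Because the blend happens before demosaicking and compression, every stored version of the image already carries the mark.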

  9. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    Science.gov (United States)

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising process. This strategy generates many noise-caused color artifacts in the demosaicking step, which are hard to remove in the subsequent denoising step. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green, and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can offer advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data, using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments on both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
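The principle behind PCA denoising can be shown on a toy 2D example rather than the paper's patch-based CFA formulation: signal energy concentrates along the leading principal axis, so projecting samples onto that axis and discarding the minor-axis component removes noise. Everything here is a hypothetical illustration of the idea, not the published algorithm.

```python
def dominant_direction(data):
    """Mean and leading eigenvector of the covariance of 2D points
    (power iteration on the 2x2 covariance matrix)."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    cxx = sum((p[0] - mx) ** 2 for p in data) / n
    cyy = sum((p[1] - my) ** 2 for p in data) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in data) / n
    v = (1.0, 0.0)
    for _ in range(50):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return (mx, my), v

def pca_denoise(data):
    """Project each point onto the leading principal axis, dropping the
    minor-axis component (where, by assumption, most of the noise lives)."""
    (mx, my), (vx, vy) = dominant_direction(data)
    out = []
    for x, y in data:
        t = (x - mx) * vx + (y - my) * vy
        out.append((mx + t * vx, my + t * vy))
    return out
```

The paper applies the same idea to vectors of CFA samples gathered in a supporting window, keeping enough components to preserve edges while suppressing the noise subspace.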

  10. People counting with stereo cameras : two template-based solutions

    NARCIS (Netherlands)

    Englebienne, Gwenn; van Oosterhout, Tim; Kröse, B.J.A.

    2012-01-01

    People counting is a challenging task with many applications. We propose a method with a fixed stereo camera that is based on projecting a template onto the depth image. The method was tested on a challenging outdoor dataset with good results and runs in real time.

  11. Multispectral imaging using a stereo camera: concept, design and assessment

    Directory of Open Access Journals (Sweden)

    Mansouri Alamin

    2011-01-01

Full Text Available This paper proposes a one-shot six-channel multispectral color image acquisition system using a stereo camera and a pair of optical filters. The best pair of filters, selected from among readily available filters such that they modify the sensitivities of the two cameras to produce optimal estimation of spectral reflectance and/or color, is placed in front of the two lenses of the stereo camera. The two images acquired from the stereo camera are then registered for pixel-to-pixel correspondence. The spectral reflectance and/or color at each pixel of the scene is estimated from the corresponding camera outputs in the two images. Both simulations and experiments have shown that the proposed system performs well both spectrally and colorimetrically. Since it acquires the multispectral images in one shot, the proposed system overcomes the slow and complex acquisition processes and the high cost of state-of-the-art multispectral imaging systems, opening the way to widespread applications.
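Reflectance estimation from the six filtered channels is, at its core, a linear inverse problem: the camera responses are (approximately) a known sensitivity matrix times the unknown reflectance. A least-squares sketch under that assumption, with a toy 6x3 sensitivity matrix rather than the paper's measured sensitivities:

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def estimate_reflectance(S, c):
    """Least-squares reflectance r from responses c = S r.

    S: one row per channel (6 here), each row holding the channel's
    weights over the spectral samples; solves the normal equations.
    """
    m, n = len(S), len(S[0])
    AtA = [[sum(S[k][i] * S[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(S[k][i] * c[k] for k in range(m)) for i in range(n)]
    return gauss_solve(AtA, Atb)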

  12. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2017-01-01

Any exploration vehicle assembled in, or spacecraft placed in, LEO or GTO must pass through this debris cloud and survive; large-cross-section, low-thrust vehicles will spend more time spiraling out through the cloud and will suffer more impacts, so better knowledge of small debris will improve survival odds. Current estimates of debris density at various orbital altitudes show spikes resulting from recent collisions. Orbital Debris Tracking and Characterization has now been added to the NASA Office of the Chief Technologist's Technology Development Roadmap in Technology Area 5 (TA5.7, Orbital Debris Tracking and Characterization); it is a technical gap in current national Space Situational Awareness that must be closed to safeguard orbital assets and crews from the risk of orbital debris damage to the ISS and exploration vehicles. The problem: traditional orbital trackers looking for small, dim orbital derelicts and debris typically stare at the stars and let any light reflected off the debris integrate in the imager for seconds, creating a streak across the image. The solution: the Small Tracker sees stars and other celestial objects rise through its field of view (FOV) at the rotational rate of its orbit, but the glint off orbital objects moves through the FOV at different rates and directions. Debris on a head-on (or nearly head-on) collision course stays in the FOV while closing at 14 km/s. The Small Tracker can track at 60 frames per second, allowing up to 30 fixes before a near-miss pass, and a stereo pair of Small Trackers can provide range data within 5-7 km for better orbit measurements.
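The "up to 30 fixes" figure follows directly from the numbers in the abstract and can be checked with a one-line calculation: detection range divided by closing speed gives the time in view, times the frame rate.

```python
def fixes_before_pass(range_km, closing_km_s, fps):
    """Frames captured while debris closes from first detection to pass.

    range_km: detection range; closing_km_s: closing speed; fps: frame rate.
    """
    return int(range_km / closing_km_s * fps)
```

With the abstract's values (7 km range, 14 km/s head-on closing speed, 60 frames per second) this gives 30 fixes.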

  13. Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras

    Directory of Open Access Journals (Sweden)

    Miklas S. Kristoffersen

    2016-01-01

    Full Text Available The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm for pedestrian counting based on clustering and tracking of the 3D point clouds. The method is tested on two five-minute video sequences captured at a public event with a moderate density of pedestrians and heavy occlusions. The counting performance is compared to the manually annotated ground truth and shows success rates of 95.4% and 99.1% for the two sequences.
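The counting-by-clustering step can be sketched with a simple density-style grouping of the reconstructed 3D points: points whose neighbor chains stay within a distance threshold form one pedestrian candidate. This is a generic connected-components sketch, not the paper's clustering-and-tracking algorithm, and `eps` is an assumed parameter.

```python
def count_clusters(points, eps=0.5):
    """Count groups of 3D points connected by neighbor chains within eps.

    points: list of (x, y, z) tuples; eps: linking distance in meters.
    """
    n = len(points)
    seen = [False] * n

    def near(i, j):
        return sum((a - b) ** 2 for a, b in zip(points[i], points[j])) <= eps ** 2

    clusters = 0
    for i in range(n):
        if seen[i]:
            continue
        clusters += 1                      # start a new cluster at point i
        stack, seen[i] = [i], True
        while stack:                       # flood-fill its neighborhood
            cur = stack.pop()
            for j in range(n):
                if not seen[j] and near(cur, j):
                    seen[j] = True
                    stack.append(j)
    return clusters
```

In the paper, clusters like these are additionally tracked over time so that pedestrians occluding one another are not merged or double-counted.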

  14. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    Science.gov (United States)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

Detection, classification, and characterization are the keys to enhancing motorcycle safety, motorcycle operations, and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of automobiles. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with the FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera, and a unidirectional microphone array. The IR thermal camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes, which often show as bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist, who can be easily windowed out in the stereo disparity map; if the motorcyclist is detected through 3D body recognition, the motorcycle is detected. Microphones are used to detect motorcycles, which often produce low-frequency acoustic signals. All three microphones in the microphone array are placed at strategic locations on the sensor platform to minimize interference from background noise sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has excellent performance.

  15. Effect of camera temperature variations on stereo-digital image correlation measurements

    KAUST Repository

    Pan, Bing

    2015-11-25

    In laboratory and especially non-laboratory stereo-digital image correlation (stereo-DIC) applications, the extrinsic and intrinsic parameters of the cameras used in the system may change slightly due to the camera warm-up effect and possible variations in ambient temperature. Because these camera parameters are generally calibrated once prior to measurements and considered to be unaltered during the whole measurement period, the changes in these parameters unavoidably induce displacement/strain errors. In this study, the effect of temperature variations on stereo-DIC measurements is investigated experimentally. To quantify the errors associated with camera or ambient temperature changes, surface displacements and strains of a stationary optical quartz glass plate with near-zero thermal expansion were continuously measured using a regular stereo-DIC system. The results confirm that (1) temperature variations in the cameras and ambient environment have a considerable influence on the displacements and strains measured by stereo-DIC due to the slightly altered extrinsic and intrinsic camera parameters; and (2) the corresponding displacement and strain errors correlate with temperature changes. For the specific stereo-DIC configuration used in this work, the temperature-induced strain errors were estimated to be approximately 30–50 με/°C. To minimize the adverse effect of camera temperature variations on stereo-DIC measurements, two simple but effective solutions are suggested.

  16. Towards the Influence of a Car Windshield on Depth Calculation with a Stereo Camera System

    Science.gov (United States)

    Hanel, A.; Hoegner, L.; Stilla, U.

    2016-06-01

Stereo camera systems in cars are often used to estimate the distance of other road users from the car. This information is important for improving road safety. Such camera systems are typically mounted behind the windshield of the car. In this contribution, the influence of the windshield on the estimated distance values is analyzed. An offline stereo camera calibration is performed with a moving planar calibration target, and the relative orientation of the cameras is estimated in a standard bundle adjustment procedure. The calibration is performed for the identical stereo camera system with and without a windshield in between. The base lengths are derived from the relative orientation in both cases and compared, and distance values are calculated and analyzed. It can be shown that the difference between the base length values in the two cases is highly significant, with resulting effects on the distance calculation of up to half a meter.
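Why a small base-length change matters can be seen from the pinhole stereo relation Z = f * B / d: depth scales linearly with the baseline, so any calibration bias in B propagates proportionally into every distance. The numbers below are illustrative, not the paper's calibration values.

```python
def depth_from_disparity(f_px, base_m, disparity_px):
    """Pinhole stereo depth Z = f * B / d.

    f_px: focal length in pixels; base_m: base length in meters;
    disparity_px: measured disparity in pixels.
    """
    return f_px * base_m / disparity_px
```

For example, with an assumed f = 1000 px and d = 30 px, a baseline of 0.300 m gives Z = 10.000 m, while a windshield-induced shift to 0.301 m gives Z = 10.033 m: a 33 mm error from a 1 mm baseline change, growing linearly with range.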

  17. Augmented reality glass-free three-dimensional display with the stereo camera

    Science.gov (United States)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display based on a stereo camera, used for presenting parallax contents from different angles with a lenticular lens array, is proposed. Compared with the previous implementation of AR techniques based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can get abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on a stereo camera can realize AR glass-free 3D display, and that both the virtual objects and the real scene exhibit realistic and pronounced stereo performance.

  18. Stereo matching based on SIFT descriptor with illumination and camera invariance

    Science.gov (United States)

    Niu, Haitao; Zhao, Xunjie; Li, Chengjin; Peng, Xiang

    2010-10-01

Stereo matching is the process of finding corresponding points in two or more images. The description of interest points is a critical aspect of point correspondence and is vital to stereo matching. The SIFT descriptor has been proven to be more distinctive and robust than other local descriptors. However, the SIFT descriptor does not involve the color information of a feature point, which provides a powerfully distinguishable feature in matching tasks. Furthermore, in a real scene, image colors are affected by various geometric and radiometric factors, such as gamma correction and exposure, and these situations are very common in stereo images. For this reason, the color recorded by a camera is not a reliable cue, and the color consistency assumption is no longer valid between stereo images of real scenes. Hence, the performance of other SIFT-based stereo matching algorithms can be severely degraded under such radiometric variations. In this paper, we present a new improved SIFT stereo matching algorithm that is invariant to various radiometric variations between the left and right images. Unlike other improved SIFT stereo matching algorithms, we explicitly employ the color formation model, with its parameters of lighting geometry, illuminant color, and camera gamma, in the SIFT descriptor. First, we transform the input color images to a log-chromaticity color space, in which a linear relationship can be established. Then, we use a log-polar histogram to build three color-invariance components for the SIFT descriptor, so that our improved SIFT descriptor is invariant to changes in lighting geometry, illuminant color, and camera gamma between the left and right images. We can then match feature points between the two images and use the SIFT descriptor Euclidean distance as a geometric measure on our data sets to make matching more accurate and robust. Experimental results show that our method is superior to other SIFT-based algorithms, including conventional stereo matching algorithms, under various radiometric variations.
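The log-chromaticity transform the abstract relies on is easy to demonstrate: dividing by one channel cancels uniform intensity scaling, and taking logs turns a per-channel gamma exponent into a linear scale factor. A minimal sketch of that property (not the paper's full three-component descriptor):

```python
import math

def log_chromaticity(r, g, b):
    """Map an RGB value to log-chromaticity coordinates (log R/G, log B/G).

    Invariant to uniform intensity scaling; a gamma change r -> r**gamma
    multiplies both coordinates by gamma, i.e. acts linearly in this space.
    """
    return (math.log(r / g), math.log(b / g))
```

This linearity is what lets a descriptor built in this space remain stable across exposure and gamma differences between the left and right cameras.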

  19. Design and Implementation of a Novel Portable 360° Stereo Camera System with Low-Cost Action Cameras

    Science.gov (United States)

    Holdener, D.; Nebiker, S.; Blaser, S.

    2017-11-01

The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow rapid data acquisition and can be processed into georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multiview stereo camera system presented in this paper, 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D-printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data yields maximum deviations of 3 cm at typical indoor distances of 2-8 m. The automatic computation of coloured point clouds from the stereo pairs is also demonstrated.

  20. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    Directory of Open Access Journals (Sweden)

    Heegwang Kim

    2017-12-01

    Full Text Available Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
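The final recovery step described here follows the standard atmospheric scattering model I = J*t + A*(1 - t), inverted per pixel once the transmission t and atmospheric light A are known. A hedged per-pixel sketch (the `t0` floor is a common safeguard against division blow-up, assumed here rather than taken from the paper):

```python
def defog_pixel(i, a, t, t0=0.1):
    """Recover scene radiance J from hazy intensity I.

    Inverts I = J*t + A*(1 - t), i.e. J = (I - A*(1 - t)) / t, with a
    floor t0 on the transmission to keep the division stable.
    """
    t = max(t, t0)
    return (i - a * (1 - t)) / t
```

In the paper's pipeline, t comes from the stereo disparity map (refined iteratively) and A from color line theory; the sketch above only shows the closing inversion.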

  1. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    Science.gov (United States)

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.

  2. Quantifying geological processes on Mars - Results of the high resolution stereo camera (HRSC) on Mars express

    NARCIS (Netherlands)

    Jaumann, R.; Tirsch, D.; Hauber, E.; Ansan, V.; Di Achille, G.; Erkeling, G.; Fueten, F.; Head, J.; Kleinhans, M. G.; Mangold, N.; Michael, G. G.; Neukum, G.; Pacifici, A.; Platz, T.; Pondrelli, M.; Raack, J.; Reiss, D.; Williams, D. A.; Adeli, S.; Baratoux, D.; De Villiers, G.; Foing, B.; Gupta, S.; Gwinner, K.; Hiesinger, H.; Hoffmann, H.; Deit, L. Le; Marinangeli, L.; Matz, K. D.; Mertens, V.; Muller, J. P.; Pasckert, J. H.; Roatsch, T.; Rossi, A. P.; Scholten, F.; Sowe, M.; Voigt, J.; Warner, N.

    2015-01-01

This review summarizes the use of High Resolution Stereo Camera (HRSC) data as an instrumental tool and its application in the analysis of geological processes and landforms on Mars during the last 10 years of operation. High-resolution digital elevation models on a local to regional scale …

  3. APPLYING CCD CAMERAS IN STEREO PANORAMA SYSTEMS FOR 3D ENVIRONMENT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    A. Sh. Amini

    2012-07-01

Full Text Available Proper reconstruction of 3D environments is needed today by many organizations and applications. In addition to conventional methods, the use of stereo panoramas is an appropriate technique due to its simplicity, low cost, and the ability to view an environment the way it is in reality. This paper investigates the applicability of stereo CCD cameras for 3D reconstruction and presentation of an environment, and for geometric measurement within it. For this purpose, a rotating stereo panorama system was established using two CCDs with a base length of 350 mm and a DVR (digital video recorder) box. The stereo system was first calibrated using a 3D test field and then used to perform accurate measurements. The results of investigating the system in a real environment showed that although these cameras produce noisy images and do not have appropriate geometric stability, they can be easily synchronized and well controlled, and reasonable accuracy (about 40 mm for objects at 12 m distance from the camera) can be achieved.

  4. Interactive Augmentation of Live Images using a HDR Stereo Camera

    OpenAIRE

    Korn, Matthias; Stange, Maik; von Arb, Andreas; Blum, Lisa; Kreil, Michael; Kunze, Kathrin-Jennifer; Anhenn, Jens; Wallrath, Timo; Grosch, Thorsten

    2007-01-01

Adding virtual objects to real environments plays an important role in today's computer graphics: typical examples are virtual furniture in a real room and virtual characters in real movies. For a believable appearance, consistent lighting of the virtual objects is required. We present an augmented reality system that displays virtual objects with consistent illumination and shadows in the image of a simple webcam. We use two high dynamic range video cameras with fisheye lenses permanently rec...

  5. Observing Mercury: from Galileo to the stereo camera on the BepiColombo mission

    Science.gov (United States)

    Cremonese, Gabriele; Da Deppo, Vania; Naletto, Giampiero; Martellato, Elena; Debei, Stefano; Barbieri, Cesare; Bettanini, Carlo; Capria, Maria T.; Massironi, Matteo; Zaccariotto, Mirko

    2010-01-01

After having observed the planets from his house in Padova using his telescope, in January 1611 Galileo wrote to Giuliano de Medici that Venus moves around the Sun, as Mercury does. Forty years ago, Giuseppe Colombo, professor of Celestial Mechanics in Padova, made a decisive step toward clarifying the rotational period of Mercury. Today, scientists and engineers of the Astronomical Observatory of Padova and of the University of Padova, brought together in the Center for Space Studies and Activities (CISAS) named after Giuseppe Colombo, are working to realize a stereo camera (STC) that will be on board the European (ESA) and Japanese (JAXA) space mission BepiColombo, devoted to the observation and exploration of the innermost planet. This paper describes the stereo camera, one of the channels of the SIMBIOSYS instrument, which aims to produce global mapping of the surface with 3D images.

  6. Structural stereopsis - Potential for automatic stereo camera calibration

    Science.gov (United States)

    Boyer, Kim L.; Sotak, George E., Jr.; Schenk, Anton F.

    1991-01-01

    The paper describes the use of extended edge features as a source of primitives for structural stereopsis and considers the design of a system for autonomous camera calibration. It is shown that the structural approach permits greater use of spatial relational constraints, eliminating the coarse-to-fine tracking of point-based algorithms. Experimental results concerning matching and calibration on real images using Laplacian-of-Gaussian contour fragments as primitives in structural stereopsis are presented, and results in graph-theoretic representation and inexact matches, analytical photogrammetry, and other computer vision and image analysis problem domains are examined. Such a system might be used in aerial photogrammetry and cartography, and robotic vision systems; however, the system is still very much under development.

  7. The Beagle 2 Stereo Camera System: Scientific Objectives and Design Characteristics

    Science.gov (United States)

    Griffiths, A.; Coates, A.; Josset, J.; Paar, G.; Sims, M.

    2003-04-01

The Stereo Camera System (SCS) will provide wide-angle (48 degree) multi-spectral stereo imaging of the Beagle 2 landing site in Isidis Planitia with an angular resolution of 0.75 milliradians. Based on the SpaceX Modular Micro-Imager, the SCS is composed of twin cameras (each with a 1024 by 1024 pixel frame-transfer CCD) and twin filter wheel units (with a combined total of 24 filters). The primary mission objective is to construct a digital elevation model of the area in reach of the lander's robot arm. The SCS specifications and the following baseline studies are described: panoramic RGB colour imaging of the landing site, and panoramic multi-spectral imaging at 12 distinct wavelengths to study the mineralogy of the landing site; solar observations to measure water vapour absorption and the atmospheric dust optical density. Also envisaged are multi-spectral observations of Phobos & Deimos (observations of the moons relative to background stars will be used to determine the lander's location and orientation relative to the Martian surface), monitoring of the landing site to detect temporal changes, observation of the actions and effects of the other PAW experiments (including rock texture studies with a close-up lens) and collaborative observations with the Mars Express orbiter instrument teams. Due to be launched in May of this year, the total system mass is 360 g, the required volume envelope is 747 cm^3 and the average power consumption is 1.8 W. A 10 Mbit/s RS422 bus connects each camera to the lander common electronics.

  8. Stereo camera based virtual cane system with identifiable distance tactile feedback for the blind.

    Science.gov (United States)

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-06-13

In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger-pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and the finger's 3D pointing direction is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger-pointing gesture and the tactile distance feedback is perfectly identifiable to the blind.
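The real-time distance estimation rests on standard stereo triangulation; a minimal sketch (illustrative numbers, not the authors' calibration):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a matched point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 6 cm baseline, 30 px disparity
z = depth_from_disparity(30.0, 700.0, 0.06)   # -> 1.4 m
```

The closer the pointed-at object, the larger the disparity, so nearby obstacles are measured with the best relative precision.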

  9. Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind

    Directory of Open Access Journals (Sweden)

    Donghun Kim

    2014-06-01

In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger-pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and the finger's 3D pointing direction is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger-pointing gesture and the tactile distance feedback is perfectly identifiable to the blind.

  10. Full-parallax 3D display from stereo-hybrid 3D camera system

    Science.gov (United States)

    Hong, Seokmin; Ansari, Amir; Saavedra, Genaro; Martinez-Corral, Manuel

    2018-04-01

In this paper, we propose an innovative approach for producing microimages ready for display on an integral-imaging monitor. Our main contribution is the use of a stereo-hybrid 3D camera system to capture a 3D data pair and compose a denser point cloud. An intrinsic difficulty is that the hybrid sensors are dissimilar and must therefore be equalized. The equalized data are then computationally projected through a virtual pinhole array to generate an integral image. We illustrate this procedure with imaging experiments that provide microimages with enhanced quality. After projection of such microimages onto the integral-imaging monitor, 3D images are produced with large parallax and a wide viewing angle.
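Projecting a point cloud computationally through a virtual pinhole array can be sketched as follows (a toy model with hypothetical geometry and units, not the authors' rendering code):

```python
import numpy as np

def render_microimages(points, n_lens=5, pitch=1.0, gap=1.0, res=16):
    """Project a 3D point cloud through a virtual pinhole array.

    Each pinhole sits on a square grid at z = 0; its microimage lies a
    distance `gap` behind it. A point (X, Y, Z > 0) maps, through the
    pinhole at (px, py), to the sensor offset -gap*(X - px)/Z (same in y).
    All values here are illustrative only.
    """
    micro = np.zeros((n_lens, n_lens, res, res))
    half = (n_lens - 1) / 2.0
    for X, Y, Z in points:
        for i in range(n_lens):
            for j in range(n_lens):
                px, py = (i - half) * pitch, (j - half) * pitch
                u = -gap * (X - px) / Z        # horizontal sensor offset
                v = -gap * (Y - py) / Z        # vertical sensor offset
                # map the offset to a pixel index inside the microimage
                cu = int(round(u / pitch * res + res / 2))
                cv = int(round(v / pitch * res + res / 2))
                if 0 <= cu < res and 0 <= cv < res:
                    micro[i, j, cv, cu] += 1.0  # accumulate intensity
    return micro

micro = render_microimages([(0.0, 0.0, 10.0), (0.5, 0.2, 8.0)])
```

Each lenslet sees the scene from a slightly different direction, which is what gives the displayed image its full parallax.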

  11. A UAV-BASED LOW-COST STEREO CAMERA SYSTEM FOR ARCHAEOLOGICAL SURVEYS – EXPERIENCES FROM DOLICHE (TURKEY)

    Directory of Open Access Journals (Sweden)

    K. Haubeck

    2013-08-01

The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and geo-objects using UAV-attached digital small-frame cameras. These monoscopic cameras can acquire close-range aerial photographs, but when choppy or windy weather prevents an accurate nadir-waypoint flight, two single aerial images do not always achieve the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from slightly different angles at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited photo base of the stereo camera and the resulting base-height ratio, however, the accuracy of the DTM depends directly on the UAV flight altitude.
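The stated dependence of DTM accuracy on flight altitude follows from the standard stereo error model; a sketch with illustrative numbers (not the Doliche survey's parameters):

```python
def dtm_height_error(altitude_m, baseline_m, focal_px, matching_err_px=0.5):
    """First-order height error of a stereo DTM:
    dZ ~= Z^2 / (B * f) * d_px, i.e. the error grows with the square of
    the flying height and shrinks with the base-height ratio B/Z.
    All values are illustrative, not the actual survey parameters."""
    return altitude_m ** 2 / (baseline_m * focal_px) * matching_err_px

# A short 0.2 m stereo base at two flight altitudes:
e30 = dtm_height_error(30.0, 0.2, 2000.0)   # 1.125 m
e60 = dtm_height_error(60.0, 0.2, 2000.0)   # 4.5 m, i.e. 4x worse
```

Doubling the altitude quadruples the height error, which is why a fixed short photo base ties the achievable DTM accuracy so tightly to the flight altitude.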

  12. An evaluation of deep-sea benthic megafauna length measurements obtained with laser and stereo camera methods

    Science.gov (United States)

    Dunlop, Katherine M.; Kuhnz, Linda A.; Ruhl, Henry A.; Huffard, Christine L.; Caress, David W.; Henthorn, Richard G.; Hobson, Brett W.; McGill, Paul; Smith, Kenneth L.

    2015-02-01

The 25-year time series collected at Station M, ~4000 m deep on the Monterey Deep-sea Fan, has substantially improved understanding of the role of the deep-ocean benthic environment in the global carbon cycle. However, the role of deep-ocean benthic megafauna in carbon bioturbation, remineralization and sequestration is relatively unknown. It is important to gather both accurate and precise measurements of megafaunal community abundance, size distribution and biomass to further define their role in deep-sea carbon cycling and possible sequestration. This study describes initial results from a stereo camera system attached to a remotely operated vehicle and analyzed using the EventMeasure photogrammetric measurement software to estimate the density, length and biomass of 10 species of mobile epibenthic megafauna. Stereo length estimates were compared to those from a single video camera system equipped with sizing lasers and analyzed using the Monterey Bay Aquarium Research Institute's Video Annotation and Reference System. Both camera systems and their software achieved high measurement accuracy and precision for the megafauna species studied. The stereo image analysis took substantially longer than the video analysis, and the value of the EventMeasure software would be improved by greater automation of the analysis. The stereo system is less influenced by object orientation and height, and is potentially a useful tool to mount on an autonomous underwater vehicle and for measuring deep-sea pelagic animals where the use of lasers is not feasible.
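Stereo length measurement of this kind reduces to triangulating the two endpoints of an animal and taking their Euclidean distance; a minimal sketch for an ideal rectified pair (illustrative values, not the EventMeasure implementation):

```python
import math

def triangulate(xl, xr, y, focal_px, baseline_m):
    """Ideal rectified stereo pair: recover (X, Y, Z) in metres from
    matched pixel coordinates (xl, y) in the left and (xr, y) in the
    right image, measured from the principal point."""
    d = xl - xr                       # disparity in pixels
    Z = focal_px * baseline_m / d
    return (xl * Z / focal_px, y * Z / focal_px, Z)

def stereo_length(p_head, p_tail, focal_px, baseline_m):
    """Animal length = distance between two triangulated endpoints."""
    a = triangulate(*p_head, focal_px, baseline_m)
    b = triangulate(*p_tail, focal_px, baseline_m)
    return math.dist(a, b)

# Head at (100, 50), tail at (-100, -150) px; f = 1000 px, B = 0.1 m
L = stereo_length((100.0, 50.0, 0.0), (-100.0, -150.0, 0.0), 1000.0, 0.1)
# -> 0.4 m
```

Unlike paired lasers, which only scale objects lying in the laser plane, triangulated endpoints give a true 3D length regardless of the animal's orientation.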

  13. Joint depth map and color consistency estimation for stereo images with different illuminations and cameras.

    Science.gov (United States)

    Heo, Yong Seok; Lee, Kyoung Mu; Lee, Sang Uk

    2013-05-01

In this paper, we propose a method that infers both accurate depth maps and color-consistent stereo images for radiometrically varying stereo pairs. In general, stereo matching and enforcing color consistency between stereo images form a chicken-and-egg problem, since it is not trivial to achieve both goals simultaneously. Hence, we have developed an iterative framework in which the two processes boost each other. First, we transform the input color images to log-chromaticity color space, in which a linear relationship can be established when constructing a joint pdf of the transformed left and right color images. From this joint pdf, we estimate a linear function that relates the corresponding pixels in the stereo pair. Based on this linear property, we present a new stereo matching cost that combines mutual information (MI), the SIFT descriptor, and segment-based plane fitting to robustly find correspondences for stereo pairs that undergo radiometric variations. Meanwhile, we devise a Stereo Color Histogram Equalization (SCHE) method to produce color-consistent stereo image pairs, which in turn boosts the disparity map estimation. Experimental results show that our method produces both accurate depth maps and color-consistent stereo images, even for stereo pairs with severe radiometric differences.
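The log-chromaticity transform at the heart of the method can be sketched as follows (a standard formulation; the authors' exact normalization may differ):

```python
import numpy as np

def log_chromaticity(rgb):
    """Per-pixel log-chromaticity: log of each channel over the geometric
    mean of the three channels. A uniform exposure change (all channels
    scaled by the same factor) cancels exactly; the paper's key property
    is that per-channel gain/gamma changes become a *linear* relation
    between the two transformed images."""
    rgb = np.clip(np.asarray(rgb, float), 1e-6, None)   # avoid log(0)
    geo_mean = rgb.prod(axis=-1, keepdims=True) ** (1.0 / 3.0)
    return np.log(rgb / geo_mean)

img = np.random.rand(8, 8, 3) + 0.1
brighter = 2.0 * img          # simulated global exposure change
```

Because the transform cancels global exposure, the joint pdf of the left and right transformed images concentrates along a line, which is what makes the linear fit in the paper well-posed.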

  14. Hadriaca Patera: Insights into its volcanic history from Mars Express High Resolution Stereo Camera

    Science.gov (United States)

    Williams, David A.; Greeley, Ronald; Zuschneid, Wilhelm; Werner, Stephanie C.; Neukum, Gerhard; Crown, David A.; Gregg, Tracy K. P.; Gwinner, Klaus; Raitala, Jouko

    2007-10-01

    High Resolution Stereo Camera (HRSC) images of Hadriaca Patera, Mars, in combination with Mars Orbiter Camera (MOC), Mars Orbiter Laser Altimeter (MOLA), and Thermal Infrared Imaging System (THEMIS) data sets, reveal morphologic details about this volcano and enable determination of a chronology of the major geologic events through new cratering age assessments. New topographic measurements of the Hadriaca edifice were also made from a HRSC-based high-resolution (125 m) digital terrain model (DTM) and compared to the MOLA DTM. We find evidence for a complex formation and erosional history at Hadriaca Patera, in which volcanic, fluvial, and aeolian processes were all involved. Crater counts and associated model ages suggest that Hadriaca Patera formed from early shield-building volcanic (likely explosive pyroclastic) eruptions at ~3.7-3.9 Ga, with caldera formation no later than ~3.5 Ga. A variety of geologic activity occurred in the caldera and on the northern flank and plains at ~3.3-3.5 Ga, likely including pyroclastic flows (that partially filled a large crater NW of the caldera, and plains to the NE) and differential erosion/deposition by aeolian and/or fluvial activity. There were some resurfacing event(s) in the caldera and on the eastern flank at ~2.4-2.6 Ga, in which the eastern flank's morphology is indicative of fluvial erosion. The most recent dateable geologic activity on Hadriaca Patera includes caldera resurfacing by some process (most likely differential aeolian erosion/deposition) in the Amazonian Period, as recent as ~1.5 Ga. This is coincident with the resurfacing of the heavily channeled south flank by fluvial erosion. Unlike the Tharsis shields, major geologic activity ended at Hadriaca Patera over a billion years ago.

  15. 3D MODELLING OF AN INDOOR SPACE USING A ROTATING STEREO FRAME CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    J. Kang

    2016-06-01

Sophisticated indoor design and the growing development of urban architecture are making indoor spaces more complex, and these spaces connect readily to public transportation such as subway and train stations. These trends are shifting outdoor activities into indoor spaces. Constant technological development also raises expectations for services such as indoor location awareness. It is therefore necessary to develop a low-cost system that creates 3D models of indoor spaces for services based on such models. In this paper, we introduce a rotating stereo frame camera system with two cameras and use it to generate an indoor 3D model. First, we selected a test site and acquired images eight times during one day from different positions and heights of the system. The measurements were complemented by object control points obtained with a total station. Because the data were obtained from different positions and heights, various combinations of the data were possible, and several suitable combinations were chosen as input. Next, we generated a 3D model of the test site using commercial software with the chosen input data, and finally evaluated the accuracy of the indoor model generated from the selected input data. In summary, this paper introduces a low-cost system for acquiring indoor spatial data and generating 3D models from the images it acquires. Through these experiments, we confirm that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.

  16. 3D Modelling of an Indoor Space Using a Rotating Stereo Frame Camera System

    Science.gov (United States)

    Kang, J.; Lee, I.

    2016-06-01

Sophisticated indoor design and the growing development of urban architecture are making indoor spaces more complex, and these spaces connect readily to public transportation such as subway and train stations. These trends are shifting outdoor activities into indoor spaces. Constant technological development also raises expectations for services such as indoor location awareness. It is therefore necessary to develop a low-cost system that creates 3D models of indoor spaces for services based on such models. In this paper, we introduce a rotating stereo frame camera system with two cameras and use it to generate an indoor 3D model. First, we selected a test site and acquired images eight times during one day from different positions and heights of the system. The measurements were complemented by object control points obtained with a total station. Because the data were obtained from different positions and heights, various combinations of the data were possible, and several suitable combinations were chosen as input. Next, we generated a 3D model of the test site using commercial software with the chosen input data, and finally evaluated the accuracy of the indoor model generated from the selected input data. In summary, this paper introduces a low-cost system for acquiring indoor spatial data and generating 3D models from the images it acquires. Through these experiments, we confirm that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.

  17. Tyrrhena Patera: Geologic history derived from Mars Express High Resolution Stereo Camera

    Science.gov (United States)

    Williams, David A.; Greeley, Ronald; Werner, Stephanie C.; Michael, Greg; Crown, David A.; Neukum, Gerhard; Raitala, Jouko

    2008-11-01

    We used Mars Express High Resolution Stereo Camera images of the Tyrrhena Patera volcano to assign cratering model ages to material units defined in the Viking Orbiter-based geologic mapping. Cratering model ages are generally consistent with their stratigraphy. We can identify three key intervals of major activity at Tyrrhena Patera: (1) formation of the volcanic edifice in the Noachian Period, ~3.7-4.0 Ga, shortly following the Hellas impact (~4 Ga) and coincident with the formation of Hadriaca Patera (~3.9 Ga); (2) modification of the edifice and formation of the caldera rille and channels in the Hesperian Period, possibly extending into the Amazonian Period; and (3) a final stage of modification in the Late Amazonian Epoch, ~0.8-1.4 Ga. Early- to mid-Hesperian activity on Tyrrhena Patera is consistent with similar activity on Hadriaca Patera at ~3.3-3.7 Ga. The most recent dateable event on Tyrrhena Patera is modification on the upper shield, caldera rille, and channel floors at ~800 Ma. This coincidence of resurfacing in three units suggests a widespread process(es), which we speculate involved preferential (aeolian?) erosion of small craters on these flatter surfaces relative to the other units on the volcano. Alternatively, some combination of pyroclastic flow emplacement on the upper shield and fluvial activity in the caldera rille and channels, followed by differential aeolian erosion and deposition, could have produced the present surface. Regardless, major geologic resurfacing ended at Tyrrhena Patera nearly a billion years ago.

  18. SU-F-J-140: Using Handheld Stereo Depth Cameras to Extend Medical Imaging for Radiation Therapy Planning

    Energy Technology Data Exchange (ETDEWEB)

    Jenkins, C; Xing, L; Yu, S [Stanford University, Stanford, CA (United States)]

    2016-06-15

    Purpose: A correct body contour is essential for the accuracy of dose calculation in radiation therapy. While modern medical imaging technologies provide highly accurate representations of body contours, there are times when a patient’s anatomy cannot be fully captured or there is a lack of easy access to CT/MRI scanning. Recently, handheld cameras have emerged that are capable of performing three-dimensional (3D) scans of patient surface anatomy. By combining 3D camera and medical imaging data, the patient’s surface contour can be fully captured. Methods: A proof-of-concept system matches a patient surface model, created using a handheld stereo depth camera (DC), to the available areas of a body contour segmented from a CT scan. The matched surface contour is then converted to a DICOM structure and added to the CT dataset to provide additional contour information. In order to evaluate the system, a 3D model of a patient was created by segmenting the body contour with a treatment planning system (TPS) and fabricated with a 3D printer. A DC and associated software were used to create a 3D scan of the printed phantom. The surface created by the camera was then registered to a CT model that had been cropped to simulate missing scan data. The aligned surface was then imported into the TPS and compared with the originally segmented contour. Results: The RMS error for the alignment between the camera and cropped CT models was 2.26 mm. Mean distance between the aligned camera surface and ground truth model was −1.23 ± 2.47 mm. Maximum deviations were < 1 cm and occurred in areas of high concavity or where anatomy was close to the couch. Conclusion: The proof-of-concept study shows an accurate, easy and affordable method to extend medical imaging for radiation therapy planning using 3D cameras without additional radiation. Intel provided the camera hardware used in this study.
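The reported RMS alignment error can be reproduced with a standard rigid (Kabsch) registration of corresponding points; a minimal sketch (hypothetical correspondences, not the authors' registration pipeline):

```python
import numpy as np

def align_rms(src, dst):
    """Rigid (Kabsch) alignment of matched point sets, then RMS error.
    `src`/`dst` are (N, 3) arrays of corresponding points, e.g. sampled
    camera-surface vs. CT body-contour points (correspondences assumed
    already established, which real pipelines must solve, e.g. via ICP)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    aligned = (src - sc) @ R.T + dc
    return np.sqrt(((aligned - dst) ** 2).sum(1).mean())

pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]])
print(align_rms(pts, pts + 0.5))   # ~0: a pure translation is absorbed
```

Real surface registration must also handle partial overlap and unknown correspondences, which is why ICP-style iteration sits on top of this closed-form step.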

  19. An Optical Tracking System based on Hybrid Stereo/Single-View Registration and Controlled Cameras

    OpenAIRE

    Cortes , Guillaume; Marchand , Eric; Ardouin , Jérôme; Lécuyer , Anatole

    2017-01-01

Optical tracking is widely used in robotics applications such as unmanned aerial vehicle (UAV) localization. Unfortunately, such systems require many cameras and are, consequently, expensive. In this paper, we propose an approach to considerably increase the optical tracking volume without adding cameras. First, when the target is no longer visible to at least two cameras, we propose a single-view tracking mode which requires only one camera. Furthermore, we propos...

  20. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    Science.gov (United States)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate, and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models, which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. Such instruments should also be automated and robust, since they may be deployed in remote places and subjected to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as at most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover and atmospheric visibility that ensure the safety of pilots and planes. Instruments to measure these parameters are available on the market, but their relatively high cost keeps them out of many local aerodromes. In this work we present a new prototype which has recently been developed and deployed at a local aerodrome as a proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new development is a geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry for measuring the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after measuring the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that
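The two measurement principles can be sketched as follows (small-angle parallax for the simple zenith-pointing case, and the Koschmieder form of the Lambert-Beer contrast law; all values are illustrative, not the prototype's calibration):

```python
import math

def cloud_base_height(baseline_m, parallax_rad):
    """Cloud-base height from the angular parallax of the same cloud
    feature seen by two ground cameras separated by `baseline_m`,
    in the simple small-angle, zenith-pointing case: h ~= B / parallax."""
    return baseline_m / parallax_rad

def visibility_koschmieder(contrast, distance_m, threshold=0.02):
    """Visibility from the measured contrast of a dark object at a known
    distance. Beer-Lambert: C = exp(-sigma * d); visibility is the
    distance at which C falls to the (conventional) 2% threshold."""
    sigma = -math.log(contrast) / distance_m      # extinction coefficient
    return -math.log(threshold) / sigma

h = cloud_base_height(100.0, 0.05)          # 2000 m
vis = visibility_koschmieder(0.5, 1000.0)   # ~5644 m
```

The tilted-camera geometry described in the abstract replaces the simple `B / parallax` form with a full projective intersection, at the gain of measuring visibility and cloud-base wind with the same two cameras.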

  1. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    Science.gov (United States)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

Building fine 3D models from outdoor to indoor is becoming a necessity for protecting cultural tourism resources. However, existing 3D modelling technologies mainly focus on outdoor areas. A 3D model should in fact contain detailed descriptions of both its appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which provides a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes of an object or a scene, as well as physical property information such as materials and textures. On the basis of this information, a 3D model can be constructed automatically. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture in two typical cultural tourism zones, namely the Tibetan and Qiang ethnic minority villages in the Sichuan Jiuzhaigou Scenic Area and the Tujia ethnic minority villages in the Hubei Shennongjia Nature Reserve, providing a new method and platform for the protection of minority cultural characteristics, 3D reconstruction and cultural tourism.

  2. Preliminary results of the optical calibration for the stereo camera STC onboard the Bepicolombo mission

    Science.gov (United States)

    Da Deppo, V.; Martellato, E.; Simioni, E.; Borrelli, D.; Dami, M.; Aroldi, G.; Naletto, G.; Veltroni, I. Ficai; Cremonese, G.

    2017-11-01

BepiColombo is one of the cornerstone missions of the European Space Agency, dedicated to the exploration of the planet Mercury, and it is expected to be launched in July 2016. One of the BepiColombo instruments is the STereoscopic imaging Channel (STC), a channel of the Spectrometers and Imagers for MPO BepiColombo Integrated Observatory SYStem (SIMBIOSYS) suite: an integrated system for imaging and spectroscopic investigation of the Mercury surface. STC's main aim is the 3D global mapping of the entire surface of Mercury during the one-year nominal BepiColombo mission. The STC instrument is a novel stereo camera concept: two identical cameras (sub-channels) looking at +/-20° from nadir which share most of the optical components and the detector. Since the detector is a 2D matrix, STC can adopt the push-frame acquisition technique instead of the more common push-broom one. The camera can image in five different spectral bands: one panchromatic and four intermediate bands, in the range between 410 and 930 nm. To avoid mechanisms, the technical solution chosen for the filters is a single-substrate stripe-butted filter, in which glass pieces with different transmission properties are glued together and positioned just in front of the detector. The useful field of view (FoV) of each sub-channel, though divided into 3 strips, is about 5.3° x 3.2°. The optical design, a modified Schmidt layout, guarantees that over the whole FoV the diffraction ensquared energy inside one pixel of the detector is of the order of 70-80%. To effectively test and calibrate the overall STC channel, an ad hoc Optical Ground Support Equipment has been developed. Each sub-channel has to be calibrated separately, but the data of one sub-channel must also be easy to correlate with the other. In this paper, the experimental results obtained by the analysis of the data acquired during the preliminary on-ground

  3. Enhancing Positioning Accuracy in Urban Terrain by Fusing Data from a GPS Receiver, Inertial Sensors, Stereo-Camera and Digital Maps for Pedestrian Navigation

    Directory of Open Access Journals (Sweden)

    Pawel Strumillo

    2012-05-01

The paper presents an algorithm for estimating a pedestrian location in an urban environment. The algorithm is based on the particle filter and uses different data sources: a GPS receiver, inertial sensors, probability maps and a stereo camera. Inertial sensors are used to estimate a relative displacement of a pedestrian. A gyroscope estimates a change in the heading direction. An accelerometer is used to count a pedestrian’s steps and their lengths. The so-called probability maps help to limit GPS inaccuracy by imposing constraints on pedestrian kinematics, e.g., it is assumed that a pedestrian cannot cross buildings, fences etc. This limits position inaccuracy to ca. 10 m. Incorporation of depth estimates derived from a stereo camera that are compared to the 3D model of an environment has enabled further reduction of positioning errors. As a result, for 90% of the time, the algorithm is able to estimate a pedestrian location with an error smaller than 2 m, compared to an error of 6.5 m for a navigation based solely on GPS.

  4. Enhancing positioning accuracy in urban terrain by fusing data from a GPS receiver, inertial sensors, stereo-camera and digital maps for pedestrian navigation.

    Science.gov (United States)

    Przemyslaw, Baranski; Pawel, Strumillo

    2012-01-01

    The paper presents an algorithm for estimating a pedestrian location in an urban environment. The algorithm is based on the particle filter and uses different data sources: a GPS receiver, inertial sensors, probability maps and a stereo camera. Inertial sensors are used to estimate a relative displacement of a pedestrian. A gyroscope estimates a change in the heading direction. An accelerometer is used to count a pedestrian's steps and their lengths. The so-called probability maps help to limit GPS inaccuracy by imposing constraints on pedestrian kinematics, e.g., it is assumed that a pedestrian cannot cross buildings, fences etc. This limits position inaccuracy to ca. 10 m. Incorporation of depth estimates derived from a stereo camera that are compared to the 3D model of an environment has enabled further reduction of positioning errors. As a result, for 90% of the time, the algorithm is able to estimate a pedestrian location with an error smaller than 2 m, compared to an error of 6.5 m for a navigation based solely on GPS.
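One predict/update/resample cycle of such a particle filter can be sketched as follows (all parameter values are illustrative; the paper's map constraint, step model and stereo-depth update are richer):

```python
import math
import random

def particle_filter_step(particles, step_len, heading, gps, gps_sigma=6.5,
                         step_sigma=0.2, walkable=lambda x, y: True):
    """One cycle of a pedestrian particle filter (simplified sketch).
    Predict: move each particle by the accelerometer-counted step along
    the gyroscope heading, plus noise. Update: weight by a Gaussian GPS
    likelihood, zeroing particles that violate the map constraint (e.g.
    a position inside a building). Resample: draw a new particle set
    with probability proportional to the weights."""
    moved, weights = [], []
    for x, y in particles:
        length = step_len + random.gauss(0.0, step_sigma)
        x, y = x + length * math.cos(heading), y + length * math.sin(heading)
        d2 = (x - gps[0]) ** 2 + (y - gps[1]) ** 2
        moved.append((x, y))
        weights.append(math.exp(-d2 / (2 * gps_sigma ** 2))
                       if walkable(x, y) else 0.0)
    if sum(weights) == 0.0:        # every particle violated the map
        return moved
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(1)
particles = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
particles = particle_filter_step(particles, step_len=1.0, heading=0.0,
                                 gps=(1.0, 0.0))
```

The map constraint is what makes the filter powerful: zero-weighting particles inside buildings caps the GPS error even before the stereo-depth observations are fused in.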

  5. Quantitative analysis of digital outcrop data obtained from stereo-imagery using an emulator for the PanCam camera system for the ExoMars 2020 rover

    Science.gov (United States)

    Barnes, Robert; Gupta, Sanjeev; Gunn, Matt; Paar, Gerhard; Balme, Matt; Huber, Ben; Bauer, Arnold; Furya, Komyo; Caballo-Perucha, Maria del Pilar; Traxler, Chris; Hesina, Gerd; Ortner, Thomas; Banham, Steven; Harris, Jennifer; Muller, Jan-Peter; Tao, Yu

    2017-04-01

A key focus of planetary rover missions is to use panoramic camera systems to image outcrops along rover traverses, in order to characterise their geology in the search for ancient life. These data can be processed to create 3D point clouds of rock outcrops for quantitative analysis. The Mars Utah Rover Field Investigation (MURFI 2016) is a Mars rover field analogue mission run by the UK Space Agency (UKSA) in collaboration with the Canadian Space Agency (CSA). It took place between 22nd October and 13th November 2016 and consisted of a science team based in Harwell, UK, and a field team including an instrumented rover platform at the field site near Hanksville (Utah, USA). The Aberystwyth University PanCam Emulator 3 (AUPE3) camera system was used to collect stereo panoramas of the terrain the rover encountered during the field trials. Stereo imagery processed in PRoViP is rendered as Ordered Point Clouds (OPCs) in PRo3D, enabling the user to zoom, rotate and translate the 3D outcrop model. Interpretations can be digitised directly onto the 3D surface, and simple measurements can be taken of the dimensions of the outcrop and sedimentary features, including grain size. Dip and strike of bedding planes, stratigraphic and sedimentological boundaries and fractures are calculated within PRo3D from mapped bedding contacts and fracture traces. Rover-derived imagery is merged with UAV and orbital datasets to build semi-regional, multi-resolution 3D models of the area of operations for immersive analysis and contextual understanding. In simulation, AUPE3 was mounted on the rover mast, collecting 16 stereo panoramas over 9 'sols'. Five out-of-simulation datasets were collected in the Hanksville-Burpee Quarry. Stereo panoramas were processed using an automated pipeline, with data transfer through an FTP server. PRo3D has been used for visualisation and analysis of this stereo data. Features of interest in the area could be annotated, and their distances to the rover
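Deriving dip and strike from mapped bedding contacts amounts to fitting a plane to the digitised 3D points; a minimal sketch (not PRo3D's implementation; x east, y north, z up, right-hand-rule strike):

```python
import math
import numpy as np

def dip_and_strike(points):
    """Fit a plane to >= 3 points digitised on a bedding surface and
    return (dip, strike) in degrees. Dip is the angle of the plane from
    horizontal; strike follows the right-hand rule (dip direction lies
    90 degrees clockwise of strike)."""
    pts = np.asarray(points, float)
    centered = pts - pts.mean(0)
    # plane normal = right singular vector with the smallest singular value
    normal = np.linalg.svd(centered)[2][-1]
    if normal[2] < 0:                  # make the normal point upward
        normal = -normal
    dip = math.degrees(math.acos(min(1.0, normal[2])))
    # azimuth of the normal's horizontal component = dip direction
    dip_dir = math.degrees(math.atan2(normal[0], normal[1]))
    strike = (dip_dir - 90.0) % 360.0
    return dip, strike

# A plane z = y (rises to the north): dips 45 degrees toward the south
dip, strike = dip_and_strike([(0, 0, 0), (1, 0, 0), (0, 1, 1), (1, 1, 1)])
```

SVD plane fitting is the usual least-squares choice here because it treats all three coordinates symmetrically, unlike fitting z as a function of (x, y).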

  6. Hyper thin 3D edge measurement of honeycomb core structures based on the triangular camera-projector layout & phase-based stereo matching.

    Science.gov (United States)

    Jiang, Hongzhi; Zhao, Huijie; Li, Xudong; Quan, Chenggen

    2016-03-07

    We propose a novel hyper thin 3D edge measurement technique to measure the profile of the 3D outer envelope of honeycomb core structures. The width of the edges of the honeycomb core is less than 0.1 mm. We introduce a triangular layout design consisting of two cameras and one projector to measure hyper thin 3D edges and eliminate data interference from the walls. A phase-shifting algorithm and the multi-frequency heterodyne phase-unwrapping principle are applied for phase retrieval on the edges. A new stereo matching method based on phase mapping and the epipolar constraint is presented to solve the correspondence search on the edges and remove false matches that result in 3D outliers. Experimental results demonstrate the effectiveness of the proposed method for measuring the 3D profile of honeycomb core structures.
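    The N-step phase-shifting retrieval mentioned above can be sketched in a few lines (a minimal textbook illustration assuming equally spaced shifts of 2π/N; the function name and the test values are hypothetical, not from the paper):

```python
import numpy as np

def phase_shift_retrieve(images):
    """Recover the wrapped phase from N fringe images with equal phase shifts.

    Each image is modeled as I_k = A + B*cos(phi + 2*pi*k/N).
    Returns the wrapped phase phi in (-pi, pi] per pixel.
    """
    I = np.asarray(images, dtype=float)
    N = I.shape[0]
    k = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(I * np.sin(2 * np.pi * k / N), axis=0)
    den = np.sum(I * np.cos(2 * np.pi * k / N), axis=0)
    # Orthogonality of the shifts gives num = -(N/2)*B*sin(phi),
    # den = (N/2)*B*cos(phi), so the background A cancels out.
    return np.arctan2(-num, den)
```

    In the paper's multi-frequency heterodyne scheme, several such wrapped phase maps at different fringe frequencies would then be combined to unwrap the phase on the edges.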

  7. Deep vision: an in-trawl stereo camera makes a step forward in monitoring the pelagic community.

    Directory of Open Access Journals (Sweden)

    Melanie J Underwood

    Full Text Available Ecosystem surveys are carried out annually in the Barents Sea by Russia and Norway to monitor the spatial distribution of ecosystem components and to study population dynamics. One component of the survey is mapping the upper pelagic zone using a trawl towed at several depths. However, the current technique with a single codend does not provide the fine-scale spatial data needed to directly study species overlaps. An in-trawl camera system, Deep Vision, was mounted in front of the codend in order to acquire continuous images of all organisms passing. It was possible to identify and quantify most young-of-the-year fish (e.g. Gadus morhua, Boreogadus saida and Reinhardtius hippoglossoides and zooplankton, including Ctenophora, which are usually damaged in the codend. The system showed potential for measuring the length of small organisms and also recorded the vertical and horizontal positions where individuals were imaged. Young-of-the-year fish were difficult to identify when passing the camera at maximum range and to quantify at high densities. In addition, a large number of fish with damaged opercula were observed passing the Deep Vision camera during heaving, suggesting that individuals had become entangled in meshes farther forward in the trawl. This indicates that unknown numbers of fish are probably lost in forward sections of the trawl and that the heaving procedure may influence the number of fish entering the codend, with implications for abundance indices and understanding population dynamics. This study suggests modifications to the Deep Vision and the trawl to increase our understanding of the population dynamics.

  8. Stereo-panoramic Data

    KAUST Repository

    Cutchin, Steve

    2013-03-07

    Systems and methods for automatically generating three-dimensional panoramic images for use in various virtual reality settings are disclosed. One embodiment of the system includes a stereo camera capture device (SCD), a programmable camera controller (PCC) that rotates, orients, and controls the SCD, a robotic maneuvering platform (RMP), and a path and adaptation controller (PAC). In that embodiment, the PAC determines the movement of the system based on an original desired path and input gathered from the SCD during an image capture process.

  9. TRISH: the Toronto-IRIS Stereo Head

    Science.gov (United States)

    Jenkin, Michael R. M.; Milios, Evangelos E.; Tsotsos, John K.

    1992-03-01

    This paper introduces and motivates the design of a controllable stereo vision head. The Toronto IRIS stereo head (TRISH) is a binocular camera mount consisting of two fixed-focal-length color cameras with automatic gain control (AGC) forming a verging stereo pair. TRISH is capable of version (rotation of the eyes about the vertical axis so as to maintain a constant disparity), vergence (rotation of the eyes about the vertical axis so as to change the disparity), pan (rotation of the entire head about the vertical axis), and tilt (rotation of the eyes about the horizontal axis). One novel characteristic of the design is that the two cameras can rotate about their own optical axes (torsion). Torsion movement makes it possible to minimize the vertical component of the two-dimensional search which is associated with stereo processing in verging stereo systems.

  10. Approaches for Stereo Matching

    Directory of Open Access Journals (Sweden)

    Takouhi Ozanian

    1995-04-01

    Full Text Available This review focuses on the last decade's development of computational stereopsis for recovering three-dimensional information. The main components of stereo analysis are exposed: image acquisition and camera modeling, feature selection, feature matching and disparity interpretation. A brief survey is given of the well-known feature selection approaches, and the estimation parameters for this selection are mentioned. The difficulties in identifying corresponding locations in the two images are explained. Methods for effectively constraining the search for the correct solution of the correspondence problem are discussed, as are strategies for the whole matching process. Reasons for the occurrence of matching errors are considered. Some recently proposed approaches, employing new ideas in the modeling of stereo matching in terms of energy minimization, are described. Acknowledging the importance of computation time for real-time applications, special attention is paid to parallelism as a way to achieve the required level of performance. The development of trinocular stereo analysis as an alternative to the conventional binocular one is described. Finally, a classification based on the test images used for verification of stereo matching algorithms is supplied.

  11. Steering a simulated unmanned aerial vehicle using a head-slaved camera and HMD : effects of HMD quality, visible vehicle references and extended stereo cueing

    NARCIS (Netherlands)

    Vries, S.C. de; Padmos, P.

    1998-01-01

    Support for steering an Unmanned Aerial Vehicle by means of a head-slaved camera was investigated. The high-end n-Vision Datavisor HMD was found to yield better performance than the low-end Virtual IO i-glasses.

  12. Pemancar Am Stereo

    OpenAIRE

    Amir, Ardi

    2011-01-01

    This project describes an AM stereo transmitter whose signal can be received by both AM mono and AM stereo receivers. To obtain full broadcast quality, an AM stereo receiver should be used. The advantage of this transmitter is that its broadcasts are somewhat cleaner than AM mono, although the AM stereo transmitter is not hi-fi compared with an FM stereo transmitter. The transmitter consists of four main sections: an isolator, an audio matrix, a phase modulator and an amplitude modulator.

  13. Stationary Stereo-Video Camera Stations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Accurate and precise stock assessments are predicated on accurate and precise estimates of life history parameters, abundance, and catch across the range of the...

  14. Feasibility of remote evaporation and precipitation estimates. [by stereo images

    Science.gov (United States)

    Sadeh, W. Z.

    1974-01-01

    Remote sensing by means of stereo images obtained from flown cameras and scanners provides the potential to monitor the dynamics of pollutant mixing over large areas. Moreover, stereo technology may permit monitoring of pollutant concentration and mixing with sufficient detail to ascertain the structure of a polluted air mass. Consequently, stereo remote systems can be employed to supply data to set forth adequate regional standards on air quality. A method of remote sensing using stereo images is described. Preliminary results concerning the planar extent of a plume, based on comparison with ground measurements by an alternate method, e.g., a remote hot-wire anemometer technique, support the feasibility of using stereo remote sensing systems.

  15. Analysis of Disparity Error for Stereo Autofocus.

    Science.gov (United States)

    Yang, Cheng-Chieh; Huang, Shao-Kang; Shih, Kuang-Tsu; Chen, Homer H

    2018-04-01

    As more and more stereo cameras are installed on electronic devices, we are motivated to investigate how to leverage disparity information for autofocus. The main challenge is that stereo images captured for disparity estimation are subject to defocus blur unless the lenses of the stereo cameras are at the in-focus position. Therefore, it is important to investigate how the presence of defocus blur would affect stereo matching and, in turn, the performance of disparity estimation. In this paper, we give an analytical treatment of this fundamental issue of disparity-based autofocus by examining the relation between image sharpness and disparity error. A statistical approach that treats the disparity estimate as a random variable is developed. Our analysis provides a theoretical backbone for the empirical observation that, regardless of the initial lens position, disparity-based autofocus can bring the lens to the hill zone of the focus profile in one movement. The insight gained from the analysis is useful for the implementation of an autofocus system.
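    As a back-of-the-envelope illustration of the idea behind disparity-based autofocus, one can triangulate object depth from disparity and then apply the thin-lens equation to find the in-focus lens position. This is a generic sketch, not the paper's method, and the function name and all parameter values are hypothetical:

```python
def infocus_lens_position(disparity_px, baseline_m, focal_len_m, pixel_pitch_m):
    """Triangulate depth from stereo disparity, then use the thin-lens
    equation to get the in-focus sensor-to-lens distance (a sketch)."""
    d = disparity_px * pixel_pitch_m           # disparity on the sensor (m)
    Z = focal_len_m * baseline_m / d           # triangulated object depth (m)
    # Thin lens: 1/f = 1/Z + 1/v  =>  v = f*Z / (Z - f)
    return focal_len_m * Z / (Z - focal_len_m)
```

    The paper's point is that even a blurred disparity estimate yields a depth, and hence a lens position, close enough to land in the hill zone of the focus profile in a single movement.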

  16. Stereo Painting Display Devices

    Science.gov (United States)

    Shafer, David

    1982-06-01

    The Spanish Surrealist artist Salvador Dali has recently perfected the art of producing two paintings which are stereo pairs. Each painting is separately quite remarkable, presenting a subject with the vivid realism and clarity for which Dali is famous. Due to the surrealistic themes of Dali's art, however, the subjects presented with such naturalism only exist in his imagination. Despite this considerable obstacle to producing stereo art, Dali has managed to paint stereo pairs that display subtle differences of coloring and lighting, in addition to the essential perspective differences. These stereo paintings require a display method that will allow the viewer to experience stereo fusion, but which will not degrade the high quality of the art work. This paper gives a review of several display methods that seem promising in terms of economy, size, adjustability, and image quality.

  17. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    OpenAIRE

    Flavio Roberti; Juan Marcos Toibero; Carlos Soria; Raquel Frizera Vassallo; Ricardo Carelli

    2009-01-01

    This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras) for the autonomous navigation of a wheeled robot team. It is proposed a triangulation-based method for the 3D-posture computation of an unknown object by considering the collaborative hybrid stereo vision system, and this way to steer the robot team to a desired position relative to such object while maintaining a desired robot formation. Experimen...

  18. JAVA Stereo Display Toolkit

    Science.gov (United States)

    Edmonds, Karina

    2008-01-01

    This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It also supports anaglyph and special stereo hardware using the same API (application-program interface), and has the ability to simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that handles only the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, 3D cursor, or overlays, all of which can be built using this toolkit.
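    The color-anaglyph simulation described above, combining the red band of the left image with the green/blue bands of the right image, amounts to a one-line channel swap. A hedged illustration in Python/NumPy rather than the toolkit's Java (array names are hypothetical):

```python
import numpy as np

def color_anaglyph(left_rgb, right_rgb):
    """Simulate color stereo in anaglyph mode: red band from the left-eye
    image, green/blue bands from the right-eye image (red/blue glasses,
    red lens on the left)."""
    out = np.array(right_rgb, copy=True)   # keep green/blue from the right eye
    out[..., 0] = left_rgb[..., 0]         # red channel comes from the left eye
    return out
```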

  19. Parametric Coding of Stereo Audio

    Directory of Open Access Journals (Sweden)

    Erik Schuijers

    2005-06-01

    Full Text Available Parametric-stereo coding is a technique to efficiently code a stereo audio signal as a monaural signal plus a small amount of parametric overhead to describe the stereo image. The stereo properties are analyzed, encoded, and reinstated in a decoder according to spatial psychoacoustical principles. The monaural signal can be encoded using any (conventional) audio coder. Experiments show that the parameterized description of spatial properties enables a highly efficient, high-quality stereo audio representation.
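    The mono-plus-parameters idea can be illustrated with a toy encoder that outputs a downmix and a single inter-channel level difference per frame. This is a drastic simplification of real parametric-stereo coding (which uses several spatial parameters per frequency band); the function and its single-parameter model are purely illustrative:

```python
import numpy as np

def encode_parametric_stereo(left, right, eps=1e-12):
    """Toy parametric-stereo encoder: a mono downmix plus one
    inter-channel level difference (in dB) for the whole frame."""
    mono = 0.5 * (left + right)
    iid_db = 10.0 * np.log10((np.sum(left ** 2) + eps) /
                             (np.sum(right ** 2) + eps))
    return mono, iid_db
```

    A decoder would re-pan the mono signal according to the transmitted level difference, reinstating the stereo image at a fraction of the bit rate of two full channels.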

  20. Opportunity's Surroundings on Sol 1798 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11850 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11850 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo 180-degree view of the rover's surroundings during the 1,798th Martian day, or sol, of Opportunity's surface mission (Feb. 13, 2009). North is on top. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 111 meters (364 feet) southward on the preceding sol. Tracks from that drive recede northward in this view. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  1. Spirit Beside 'Home Plate,' Sol 1809 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11803 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11803 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images assembled into this stereo, 120-degree view southward after a short drive during the 1,809th Martian day, or sol, of Spirit's mission on the surface of Mars (February 3, 2009). By combining images from the left-eye and right-eye sides of the navigation camera, the view appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Spirit had driven about 2.6 meters (8.5 feet) that sol, continuing a clockwise route around a low plateau called 'Home Plate.' In this image, the rocks visible above the rover's solar panels are on the slope at the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  2. Single-sensor multispeaker listening with acoustic metamaterials.

    Science.gov (United States)

    Xie, Yangbo; Tsai, Tsung-Han; Konneker, Adam; Popa, Bogdan-Ioan; Brady, David J; Cummer, Steven A

    2015-08-25

    Designing a "cocktail party listener" that functionally mimics the selective perception of a human auditory system has been pursued over the past decades. By exploiting acoustic metamaterials and compressive sensing, we present here a single-sensor listening device that separates simultaneous overlapping sounds from different sources. The device with a compact array of resonant metamaterials is demonstrated to distinguish three overlapping and independent sources with 96.67% correct audio recognition. Segregation of the audio signals is achieved using physical layer encoding without relying on source characteristics. This hardware approach to multichannel source separation can be applied to robust speech recognition and hearing aids and may be extended to other acoustic imaging and sensing applications.

  3. An overview of the stereo correlation and triangulation formulations used in DICe.

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This document provides a detailed overview of the stereo correlation algorithm and the triangulation formulation used in the Digital Image Correlation Engine (DICe) to triangulate three-dimensional motion in space given the image coordinates and camera calibration parameters.
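    Triangulation from image coordinates and calibration parameters is commonly done with the standard linear (DLT) method sketched below. This is a generic textbook formulation under assumed 3×4 projection matrices, not necessarily the exact one used in DICe:

```python
import numpy as np

def triangulate(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation of one 3D point from a calibrated stereo
    pair: stack the homogeneous projection constraints and take the null
    vector via SVD.

    P_left, P_right: 3x4 projection matrices; x_left, x_right: (u, v) pixels.
    """
    A = np.vstack([
        x_left[0] * P_left[2] - P_left[0],
        x_left[1] * P_left[2] - P_left[1],
        x_right[0] * P_right[2] - P_right[0],
        x_right[1] * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous 3D point (smallest singular vector)
    return X[:3] / X[3]
```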

  4. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking.

    Science.gov (United States)

    Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku

    2017-08-09

    This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512 × 512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system.

  5. Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry

    Science.gov (United States)

    Kersten, J.; Rodehorst, V.

    2016-06-01

    Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimations usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem for monocular cameras is avoided when a light-weight stereo camera setup is used. However, also frame-to-frame stereo visual odometry (VO) approaches are known to accumulate pose estimation errors over time. Several valuable real-time capable techniques for outlier detection and drift reduction in frame-to-frame VO, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment, are available. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.

  6. Stereo and IMU-Assisted Visual Odometry for Small Robots

    Science.gov (United States)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320 × 240), or 8 fps at VGA (Video Graphics Array, 640 × 480) resolutions, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
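    The disparity-map computation by matching blocks along the epipolar line can be sketched naively as follows. This teaching sketch uses a sum-of-absolute-differences cost; it is nowhere near the optimized fixed-point DSP implementation described above, and all names and parameters are illustrative:

```python
import numpy as np

def disparity_map(left, right, max_disp=16, block=5):
    """Naive block-matching stereo: for each left-image pixel, choose the
    disparity that minimises the sum of absolute differences over a block."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```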

  7. Stereo-particle image velocimetry uncertainty quantification

    Science.gov (United States)

    Bhattacharya, Sayantan; Charonko, John J.; Vlachos, Pavlos P.

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. 
This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
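    The flavour of combining the two cameras' planar uncertainties into the out-of-plane component can be conveyed with a first-order propagation sketch. This is a simplified model of a standard stereo-PIV reconstruction with hypothetical camera angles, not the paper's full framework (which also propagates calibration and registration uncertainty):

```python
import numpy as np

def out_of_plane_uncertainty(sigma_u1, sigma_u2, alpha1, alpha2):
    """First-order propagation of the two cameras' planar displacement
    uncertainties into the out-of-plane component of the reconstruction
    w = (u1 - u2) / (tan(alpha1) - tan(alpha2))."""
    denom = np.tan(alpha1) - np.tan(alpha2)
    return np.sqrt(sigma_u1 ** 2 + sigma_u2 ** 2) / abs(denom)
```

    Small viewing angles make the denominator small, which is why the out-of-plane component is typically the least certain of the three.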

  8. Stereo-particle image velocimetry uncertainty quantification

    International Nuclear Information System (INIS)

    Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. 
This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric

  9. Study on flexible calibration method for binocular stereo vision system

    Science.gov (United States)

    Wang, Peng; Sun, Huashu; Sun, Changku

    2008-12-01

    Using a binocular stereo vision system for 3D coordinate measurement, system calibration is an important factor for measurement precision. In this paper we present a flexible calibration method for binocular stereo system calibration to estimate the intrinsic and extrinsic parameters of each camera and the exterior orientation of the turntable's axis, which is installed in front of the binocular stereo vision system to increase the system measurement range. Using a new flexible planar pattern with four big circles and an array of small circles as reference points for calibration, binocular stereo calibration is realized with Zhang's plane-based calibration method without specialized knowledge of 3D geometry. By putting a standard ball in front of the binocular stereo vision system, a sequence of pictures is taken simultaneously by both cameras at a few different rotation angles of the turntable. The reference points for axis calibration, i.e. the ball centers at each turntable rotation angle, are determined with the method of space intersection of two straight lines. Because of the rotation of the turntable, the trace of the ball is a circle whose center is on the turntable's axis, and all rotated ball centers lie in a plane perpendicular to the axis. The exterior orientation of the turntable axis is calibrated according to the calibration model. A measurement on a column bearing is performed in the experiment, with a final measurement precision better than 0.02 mm.
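    The geometric idea, that the rotated ball centers lie on a circle whose plane is perpendicular to the turntable axis, can be sketched with a least-squares fit. This is a generic plane-plus-circle fit under that assumption, not the paper's calibration model; the function name is hypothetical:

```python
import numpy as np

def turntable_axis(ball_centers):
    """Estimate the turntable axis from 3D ball-center positions at several
    rotation angles: the axis direction is the normal of the centers'
    best-fit plane, and the axis passes through the fitted circle center."""
    P = np.asarray(ball_centers, dtype=float)
    centroid = P.mean(axis=0)
    # Plane normal: singular vector of the centred points with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(P - centroid)
    normal = Vt[-1]
    # Project the points into the plane and fit a circle center
    # (algebraic fit: 2*cx*x + 2*cy*y + c = x^2 + y^2).
    e1, e2 = Vt[0], Vt[1]
    q = np.column_stack([(P - centroid) @ e1, (P - centroid) @ e2])
    A = np.column_stack([2 * q, np.ones(len(q))])
    b = (q ** 2).sum(axis=1)
    (cx, cy, _), *_ = np.linalg.lstsq(A, b, rcond=None)
    center3d = centroid + cx * e1 + cy * e2
    return center3d, normal
```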

  10. GPU-based real-time trinocular stereo vision

    Science.gov (United States)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which benefits applications such as distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
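    The winner-take-all fusion step can be sketched in two lines: sum the matching-cost volumes from the different camera pairs and pick, per pixel, the disparity with the minimum total cost. A minimal illustration of the idea (names and shapes are assumptions, not the paper's GPU code):

```python
import numpy as np

def wta_fuse(cost_volumes):
    """Winner-take-all disparity fusion across camera pairs of a
    trinocular rig. Each cost volume has shape (H, W, num_disparities)."""
    total = np.sum(cost_volumes, axis=0)      # aggregate cost per disparity
    return np.argmin(total, axis=-1)          # per-pixel winning disparity
```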

  11. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    Directory of Open Access Journals (Sweden)

    Flavio Roberti

    2010-02-01

    Full Text Available This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras for the autonomous navigation of a wheeled robot team. It is proposed a triangulation-based method for the 3D-posture computation of an unknown object by considering the collaborative hybrid stereo vision system, and this way to steer the robot team to a desired position relative to such object while maintaining a desired robot formation. Experimental results with real mobile robots are included to validate the proposed vision system.

  12. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    Directory of Open Access Journals (Sweden)

    Flavio Roberti

    2009-12-01

    Full Text Available This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras for the autonomous navigation of a wheeled robot team. It is proposed a triangulation-based method for the 3D-posture computation of an unknown object by considering the collaborative hybrid stereo vision system, and this way to steer the robot team to a desired position relative to such object while maintaining a desired robot formation. Experimental results with real mobile robots are included to validate the proposed vision system.

  13. Pengukuran Jarak Berbasiskan Stereo Vision

    Directory of Open Access Journals (Sweden)

    Iman Herwidiana Kartowisastro

    2010-12-01

    Full Text Available Measuring the distance to an object can be conducted in a variety of ways, including by making use of distance measuring sensors such as ultrasonic sensors, or using an approach based on a vision system. The latter has advantages in terms of flexibility, since it places essentially no restrictions on the materials of the monitored object, but it also has its own difficulties associated with object orientation and the state of the room where the object is located. To overcome this problem, this study examines the possibility of using stereo vision to measure the distance to an object. The system was developed starting from image extraction, extraction of the characteristic features of the objects contained in the image, and the visual distance measurement process, with two separate cameras placed 70 cm apart. The measured object can be in the range of 50 cm - 130 cm, with a percentage error of 5.53%. Lighting conditions (homogeneity and intensity) have a great influence on the accuracy of the measurement results.
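    The pinhole relation behind such a stereo range measurement is Z = f·B/d: depth is the focal length (in pixels) times the baseline divided by the disparity. A minimal sketch; the 70 cm baseline is from the abstract, while the focal-length and disparity values in the test are hypothetical:

```python
def stereo_distance_cm(focal_px, baseline_cm, disparity_px):
    """Classic stereo range equation Z = f * B / d, with the focal length f
    in pixels, baseline B in cm and disparity d in pixels."""
    return focal_px * baseline_cm / disparity_px
```

    The inverse dependence on disparity also explains the limited working range reported: beyond some distance the disparity shrinks toward the pixel noise floor and the error grows quickly.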

  14. The STEREO Mission

    CERN Document Server

    2008-01-01

    The STEREO mission uses twin heliospheric orbiters to track solar disturbances from their initiation to 1 AU. This book documents the mission, its objectives, the spacecraft that execute it and the instruments that provide the measurements, both remote sensing and in situ. This mission promises to unlock many of the mysteries of how the Sun produces what has come to be known as space weather.

  15. Multispectral and stereo imaging on Mars

    Science.gov (United States)

    Levinthal, E. C.; Huck, F. O.

    1976-01-01

    Relevant aspects of the design and function of the two-window Viking Landing Camera system are described, with particular reference to some results of its operation on Mars during the Viking mission. A major feature of the system is that the optical tunnel between the lens and the photosensor array contains a multiaperture baffle designed to reduce veiling glare and to attenuate radio frequency interference from the lander antennas. The principle of operation of the contour mode is described. The accuracy is limited by the stereo base, resolution of camera picture elements, and geometric calibration. To help determine the desirability as well as the safety of possible sample sites, use is made of both radiometric and photogrammetric information for each picture element to combine high-resolution pictures with low-resolution color pictures of the same area. Explanatory photographs supplement the text.

  16. Pancam Peek into 'Victoria Crater' (Stereo)

    Science.gov (United States)

    2006-01-01

    [figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA08776 [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA08776 A drive of about 60 meters (about 200 feet) on the 943rd Martian day, or sol, of Opportunity's exploration of Mars' Meridiani Planum region (Sept. 18, 2006) brought the NASA rover to within about 50 meters (about 160 feet) of the rim of 'Victoria Crater.' This crater has been the mission's long-term destination for the past 21 Earth months. Opportunity reached a location from which the cameras on top of the rover's mast could begin to see into the interior of Victoria. This stereo anaglyph was made from frames taken on sol 943 by the panoramic camera (Pancam) to offer a three-dimensional view when seen through red-blue glasses. It shows the upper portion of interior crater walls facing toward Opportunity from up to about 850 meters (half a mile) away. The amount of vertical relief visible at the top of the interior walls from this angle is about 15 meters (about 50 feet). The exposures were taken through a Pancam filter selecting wavelengths centered on 750 nanometers. Victoria Crater is about five times wider than 'Endurance Crater,' which Opportunity spent six months examining in 2004, and about 40 times wider than 'Eagle Crater,' where Opportunity first landed. The great lure of Victoria is the expectation that a thick stack of geological layers will be exposed in the crater walls, potentially several times the thickness that was previously studied at Endurance and therefore, potentially preserving several times the historical record.

  17. Opportunity's Surroundings After Sol 1820 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11841 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11841 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,820th to 1,822nd Martian days, or sols, of Opportunity's surface mission (March 7 to 9, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 20.6 meters toward the northwest on Sol 1820 before beginning to take the frames in this view. Tracks from that drive recede southwestward. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and small exposures of lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  18. Opportunity's Surroundings on Sol 1818 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11846 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11846 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,818th Martian day, or sol, of Opportunity's surface mission (March 5, 2009). South is at the center; north at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 80.3 meters (263 feet) southward earlier on that sol. Tracks from the drive recede northward in this view. The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  19. Three-Dimensional Stereo Reconstruction and Sensor Registration With Application to the Development of a Multi-Sensor Database

    National Research Council Canada - National Science Library

    Oberle, William

    2002-01-01

    ... and the transformations between the camera system and other sensor, vehicle, and world coordinate systems. Results indicate that the measured stereo and ladar data are susceptible to large errors that affect the accuracy of the calculated transformations.

  20. Surrounding Moving Obstacle Detection for Autonomous Driving Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Hao Sun

    2013-06-01

    Full Text Available Detection and tracking surrounding moving obstacles such as vehicles and pedestrians are crucial for the safety of mobile robotics and autonomous vehicles. This is especially the case in urban driving scenarios. This paper presents a novel framework for surrounding moving obstacles detection using binocular stereo vision. The contributions of our work are threefold. Firstly, a multiview feature matching scheme is presented for simultaneous stereo correspondence and motion correspondence searching. Secondly, the multiview geometry constraint derived from the relative camera positions in pairs of consecutive stereo views is exploited for surrounding moving obstacles detection. Thirdly, an adaptive particle filter is proposed for tracking of multiple moving obstacles in surrounding areas. Experimental results from real-world driving sequences demonstrate the effectiveness and robustness of the proposed framework.

  1. Hearing symptoms personal stereos.

    Science.gov (United States)

    da Luz, Tiara Santos; Borja, Ana Lúcia Vieira de Freitas

    2012-04-01

     Practical and portable, personal stereos have become almost indispensable everyday accessories. Studies show that portable music players can cause long-term hearing damage to those who listen to music at high volume for prolonged periods. Objective: to determine the prevalence of auditory symptoms among users of personal music players and to learn their habits of use. Method: prospective observational cross-sectional study carried out in three educational institutions in the city of Salvador, BA, two public and one private. 400 students of both sexes, aged between 14 and 30, who reported the habit of using personal stereos answered the questionnaire. Results: the most prevalent symptoms were hyperacusis (43.5%), aural fullness (30.5%) and tinnitus (27.5%), tinnitus being the most frequent symptom in the youngest group. As for daily habits: 62.3% reported frequent use, 57% high intensities, and 34% prolonged periods. An inverse relation was found between exposure time and age group (p = 0.000), and a direct relation with the prevalence of tinnitus. Conclusion: although they admit knowing the damage that exposure to high-intensity sound can cause to hearing, the daily habits of these young people show inadequate use of portable stereos, characterized by long exposure periods, high intensities, frequent use and a preference for insert earphones. The high prevalence of symptoms after use suggests a greater risk to the hearing of these young people.

  2. Hearing symptoms personal stereos

    Directory of Open Access Journals (Sweden)

    Tiara Santos da Luz

    2012-01-01

    Full Text Available Introduction: Practical and portable, personal stereos have become almost indispensable everyday accessories. Studies show that portable music players can cause long-term hearing damage to those who listen to music at high volume for prolonged periods. Objective: to determine the prevalence of auditory symptoms among users of personal music players and to learn their habits of use. Method: prospective observational cross-sectional study carried out in three educational institutions in the city of Salvador, BA, two public and one private. 400 students of both sexes, aged between 14 and 30, who reported the habit of using personal stereos answered the questionnaire. Results: the most prevalent symptoms were hyperacusis (43.5%), aural fullness (30.5%) and tinnitus (27.5%), tinnitus being the most frequent symptom in the youngest group. As for daily habits: 62.3% reported frequent use, 57% high intensities, and 34% prolonged periods. An inverse relation was found between exposure time and age group (p=0.000), and a direct relation with the prevalence of tinnitus. Conclusion: although they admit knowing the damage that exposure to high-intensity sound can cause to hearing, the daily habits of these young people show inadequate use of portable stereos, characterized by long exposure periods, high intensities, frequent use and a preference for insert earphones. The high prevalence of symptoms after use suggests a greater risk to the hearing of these young people.

  3. Passive Night Vision Sensor Comparison for Unmanned Ground Vehicle Stereo Vision Navigation

    Science.gov (United States)

    Owens, Ken; Matthies, Larry

    2000-01-01

    One goal of the "Demo III" unmanned ground vehicle program is to enable autonomous nighttime navigation at speeds of up to 10 m.p.h. To perform obstacle detection at night with stereo vision will require night vision cameras that produce adequate image quality for the driving speeds, vehicle dynamics, obstacle sizes, and scene conditions that will be encountered. This paper analyzes the suitability of four classes of night vision cameras (3-5 micrometer cooled FLIR, 8-12 micrometer cooled FLIR, 8-12 micrometer uncooled FLIR, and image intensifiers) for night stereo vision, using criteria based on stereo matching quality, image signal to noise ratio, motion blur and synchronization capability. We find that only cooled FLIRs will enable stereo vision performance that meets the goals of the Demo III program for nighttime autonomous mobility.

  4. Congruence analysis of point clouds from unstable stereo image sequences

    Directory of Open Access Journals (Sweden)

    C. Jepping

    2014-06-01

    Full Text Available This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focusses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for a reliable handling of occlusions and other disturbances that may occur. The second objective of this research is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another by using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
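The RANSAC-based congruence analysis described here can be sketched as follows: repeatedly fit a 3D similarity transformation to random point groups and keep the fit with the most inliers, which flags the stable (congruent) points. This is a generic sketch, not the paper's implementation; the point data, thresholds, and iteration count are invented.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares 3D similarity (scale s, rotation R, translation t)
    mapping src -> dst, via Umeyama's closed-form method."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0                      # guard against reflections
    R = U @ D @ Vt
    var_src = (A ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

def ransac_congruence(p0, p1, iters=200, tol=0.05, rng=None):
    """Find the largest point set that moves congruently between two epochs."""
    rng = np.random.default_rng(rng)
    best = np.zeros(len(p0), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(p0), size=4, replace=False)
        s, R, t = similarity_transform(p0[idx], p1[idx])
        resid = np.linalg.norm(p1 - (s * p0 @ R.T + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Synthetic example: 20 stable points plus 5 "deformed" ones (all made up).
rng = np.random.default_rng(0)
p0 = rng.uniform(-1, 1, size=(25, 3))
p1 = p0 + np.array([0.1, 0.0, 0.0])          # rigid shift between epochs
p1[20:] += rng.uniform(0.3, 0.6, size=(5, 3))  # deformation on the last 5
stable = ransac_congruence(p0, p1, rng=1)
print(stable.sum())
```

The stable mask then drives the exterior-orientation correction: only congruent points are used to re-estimate the camera poses.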

  5. Operational modal analysis on a VAWT in a large wind tunnel using stereo vision technique

    DEFF Research Database (Denmark)

    Najafi, Nadia; Schmidt Paulsen, Uwe

    2017-01-01

    2 × 10^5. VAWT dynamics were simulated using HAWC2. The stereo vision results and HAWC2 simulations agree within 4% except for modes 3 and 4. The high aerodynamic damping of one of the blades, in flatwise motion, would explain the gap between those two modes from simulation and stereo vision. A set ... in picking very closely spaced modes. Finally, the uncertainty of the 3D displacement measurement was evaluated by applying a generalized method based on the law of error propagation, for a linear camera model of the stereo vision system.

  6. Stereoscopic camera design

    Science.gov (United States)

    Montgomery, David J.; Jones, Christopher K.; Stewart, James N.; Smith, Alan

    2002-05-01

    It is clear from the literature that the majority of work in stereoscopic imaging is directed towards the development of modern stereoscopic displays. As costs come down, wider public interest in this technology is expected to increase. This new technology would require new methods of image formation. Advances in stereo computer graphics will of course lead to the creation of new stereo computer games, graphics in films etc. However, the consumer would also like to see real-world stereoscopic images, pictures of family, holiday snaps etc. Such scenery would have wide ranges of depth to accommodate and would need also to cope with moving objects, such as cars, and in particular other people. Thus, the consumer acceptance of auto/stereoscopic displays and 3D in general would be greatly enhanced by the existence of a quality stereoscopic camera. This paper will cover an analysis of existing stereoscopic camera designs and show that they can be categorized into four different types, with inherent advantages and disadvantages. A recommendation is then made with regard to 3D consumer still and video photography. The paper will go on to discuss this recommendation and describe its advantages and how it can be realized in practice.

  7. The MVACS Robotic Arm Camera

    Science.gov (United States)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  8. Stereo optical guidance system for control of industrial robots

    Science.gov (United States)

    Powell, Bradley W. (Inventor); Rodgers, Mike H. (Inventor)

    1992-01-01

    A device for the generation of basic electrical signals which are supplied to a computerized processing complex for the operation of industrial robots. The system includes a stereo mirror arrangement for the projection of views from opposite sides of a visible indicia formed on a workpiece. The views are projected onto independent halves of the retina of a single camera. The camera retina is of the CCD (charge-coupled-device) type and is therefore capable of providing signals in response to the image projected thereupon. These signals are then processed for control of industrial robots or similar devices.

  9. A Self-Assessment Stereo Capture Model Applicable to the Internet of Things

    Science.gov (United States)

    Lin, Yancong; Yang, Jiachen; Lv, Zhihan; Wei, Wei; Song, Houbing

    2015-01-01

    The realization of the Internet of Things greatly depends on the information communication among physical terminal devices and informationalized platforms, such as smart sensors, embedded systems and intelligent networks. Playing an important role in information acquisition, sensors for stereo capture have gained extensive attention in various fields. In this paper, we concentrate on promoting such sensors in an intelligent system with self-assessment capability to deal with the distortion and impairment in long-distance shooting applications. The core design is the establishment of the objective evaluation criteria that can reliably predict shooting quality with different camera configurations. Two types of stereo capture systems—toed-in camera configuration and parallel camera configuration—are taken into consideration respectively. The experimental results show that the proposed evaluation criteria can effectively predict the visual perception of stereo capture quality for long-distance shooting. PMID:26308004

  10. Modeling and testing of geometric processing model based on double baselines stereo photogrammetric system

    Science.gov (United States)

    Li, Yingbo; Zhao, Sisi; Hu, Bin; Zhao, Haibo; He, Jinping; Zhao, Xuemin

    2017-10-01

    Aimed at key problems in 1:5000-scale space stereo mapping and the shortage of surveying capability over urban areas, and given that the performance indices and surveying systems of existing domestic optical mapping satellites cannot meet the demands of large-scale stereo mapping, it is urgent to develop a very-high-accuracy space photogrammetric satellite system at 1:5000 scale (or larger). A new surveying system, a double-baseline stereo photogrammetric mode combining a linear-array sensor and an area-array sensor, is proposed to address the barriers, distortions and radiation differences that complex ground objects pose for existing space stereo mapping technology. Based on the collinearity equations, the double-baseline stereo photogrammetric method and a combined adjustment model are presented, systematic error compensation for this model is analyzed, and the positioning precision of double-baseline stereo photogrammetry is studied on both simulated images and images acquired under laboratory conditions. The laboratory tests showed that the camera geometric calibration accuracy is better than 1 μm, and the height positioning accuracy is better than 1.5 GSD with GCPs. The results showed that the mode combining one linear-array sensor and one area-array sensor had higher positioning precision. Exploring this new system for 1:5000-scale very-high-accuracy space stereo mapping can provide new technologies and strategies for achieving domestic very-high-accuracy space stereo mapping.
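The collinearity equations underlying the adjustment model can be sketched as a simple projection; sign conventions vary between photogrammetric texts, and the camera pose and ground point below are invented for illustration.

```python
import numpy as np

def collinearity_project(X, X0, R, f):
    """Collinearity equations: image coordinates (x, y) of ground point X seen
    by a camera at position X0 with rotation matrix R and principal distance f."""
    u = R @ (X - X0)                 # ground point in the camera frame
    return -f * u[0] / u[2], -f * u[1] / u[2]

# Hypothetical camera: at the origin, un-rotated, principal distance 0.1 m.
x, y = collinearity_project(np.array([1.0, 2.0, 10.0]),
                            np.zeros(3), np.eye(3), 0.1)
print(x, y)
```

A combined bundle adjustment linearizes these equations for every observation from both sensors and solves for poses and ground coordinates jointly.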

  11. Processing Earth Observing images with Ames Stereo Pipeline

    Science.gov (United States)

    Beyer, R. A.; Moratto, Z. M.; Alexandrov, O.; Fong, T.; Shean, D. E.; Smith, B. E.

    2013-12-01

    ICESat with its GLAS instrument provided valuable elevation measurements of glaciers. The loss of this spacecraft created a demand for alternative elevation sources. In response, we have improved our Ames Stereo Pipeline (ASP) software (version 2.1+) to ingest imagery from Earth-observing satellites in addition to its support of planetary missions. This gives the open-source community a free method to generate digital elevation models (DEMs) from DigitalGlobe stereo imagery, and alternatively from other cameras using RPC camera models. Here we present details of the software. ASP is a collection of utilities written in C++ and Python that implement stereogrammetry. It contains utilities to manipulate DEMs, project imagery, create KML image quad-trees, and perform simplistic 3D rendering. However, its primary application is the creation of DEMs. This is achieved by matching every pixel between the images of a stereo observation via a hierarchical coarse-to-fine template matching method. Matched pixels between images represent a single feature that is triangulated using each image's camera model. The collection of triangulated features represents a point cloud that is then grid-resampled to create a DEM. In order for ASP to match pixels/features between images, it requires a search range defined in pixel units. Total processing time is proportional to the area of the first image being matched multiplied by the area of the search range. An incorrect search range causes repeated false-positive matches at each level of the image pyramid and excessive processing times with no valid DEM output. Therefore our system contains automatic methods for deducing what the correct search range should be. In addition, we provide options for reducing the overall search range by applying affine epipolar rectification, a homography transform, or by map-projecting against a prior existing low-resolution DEM.
Depending on the size of the images, parallax, and image
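The hierarchical coarse-to-fine idea that keeps ASP's search range small can be illustrated in one dimension: match heavily decimated copies first, then double the estimate and refine at each finer level. This is a generic sketch of the pyramid principle, not ASP code; the signals and search radius are invented.

```python
import numpy as np

def match_offset(a, b, center, radius):
    """Best integer offset of b relative to a within [center-radius, center+radius],
    scored by mean squared difference over the overlapping samples."""
    best, best_err = center, np.inf
    for off in range(center - radius, center + radius + 1):
        lo, hi = max(0, off), min(len(a), len(b) + off)
        if hi - lo < 8:                    # require a minimum overlap
            continue
        err = np.mean((a[lo:hi] - b[lo - off:hi - off]) ** 2)
        if err < best_err:
            best, best_err = off, err
    return best

def coarse_to_fine_offset(a, b, levels=3, radius=6):
    """Estimate a large shift while only ever searching a small window:
    match decimated copies first, then double the estimate and refine,
    so run time stays proportional to the small per-level radius."""
    est = 0
    for lvl in range(levels, -1, -1):
        est = match_offset(a[::2 ** lvl], b[::2 ** lvl], est, radius)
        if lvl:
            est *= 2
    return est

# Synthetic signal: a smooth random walk and a copy shifted by 37 samples.
rng = np.random.default_rng(7)
x = np.cumsum(rng.standard_normal(512))
shifted = np.roll(x, 37)
print(coarse_to_fine_offset(x, shifted))
```

A brute-force search over the full shift range would cost far more per pixel; the pyramid reaches the same answer while only ever scanning a 13-sample window per level.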

  12. Wind-Sculpted Vicinity After Opportunity's Sol 1797 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11820 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11820 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 111 meters (364 feet) on the 1,797th Martian day, or sol, of Opportunity's surface mission (Feb. 12, 2009). North is at the center; south at both ends. This view is the right-eye member of a stereo pair presented as a cylindrical-perspective projection with geometric seam correction. Tracks from the drive recede northward across dark-toned sand ripples in the Meridiani Planum region of Mars. Patches of lighter-toned bedrock are visible on the left and right sides of the image. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). This view is presented as a cylindrical-perspective projection with geometric seam correction.

  13. Time for a Change; Spirit's View on Sol 1843 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11973 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11973 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, full-circle view of the rover's surroundings during the 1,843rd Martian day, or sol, of Spirit's surface mission (March 10, 2009). South is in the middle. North is at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 36 centimeters downhill earlier on Sol 1854, but had not been able to get free of ruts in soft material that had become an obstacle to getting around the northeastern corner of the low plateau called 'Home Plate.' The Sol 1854 drive, following two others in the preceding four sols that also achieved little progress in the soft ground, prompted the rover team to switch to a plan of getting around Home Plate counterclockwise, instead of clockwise. The drive direction in subsequent sols was westward past the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  14. View Ahead After Spirit's Sol 1861 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11977 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11977 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this stereo, 210-degree view of the rover's surroundings during the 1,861st to 1,863rd Martian days, or sols, of Spirit's surface mission (March 28 to 30, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the scene is toward the south-southwest. East is on the left. West-northwest is on the right. The rover had driven 22.7 meters (74 feet) southwestward on Sol 1861 before beginning to take the frames in this view. The drive brought Spirit past the northwestern corner of Home Plate. In this view, the western edge of Home Plate is on the portion of the horizon farthest to the left. A mound in middle distance near the center of the view is called 'Tsiolkovsky' and is about 40 meters (about 130 feet) from the rover's position. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  15. Dust Devil in Spirit's View Ahead on Sol 1854 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11960 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11960 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,854th Martian day, or sol, of Spirit's surface mission (March 21, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 13.79 meters (45 feet) westward earlier on Sol 1854. West is at the center, where a dust devil is visible in the distance. North is on the right, where Husband Hill dominates the horizon; Spirit was on top of Husband Hill in September and October 2005. South is on the left, where lighter-toned rock lines the edge of the low plateau called 'Home Plate.' This view is presented as a cylindrical-perspective projection with geometric seam correction.

  16. Multiple Moving Obstacles Avoidance of Service Robot using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Achmad Jazidie

    2011-12-01

    Full Text Available In this paper, we propose multiple moving obstacle avoidance using stereo vision for service robots in indoor environments. We assume that this model of service robot is used to deliver a cup to a recognized customer from a starting point to a destination. The contribution of this research is a new method for multiple moving obstacle avoidance with a Bayesian approach using a stereo camera. We have developed and introduced 3 main modules: to recognize faces, to identify multiple moving obstacles, and to maneuver the robot. A group of walking people is tracked as a multiple moving obstacle, and the speed, direction, and distance of the moving obstacles are estimated by a stereo camera so that the robot can maneuver to avoid collision. To overcome the inaccuracies of the vision sensor, the Bayesian approach is used to estimate the presence and direction of obstacles. We present the results of experiments with the service robot, called Srikandi III, which uses our proposed method, and we evaluate its performance. Experiments show that our proposed method works well, and that the Bayesian approach improves the estimation performance for the presence and direction of moving obstacles.
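A Bayesian treatment of noisy detections, as this record describes, can be sketched as a recursive Bayes update on the probability that an obstacle is present; the detection and false-alarm rates below are assumed, not taken from the paper.

```python
# Recursive Bayes update of the probability that an obstacle is present,
# given noisy binary detections from the stereo module (rates are assumed).
def bayes_update(prior, detected, p_det=0.9, p_false=0.2):
    """P(obstacle | observation) from P(obstacle) and one binary detection."""
    if detected:
        num = p_det * prior
        den = p_det * prior + p_false * (1.0 - prior)
    else:
        num = (1.0 - p_det) * prior
        den = (1.0 - p_det) * prior + (1.0 - p_false) * (1.0 - prior)
    return num / den

p = 0.5                                   # uninformative prior
for obs in [True, True, False, True]:     # hypothetical detection sequence
    p = bayes_update(p, obs)
print(round(p, 4))  # -> 0.9193
```

Even with one missed detection in the sequence, the posterior stays high, which is exactly the robustness to sensor inaccuracy the record motivates.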

  17. METHODS OF STEREO PAIR IMAGES FORMATION WITH A GIVEN PARALLAX VALUE

    Directory of Open Access Journals (Sweden)

    Viktoriya G. Chafonova

    2014-11-01

    Full Text Available Two new complementary methods of stereo pair image formation are proposed. The first method is based on finding the maximum correlation between the gradient images of the left and right frames. The second involves finding the shift between two corresponding key points of the stereo pair images found by a point-feature detector. These methods make it possible to set desired values of vertical and horizontal parallax for a selected object in the image. They also allow the parallax values of objects in the final stereo pair to be measured in pixels and/or as a percentage of the total image size, which makes it possible to predict excessive parallax values when the stereo pair is printed or projected. The proposed methods are easily automated once an object is selected for which a predetermined horizontal parallax value is to be set. Stereo pair image superposition using key points takes less than one second. The correlation-based method requires somewhat more computing time, but makes it possible to control and superpose an undivided anaglyph image. The proposed methods can find application in programs for editing and processing stereo pair images, in monitoring devices for shooting cameras, and in devices for video sequence quality assessment.
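The key-point variant reduces to measuring the horizontal parallax of matched points and shifting one frame to hit a target value. A minimal sketch, with invented point coordinates:

```python
import numpy as np

def parallax_shift(left_pts, right_pts, target_px=0.0):
    """Horizontal shift (in pixels) to apply to the right frame so that the
    selected object's mean horizontal parallax equals target_px."""
    parallax = left_pts[:, 0] - right_pts[:, 0]   # per-point horizontal parallax
    return target_px - float(np.mean(parallax))

# Matched key points on the selected object (coordinates are made up).
L = np.array([[120.0, 80.0], [130.0, 95.0], [141.0, 110.0]])
R = np.array([[112.0, 80.0], [121.0, 95.0], [134.0, 110.0]])
shift = parallax_shift(L, R, target_px=0.0)
print(shift)  # -> -8.0, i.e. shift the right frame 8 px left to zero the parallax
```

Expressing the residual parallax as a percentage of image width then gives the print/projection safety check the record describes.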

  18. Disparity Map Generation from Illumination Variant Stereo Images Using Efficient Hierarchical Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Viral H. Borisagar

    2014-01-01

    Full Text Available A novel hierarchical stereo matching algorithm is presented which produces a disparity map from an illumination-variant stereo pair. Illumination differences between two stereo images can lead to undesirable output. Stereo image pairs often experience illumination variations due to many factors, such as real-world shooting conditions, spatially and temporally separated camera positions, environmental illumination fluctuation, and changes in the strength or position of the light sources. Window matching and dynamic programming techniques are employed for disparity map estimation, and a good-quality disparity map is obtained along the optimized path. Homomorphic filtering is used as a preprocessing step to lessen the illumination variation between the stereo images. Anisotropic diffusion is used to refine the disparity map and give a high-quality disparity map as the final output. The robust performance of the proposed approach makes it suitable for real-life circumstances, where there will always be illumination variation between the images. The matching is carried out over a sequence of images representing the same scene, but at different resolutions; this hierarchical approach decreases the computation time of the stereo matching problem. The algorithm can be helpful in applications such as robot navigation, extraction of information from aerial surveys, 3D scene reconstruction, and military and security applications. The similarity measure SAD is often sensitive to illumination variation and produces unacceptable disparity map results for illumination-variant left and right images. Experimental results show that our proposed algorithm produces quality disparity maps for a wide range of both illumination-variant and illumination-invariant stereo image pairs.

  19. Towards A Real Time Implementation Of The Marr And Poggio Stereo Matcher

    Science.gov (United States)

    Nishihara, H. K.; Larson, N. G.

    1981-11-01

    This paper reports on research--primarily at Marr and Poggio's [9] mechanism level--to design a practical hardware stereo matcher, and on the interaction this study has had with our understanding of the problem at the computational theory and algorithm levels. The stereo-matching algorithm proposed by Marr and Poggio [10] and implemented by Grimson and Marr [3] is consistent with what is presently known about human stereo vision [2]. Their research has been concerned with understanding the principles underlying the stereo-matching problem. Our objective has been to produce a stereo matcher that operates reliably at near-real-time rates, as a tool to facilitate further research in vision and for possible application in robotics and stereo-photogrammetry. At present the design and construction of the camera and convolution modules of this project have been completed, and the design of the zero-crossing and matching modules is progressing. The remainder of this section provides a brief description of the Marr and Poggio stereo algorithm. We then discuss our general approach and some of the issues that have come up concerning the design of the individual modules.

  20. Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras

    DEFF Research Database (Denmark)

    Kristoffersen, Miklas Strøm; Dueholm, Jacob Velling; Gade, Rikke

    2016-01-01

    for pedestrian counting based on clustering and tracking of the 3D point clouds. The method is tested on two five-minute video sequences captured at a public event with a moderate density of pedestrians and heavy occlusions. The counting performance is compared to the manually annotated ground truth and shows...

  1. An adaptive exposure algorithm for stereo imaging and its performance in an orchard

    DEFF Research Database (Denmark)

    García, Francisco; Wulfsohn, Dvoralai; Andersen, Jens Christian

    2010-01-01

    Stereo vision is being introduced in perception systems for autonomous agricultural vehicles. When working outdoors, light conditions change continuously. The perception system should be able to continuously adapt and correct camera exposure parameters to obtain the best interpretation of the scene...... practically possible. We describe the development and testing of an algorithm to update the exposure parameter settings of a stereoscopic camera under dynamic light conditions. Static tests using a stereo camera were carried out in an orchard to determine how 2D image histograms and the 3D reconstruction...... change with exposure. An algorithm based on an "ideal mean pixel value" in the image was developed and implemented on the perception system of an automatic tractor. The system was tested in an orchard and found to perform satisfactorily under different orchard and light conditions....

  2. Stereo Information in Micromegas Detectors

    CERN Document Server

    The ATLAS collaboration

    2015-01-01

    The New Small Wheel layout of the ATLAS experiment foresees eight micromegas detection layers. Some of them will feature stereo strips designed to measure both the precision coordinate and the second coordinate. In this note we describe the principle of reconstructing a space point using stereo information obtained from two micromegas detector layers rotated by a known angle. Furthermore, an error analysis is carried out to correlate the precision and second-coordinate resolutions with the resolution of the corresponding rotated micromegas layers. We examine two different cases in order to find the optimum layout for the needs of the muon spectrometer.

  3. Efficient Optical Flow and Stereo Vision for Velocity Estimation and Obstacle Avoidance on an Autonomous Pocket Drone

    OpenAIRE

    McGuire, K.N.; de Croon, G.C.H.E.; de Wagter, C.; Tuyls, Karl; Kappen, Hilbert

    2017-01-01

    Miniature Micro Aerial Vehicles (MAV) are very suitable for flying in indoor environments, but autonomous navigation is challenging due to their strict hardware limitations. This paper presents a highly efficient computer vision algorithm called Edge-FS for the determination of velocity and depth. It runs at 20 Hz on a 4 g stereo camera with an embedded STM32F4 microprocessor (168 MHz, 192 kB) and uses feature histograms to calculate optical flow and stereo disparity. The stereo-based distanc...

  4. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    Directory of Open Access Journals (Sweden)

    Chien-Lun Hou

    2011-02-01

    Full Text Available In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
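    The final triangulation step can be sketched for the ideal rectified pinhole case (a simplified model with assumed focal length and baseline, not the calibrated parameters of the actual rig): once rectification reduces matching to a horizontal search, depth follows directly from the disparity.

```python
def triangulate(xl, yl, xr, f, baseline):
    """Recover a 3D point (in the left camera frame) from a rectified
    stereo correspondence, assuming an ideal pinhole pair:
    depth Z = f * B / d, with disparity d = xl - xr."""
    d = xl - xr
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    Z = f * baseline / d
    X = xl * Z / f
    Y = yl * Z / f
    return X, Y, Z

# f = 700 px, baseline = 100 mm, disparity = 35 px → depth 2000 mm.
print(triangulate(70, -35, 35, 700.0, 100.0))  # → (200.0, -100.0, 2000.0)
```

    Note that depth resolution degrades quadratically with distance: a one-pixel disparity error at large Z displaces the point by roughly Z²/(fB), which is why short-range measurement rigs like this one favor wide baselines.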

  5. Human Body Measurement by Robust Stereo-Matching

    Science.gov (United States)

    Kitamura, Kazuo; Kochi, Nobuo; Watanabe, Hiroto; Yamada, Mitsuharu; Kaneko, Shun'ichi

    We have developed a digital photogrammetry system which enables all-around 3D measurement using object pictures obtained through the integrated operation of digital cameras, projectors and our specific PC software. The system consists of units, each of which in turn consists of two synchronized digital cameras and projectors with a PC program. The measuring process is simplified by making all the operations of orientation and measurement analysis automatic, with a single push of a button. Automation of orientation was realized by creating a calibration box covered with color-coded targets, which enables the exterior orientation parameters of the cameras to be determined automatically. For stereo-matching, in order to minimize mismatching, we devised a coarse-to-fine strategy integrating OCM (Orientation Code Matching), a robust matching method, with LSM (Least Squares Matching). By improving the accuracy of the initial values for OCM, using LSM for fine measurement, and automating the involved operations, we enhanced the quality of surface measurement and considerably shortened the measuring time. In this paper we report on our system for human body measurement and its stereo-matching method, as well as on a comparative assessment of each matching method.

  6. Railway clearance intrusion detection method with binocular stereo vision

    Science.gov (United States)

    Zhou, Xingfang; Guo, Baoqing; Wei, Wei

    2018-03-01

    In the stages of railway construction and operation, objects intruding into the railway clearance greatly threaten the safety of railway operation, so real-time intrusion detection is of great importance. To overcome the depth insensitivity and shadow interference of single-image methods, an intrusion detection method based on binocular stereo vision is proposed that reconstructs the 3D scene to locate objects and judge clearance intrusion. The binocular cameras are calibrated with Zhang Zhengyou's method. In order to improve the 3D reconstruction speed, a suspicious region is first determined by the background difference method on a single camera's image sequence. The image rectification, stereo matching and 3D reconstruction steps are executed only when there is a suspicious region. A transformation matrix from the Camera Coordinate System (CCS) to the Track Coordinate System (TCS) is computed using the gauge constant and used to transfer the 3D point clouds into the TCS; the 3D point clouds are then used to calculate the object position and intrusion in the TCS. Experiments in a railway scene show that the position precision is better than 10 mm. The method is effective for clearance intrusion detection and satisfies the requirements of railway applications.
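    The CCS-to-TCS transfer described above amounts to applying a rigid-body homogeneous transform to each reconstructed 3D point. A minimal sketch with an assumed rotation and translation (not the gauge-derived matrix of the paper):

```python
def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R and a
    translation t (here: camera coordinates -> track coordinates)."""
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0, 0, 0, 1]]

def apply_transform(T, p):
    """Map a 3D point p through the homogeneous transform T."""
    x, y, z = p
    return tuple(T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3]
                 for i in range(3))

# Toy example: the camera frame is rotated 90 degrees about Z relative
# to the track frame and offset 2 m along the track axis.
R = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
T = make_transform(R, [2.0, 0.0, 0.0])
print(apply_transform(T, (1.0, 0.0, 0.5)))  # → (2.0, 1.0, 0.5)
```

    Once all points are expressed in the TCS, the clearance test reduces to comparing each point's lateral and vertical coordinates against the fixed clearance gauge profile.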

  7. Hearing damage by personal stereo

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez, Rodrigo Pizarro; Reuter, Karen

    2006-01-01

    The technological development within personal stereo systems, such as MP3 players, iPods etc., has changed music listening habits from home entertainment to everyday and everywhere use. The technology has developed considerably since the introduction of CD walkmen, and high-level low-distortion music is now produced by minimal devices. In this paper, the existing literature on the effects of personal stereo systems is reviewed, including studies of exposure levels and effects on hearing. Generally, it is found that the levels being used are of concern, which in one study [Acustica / Acta Acustica, 82 (1996) 885-894] is demonstrated to relate to the specific use in situations with high levels of background noise. Another study [Med. J. Austr., 1998; 169: 588-592] demonstrates that the effect of personal stereo use is comparable to that of being exposed to noise in industry. The results are discussed in view...

  8. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to further elucidate plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the perspective of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is redescribed on the basis of physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the differences between imaging analyses based on geometric optics and on physical optics are also shown in the simulations. (paper)

  9. An optical, electrical and ultrasonic layered single sensor for ingredient measurement in liquid

    International Nuclear Information System (INIS)

    Kimoto, A; Kitajima, T

    2010-01-01

    In this paper, an optical, electrical and ultrasonic layered single sensor is proposed as a new, non-invasive sensing method for the measurement of ingredients in liquid, particularly in the food industry. In the proposed sensor, the photo sensors and the PVDF films with transparent conductive electrodes are layered, and the optical properties of the liquid are measured by a light emitting diode (LED) and a phototransistor (PT). In addition, the electrical properties are measured by indium tin oxide (ITO) film electrodes serving as the transparent conductive electrodes of the PVDF films arranged on the surfaces of the LED and PT. Moreover, the ultrasonic properties are measured by the PVDF films. Thus, the optical, electrical and ultrasonic properties in the same space of the liquid can be measured simultaneously by a single sensor. To test the sensor experimentally, three parameters of the liquid, namely the concentrations of yellow color, sodium chloride (NaCl) and ethanol in distilled water, were estimated using the optical, electrical and ultrasonic measurements obtained with the proposed sensor. The results suggest that it is possible to estimate the three ingredient concentrations in the same space of the liquid from the properties measured by the proposed single sensor, although some problems, such as measurement accuracy, remain to be solved.

  10. Multi-Level Wavelet Shannon Entropy-Based Method for Single-Sensor Fault Location

    Directory of Open Access Journals (Sweden)

    Qiaoning Yang

    2015-10-01

    Full Text Available In actual applications, sensors are prone to failure because of harsh environments, battery drain, and sensor aging. Sensor fault location is an important step for follow-up sensor fault detection. In this paper, two new multi-level wavelet Shannon entropies (multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy) are defined. They take full advantage of the sensor fault frequency distribution and energy distribution across multiple sub-bands in the wavelet domain. Based on the multi-level wavelet Shannon entropy, a method is proposed for single-sensor fault location. The method first uses a criterion of maximum energy-to-Shannon-entropy ratio to select the appropriate wavelet base for signal analysis. Multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy are then used to locate the fault. The method is validated using practical chemical gas concentration data from a gas sensor array. Compared with wavelet time Shannon entropy and wavelet energy Shannon entropy, the experimental results demonstrate that the proposed method achieves accurate location of a single sensor fault and has good anti-noise ability. The proposed method is feasible and effective for single-sensor fault location.
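    The idea of measuring how energy spreads across wavelet sub-bands can be illustrated with a plain Haar decomposition. This is a simplified stand-in for the paper's wavelet-base selection and its multi-level entropy definitions:

```python
import math

def haar_level(signal):
    """One level of a Haar wavelet transform: (approximation, detail)."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return approx, detail

def wavelet_shannon_entropy(signal, levels):
    """Shannon entropy of the energy distribution over the detail
    coefficients, one value per decomposition level."""
    entropies = []
    current = signal
    for _ in range(levels):
        current, detail = haar_level(current)
        energy = [d * d for d in detail]
        total = sum(energy) or 1.0
        p = [e / total for e in energy]
        entropies.append(-sum(pi * math.log(pi) for pi in p if pi > 0))
    return entropies

# A localized transient concentrates detail energy in few coefficients
# (low entropy); a persistent noisy fault spreads it (high entropy).
clean = [0, 0, 0, 0, 8, 0, 0, 0]
noisy = [1, -1, 2, -2, 1, -1, 2, -2]
e_clean = wavelet_shannon_entropy(clean, 1)[0]
e_noisy = wavelet_shannon_entropy(noisy, 1)[0]
print(e_clean < e_noisy)  # → True
```

    Comparing such entropy profiles across the sensors of an array is the intuition behind using entropy as a fault-location statistic: the faulty channel's sub-band energy distribution differs measurably from its healthy neighbors'.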

  11. A single sensor and single actuator approach to performance tailoring over a prescribed frequency band.

    Science.gov (United States)

    Wang, Jiqiang

    2016-03-01

    Restricted sensing and actuation control represents an important area of research that has been overlooked in most design methodologies. In many practical control engineering problems, the design must be implemented through a single sensor and a single actuator for multivariate performance variables. In this paper, a novel approach is proposed for the single sensor and single actuator control problem in which performance over any prescribed frequency band can also be tailored. The results are obtained for broad-band control design based on the formulation for discrete frequency control. It is shown that the single sensor and single actuator control problem over a frequency band can be cast as a Nevanlinna-Pick interpolation problem. An optimal controller can then be obtained via convex optimization over LMIs. Remarkably, robustness issues can also be tackled in this framework. A numerical example of broad-band attenuation of rotor blade vibration illustrates the proposed design procedures. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Application of stereo photogrammetric techniques for measuring African Elephants

    Directory of Open Access Journals (Sweden)

    A. J Hall-Martin

    1979-12-01

    Full Text Available Measurements of shoulder height and back length of African elephants were obtained by means of stereo photogrammetric techniques. A pair of Zeiss UMK 10/1318 cameras, mounted on a steel frame on the back of a vehicle, were used to photograph the elephants in the Addo Elephant National Park, Republic of South Africa. Several modifications of normal photogrammetric procedure applicable to the field situation (e.g. control points) and to the computation of results (e.g. relative orientation) are briefly mentioned. Six elephants were immobilised after being photographed, and the measurements obtained from them agreed within a range of 1 cm-10 cm with the photogrammetric measurements.

  13. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  14. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce superior images to those of conventional cameras. The optimal thicknesses and positions of the collimators are derived mathematically. (U.K.)

  15. Cooperative and asynchronous stereo vision for dynamic vision sensors

    Science.gov (United States)

    Piatkowska, E.; Belbachir, A. N.; Gelautz, M.

    2014-05-01

    Dynamic vision sensors (DVSs) encode visual input as a stream of events generated upon relative light intensity changes in the scene. These sensors have the advantage of allowing simultaneously high temporal resolution (better than 10 µs) and wide dynamic range (>120 dB) at sparse data representation, which is not possible with clocked vision sensors. In this paper, we focus on the task of stereo reconstruction. The spatiotemporal and asynchronous aspects of data provided by the sensor impose a different stereo reconstruction approach from the one applied for synchronous frame-based cameras. We propose to model the event-driven stereo matching by a cooperative network (Marr and Poggio 1976 Science 194 283-7). The history of the recent activity in the scene is stored in the network, which serves as spatiotemporal context used in disparity calculation for each incoming event. The network constantly evolves in time, as events are generated. In our work, not only the spatiotemporal aspect of the data is preserved but also the matching is performed asynchronously. The results of the experiments prove that the proposed approach is well adapted for DVS data and can be successfully used for disparity calculation.

  16. System Design, Calibration and Performance Analysis of a Novel 360° Stereo Panoramic Mobile Mapping System

    Science.gov (United States)

    Blaser, S.; Nebiker, S.; Cavegn, S.

    2017-05-01

    Image-based mobile mapping systems enable the efficient acquisition of georeferenced image sequences, which can later be exploited in cloud-based 3D geoinformation services. In order to provide 360° coverage with accurate 3D measuring capabilities, we present a novel 360° stereo panoramic camera configuration. By using two 360° panorama cameras tilted forward and backward in combination with conventional forward- and backward-looking stereo camera systems, we achieve full 360° multi-stereo coverage. We furthermore developed a fully operational new mobile mapping system based on our proposed approach, which fulfils our high accuracy requirements. We successfully implemented a rigorous sensor and system calibration procedure, which allows calibrating all stereo systems with a superior accuracy compared to that of previous work. Our study delivered absolute 3D point accuracies in the range of 4 to 6 cm and relative accuracies of 3D distances in the range of 1 to 3 cm. These results were achieved in a challenging urban area. Furthermore, we automatically reconstructed a 3D city model of our study area by employing all captured and georeferenced mobile mapping imagery. The result is a highly detailed and almost complete 3D city model of the street environment.

  17. On-line measurement of ski-jumper trajectory: combining stereo vision and shape description

    Science.gov (United States)

    Nunner, T.; Sidla, O.; Paar, G.; Nauschnegg, B.

    2010-01-01

    Ski jumping has continuously attracted major public interest since the early 70s of the last century, mainly in Europe and Japan. The sport undergoes high-level analysis and development based, among other things, on biodynamic measurements during the take-off and flight phase of the jumper. We report on a vision-based solution for such measurements that provides a full 3D trajectory of unique points on the jumper's shape. During the jump, synchronized stereo images are taken by a calibrated camera system at video rate. Using methods stemming from video surveillance, the jumper is detected and localized in the individual stereo images, and learning-based deformable shape analysis identifies the jumper's silhouette. The 3D reconstruction of the trajectory relies on standard stereo forward intersection of distinct shape points, such as the helmet top or heel. In the reported study, the measurements were verified by an independent GPS device mounted on top of the jumper's helmet, synchronized to the timing of camera exposures. Preliminary estimations report an accuracy of ±20 cm at a 30 Hz imaging frequency over a 40 m trajectory. The system is ready for fully automatic on-line application on ski-jumping sites that allow stereo camera views with an approximate base-distance ratio of 1:3 within the entire area of investigation.

  18. BIFOCAL STEREO FOR MULTIPATH PERSON RE-IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    G. Blott

    2017-11-01

    Full Text Available This work presents an approach to the task of person re-identification that exploits bifocal stereo cameras. Present monocular person re-identification approaches show a decreasing working distance when the image resolution is increased to obtain higher re-identification performance. We propose a novel 3D multipath bifocal approach, containing a rectilinear lens with a larger focal length for long-range distances and a fish-eye lens with a smaller focal length for the near range. The person re-identification performance is at least on par with 2D re-identification approaches, but the working distance of the approach is increased, and on average 10% higher re-identification performance is achieved in the overlapping field of view compared to a single camera. In addition, the 3D information from the overlapping field of view is exploited to resolve potential 2D ambiguities.

  19. Visual tracking in stereo. [by computer vision system

    Science.gov (United States)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.
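    The update loop described above can be sketched in a toy planar case: a two-camera "image" measurement, an analytic 2x2 Jacobian, and repeated error-driven corrections of the internal model. The geometry and parameters here are hypothetical, not those of the original hardware system:

```python
def project(X, Z, f=1.0, b=0.4):
    """Predicted image coordinates of the point in two stereo cameras
    separated by baseline b (simple pinhole model)."""
    return (f * (X - b / 2) / Z, f * (X + b / 2) / Z)

def update(state, observed, f=1.0, b=0.4):
    """One tracker iteration: form the 2D prediction error and map it
    back to a state correction through the inverse Jacobian."""
    X, Z = state
    ul, ur = project(X, Z, f, b)
    eul, eur = observed[0] - ul, observed[1] - ur
    # Jacobian of (ul, ur) with respect to the state (X, Z).
    J = [[f / Z, -f * (X - b / 2) / Z**2],
         [f / Z, -f * (X + b / 2) / Z**2]]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    dX = ( J[1][1] * eul - J[0][1] * eur) / det
    dZ = (-J[1][0] * eul + J[0][0] * eur) / det
    return (X + dX, Z + dZ)

# True object at X=0.1, Z=2.0; start the internal model elsewhere and
# iterate until predicted image features match the observed ones.
observed = project(0.1, 2.0)
state = (0.0, 1.5)
for _ in range(10):
    state = update(state, observed)
print(round(state[0], 6), round(state[1], 6))  # → 0.1 2.0
```

    In the full 3D system the state also carries orientation and velocity and the Jacobian is non-square, so the paper uses a generalized (pseudo-) inverse rather than the exact 2x2 inverse shown here.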

  20. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other, through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head

  1. New Record Five-Wheel Drive, Spirit's Sol 1856 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11962 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11962 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,856th Martian day, or sol, of Spirit's surface mission (March 23, 2009). The center of the view is toward the west-southwest. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 25.82 meters (84.7 feet) west-northwestward earlier on Sol 1856. This is the longest drive on Mars so far by a rover using only five wheels. Spirit lost the use of its right-front wheel in March 2006. Before Sol 1856, the farthest Spirit had covered in a single sol's five-wheel drive was 24.83 meters (81.5 feet), on Sol 1363 (Nov. 3, 2007). The Sol 1856 drive made progress on a route planned for taking Spirit around the western side of the low plateau called 'Home Plate.' A portion of the northwestern edge of Home Plate is prominent in the left quarter of this image, toward the south. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  2. Spirit's View Beside 'Home Plate' on Sol 1823 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11971 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11971 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,823rd Martian day, or sol, of Spirit's surface mission (Feb. 17, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the view is toward the south-southwest. The rover had driven 7 meters (23 feet) eastward earlier on Sol 1823, part of maneuvering to get Spirit into a favorable position for climbing onto the low plateau called 'Home Plate.' However, after two driving attempts with negligible progress during the following three sols, the rover team changed its strategy for getting to destinations south of Home Plate. The team decided to drive Spirit at least partway around Home Plate, instead of ascending the northern edge and taking a shorter route across the top of the plateau. Layered rocks forming part of the northern edge of Home Plate can be seen near the center of the image. Rover wheel tracks are visible at the lower edge. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  3. Opportunity's View After Drive on Sol 1806 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11816 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11816 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 60.86 meters (200 feet) on the 1,806th Martian day, or sol, of Opportunity's surface mission (Feb. 21, 2009). North is at the center; south at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). Engineers designed the Sol 1806 drive to be driven backwards as a strategy to redistribute lubricant in the rover's wheels. The right-front wheel had been showing signs of increased friction. The rover's position after the Sol 1806 drive was about 2 kilometers (1.2 miles) south-southwest of Victoria Crater. Cumulative odometry was 14.74 kilometers (9.16 miles) since landing in January 2004, including 2.96 kilometers (1.84 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008). This view is presented as a cylindrical-perspective projection with geometric seam correction.

  4. Opportunity's View After Long Drive on Sol 1770 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11791 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11791 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 104 meters (341 feet) on the 1,770th Martian day, or sol, of Opportunity's surface mission (January 15, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). Prior to the Sol 1770 drive, Opportunity had driven less than a meter since Sol 1713 (November 17, 2008), while it used the tools on its robotic arm first to examine a meteorite called 'Santorini' during weeks of restricted communication while the sun was nearly in line between Mars and Earth, then to examine bedrock and soil targets near Santorini. The rover's position after the Sol 1770 drive was about 1.1 kilometer (two-thirds of a mile) south southwest of Victoria Crater. Cumulative odometry was 13.72 kilometers (8.53 miles) since landing in January 2004, including 1.94 kilometers (1.21 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008). This view is presented as a cylindrical-perspective projection with geometric seam correction.

  5. Development of a hybrid earthquake early warning system based on single sensor technique

    International Nuclear Information System (INIS)

    Gravirov, V.V.; Kislov, K.V.

    2012-01-01

    There are two approaches to earthquake early warning: one based on a network of seismic stations, and the single-sensor method. Both have advantages and drawbacks. Current systems rely on high-density seismic networks, while attempts at implementing techniques based on the single-station principle encounter difficulties in identifying earthquakes in noise, which may be very diverse, from stationary to impulsive. A promising line of research is to develop hybrid warning systems in which single sensors are incorporated into the overall early warning network. This will exploit the advantages of both approaches and will help reduce the radius of the hazardous zone for which no earthquake warning can be produced. The main problems are highlighted and their solutions discussed. The system is implemented with three detection processes running in parallel. The first is based on the study of the co-occurrence matrix of the signal's wavelet transform. The second uses change-point detection in a random process together with signal detection in a moving time window. The third uses artificial neural networks. Finally, a decision rule is applied to carry out the final earthquake detection and estimate its reliability. (author)
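    The change-point component mentioned above can be illustrated with a classic one-sided CUSUM detector. This is a generic sketch with made-up parameters, not the authors' detector:

```python
def cusum_detect(samples, mean, threshold, drift):
    """One-sided CUSUM change-point detector: accumulate positive
    deviations from the expected mean and flag the first sample where
    the statistic exceeds the threshold (a sudden amplitude increase).
    The drift term bleeds off small fluctuations so that background
    noise does not accumulate into a false alarm."""
    g = 0.0
    for i, x in enumerate(samples):
        g = max(0.0, g + (x - mean) - drift)
        if g > threshold:
            return i
    return None

# Quiet background noise followed by a sudden amplitude jump at index 6.
signal = [0.1, -0.2, 0.0, 0.2, -0.1, 0.1, 2.0, 2.2, 1.9, 2.1]
print(cusum_detect(signal, mean=0.0, threshold=3.0, drift=0.5))  # → 7
```

    The threshold trades detection latency against false-alarm rate, which is the same trade-off the hybrid system faces when deciding how much confidence a single sensor needs before contributing a warning to the network.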

  6. Creating a distortion characterisation dataset for visual band cameras using fiducial markers

    CSIR Research Space (South Africa)

    Jermy, R

    2015-11-01

    Full Text Available This will allow other researchers to perform the same steps and create better algorithms to accurately locate fiducial markers and calibrate cameras. A second dataset that can be used to assess the accuracy of the stereo vision of two calibrated cameras is also...

  7. Towards a miniaturized photon counting laser altimeter and stereoscopic camera instrument suite for microsatellites

    NARCIS (Netherlands)

    Moon, S.G.; Hannemann, S.; Collon, M.; Wielinga, K.; Kroesbergen, E.; Harris, J.; Gill, E.K.A.; Maessen, D.C.

    2009-01-01

    In the following we review the optimization for microsatellite deployment of a highly integrated payload suite comprising a high resolution camera, an additional camera for stereoscopic imaging, and a single photon counting laser altimeter. This payload suite, the `Stereo Imaging Laser Altimeter'

  8. WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves

    Science.gov (United States)

    Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise

    2017-10-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, many details are difficult for a practitioner to master, so that implementing a sea-waves 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an open-source stereo processing pipeline for sea-wave 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques, both on the disparity map and on the produced point cloud, to remove the vast majority of erroneous points that naturally arise while analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step by step and demonstrated on real datasets acquired at sea.
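
    WASS delegates dense matching to OpenCV, but the core idea of disparity estimation on rectified images can be illustrated with a tiny NumPy block matcher using the sum of absolute differences (SAD). This sketch is far simpler than the pipeline's actual procedure, which adds smoothness constraints and subpixel refinement:

```python
import numpy as np

def sad_disparity(left, right, block=5, max_disp=16):
    """Brute-force SAD block matching on rectified grayscale images.

    For each pixel, slide a block along the same row of the right image
    and keep the shift with the lowest sum of absolute differences.
    Illustrative only -- real pipelines (such as OpenCV's semi-global
    matching used by WASS) are far more robust.
    """
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic check: the right image is the left image shifted by 4 pixels.
rng = np.random.default_rng(1)
left = rng.random((40, 60))
right = np.roll(left, -4, axis=1)   # scene shifted left by 4 px
d = sad_disparity(left, right, max_disp=8)
print(int(np.median(d[10:30, 20:50])))  # prints 4
```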

  9. WASS: an open-source stereo processing pipeline for sea waves 3D reconstruction

    Science.gov (United States)

    Bergamasco, Filippo; Benetazzo, Alvise; Torsello, Andrea; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro

    2017-04-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community. In fact, recent advances in both computer vision algorithms and CPU processing power now allow the study of spatio-temporal wave fields with unprecedented accuracy, especially at small scales. Even if simple in theory, many details are difficult for a practitioner to master, so that implementing a 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the steps from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS, a completely open-source stereo processing pipeline for sea-wave 3D reconstruction, available at http://www.dais.unive.it/wass/. Our tool completely automates the recovery of dense point clouds from stereo images by providing three main functionalities. First, WASS can automatically recover the extrinsic parameters of the stereo rig (up to scale), so that no delicate calibration has to be performed in the field. Second, WASS implements a fast 3D dense stereo reconstruction procedure so that an accurate 3D point cloud can be computed from each stereo pair. We rely on the well-consolidated OpenCV library both for image stereo rectification and for disparity map recovery. Lastly, a set of 2D and 3D filtering techniques, on both the disparity map and the produced point cloud, is implemented to remove the vast majority of erroneous points that naturally arise while analyzing the optically complex nature of the water surface (examples are sun glares, large white-capped areas, fog and water aerosol, etc.). 
Developed to be as fast as possible, WASS

  10. Stereo Correspondence Using Moment Invariants

    Science.gov (United States)

    Premaratne, Prashan; Safaei, Farzad

    Autonomous navigation is seen as a vital tool in harnessing the enormous potential of Unmanned Aerial Vehicles (UAVs) and small robotic vehicles for both military and civilian use. Even though laser-based scanning solutions for Simultaneous Localization And Mapping (SLAM) are considered the most reliable for depth estimation, they are not feasible for use in UAVs and small land-based vehicles due to their physical size and weight. Stereovision is considered the best approach for any autonomous navigation solution, as stereo rigs are lightweight and inexpensive. However, stereoscopy, which estimates depth information through pairs of stereo images, can still be computationally expensive and unreliable. This is mainly because some of the algorithms used in successful stereovision solutions have computational requirements that cannot be met by small robotic vehicles. In our research, we implement a feature-based stereovision solution using moment invariants as a metric to find corresponding regions in image pairs, which reduces computational complexity and improves the accuracy of the disparity measures that are significant for use in UAVs and in small robotic vehicles.
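
    The matching metric named in the abstract can be sketched as follows: normalized central moments yield translation- and scale-invariant descriptors (the first two Hu invariants here), so a region and its shifted counterpart in the other image score as identical. This NumPy sketch is illustrative, not the authors' feature set:

```python
import numpy as np

def hu_first_invariants(patch):
    """Return the first two Hu moment invariants of an image patch.

    Computed from normalized central moments, so they are invariant to
    translation and scale -- the property exploited to match
    corresponding regions across a stereo pair. Minimal sketch only.
    """
    patch = np.asarray(patch, dtype=float)
    ys, xs = np.mgrid[:patch.shape[0], :patch.shape[1]]
    m00 = patch.sum()
    xbar, ybar = (xs * patch).sum() / m00, (ys * patch).sum() / m00

    def eta(p, q):  # normalized central moment of order (p, q)
        mu = ((xs - xbar) ** p * (ys - ybar) ** q * patch).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([phi1, phi2])

# A patch and a translated copy of it yield identical invariants.
rng = np.random.default_rng(2)
a = rng.random((16, 16))
b = np.zeros((24, 24))
b[5:21, 3:19] = a                    # same patch, translated
print(np.allclose(hu_first_invariants(a), hu_first_invariants(b)))  # True
```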

  11. Improved Stereo Matching With Boosting Method

    Directory of Open Access Journals (Sweden)

    Shiny B

    2015-06-01

    Full Text Available This paper presents an approach based on classification for improving the accuracy of stereo matching methods. We propose this method for occlusion handling. This work employs classification of pixels to find erroneous disparity values. Owing to the wide application of disparity maps in 3D television, medical imaging, etc., the accuracy of the disparity map is highly significant. An initial disparity map is obtained using local or global stereo matching methods from the input stereo image pair. The various features for classification are computed from the input stereo image pair and the obtained disparity map. The computed feature vector is then used to classify pixels, with GentleBoost as the classification method. The erroneous disparity values found by classification are corrected through a completion or filling stage. A performance evaluation of stereo matching using AdaBoostM1, RUSBoost, neural networks and GentleBoost is performed.

  12. A multi-modal stereo microscope based on a spatial light modulator.

    Science.gov (United States)

    Lee, M P; Gibson, G M; Bowman, R; Bernet, S; Ritsch-Marte, M; Phillips, D B; Padgett, M J

    2013-07-15

    Spatial Light Modulators (SLMs) can emulate classic microscopy techniques, including differential interference contrast (DIC) and (spiral) phase contrast. Their programmability adds the benefit of flexibility, as well as the option to multiplex images for single-shot quantitative imaging or for simultaneous multi-plane imaging (depth-of-field multiplexing). We report the development of a microscope sharing many of the previously demonstrated capabilities, within a holographic implementation of a stereo microscope. Furthermore, we use the SLM to combine stereo microscopy with a refocusing filter and with a darkfield filter. The instrument is built around a custom inverted microscope and equipped with an SLM which produces the various imaging modes laterally displaced on the same camera chip. In addition, a wide-angle camera allows visualisation of a larger region of the sample.

  13. Imaging Techniques for Dense 3D reconstruction of Swimming Aquatic Life using Multi-view Stereo

    Science.gov (United States)

    Daily, David; Kiser, Jillian; McQueen, Sarah

    2016-11-01

    Understanding the movement characteristics of how various species of fish swim is an important step to uncovering how they propel themselves through the water. Previous methods have focused on profile capture methods or sparse 3D manual feature point tracking. This research uses an array of 30 cameras to automatically track hundreds of points on a fish as they swim in 3D using multi-view stereo. Blacktip sharks, sting rays, puffer fish, turtles and more were imaged in collaboration with the National Aquarium in Baltimore, Maryland using the multi-view stereo technique. The processes for data collection, camera synchronization, feature point extraction, 3D reconstruction, 3D alignment, biological considerations, and lessons learned will be presented. Preliminary results of the 3D reconstructions will be shown and future research into mathematically characterizing various bio-locomotive maneuvers will be discussed.

  14. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

    DEFF Research Database (Denmark)

    Kjær-Nielsen, Anders; Jensen, Lars Baunegaard With; Sørensen, Anders Stengaard

    2008-01-01

    In this paper a low level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system, that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400Mbit). The system is used...... extraction, and undistortion and rectification. The latency of the system when running at 2x15fps is 30ms....

  15. Optimization of Single-Sensor Two-State Hot-Wire Anemometer Transmission Bandwidth.

    Science.gov (United States)

    Ligęza, Paweł

    2008-10-28

    Hot-wire anemometric measurements of non-isothermal flows require the use of thermal compensation or correction circuitry. One possible solution is a two-state hot-wire anemometer that cyclically changes the heating level of a single sensor. The area in which flow velocity and fluid temperature can be measured is limited by the dimensions of the sensor's active element. The system is designed to measure flows characterized by high velocity and temperature gradients, although its transmission bandwidth is very limited. In this study, we propose a method to optimize the two-state hot-wire anemometer's transmission bandwidth. The method is based on the use of a specialized constant-temperature system with variable dynamic parameters, together with a suitable measurement cycle paradigm. Analysis of the method was undertaken using model testing. Our results reveal a possible significant broadening of the two-state hot-wire anemometer's transmission bandwidth.

  16. Gamma camera

    International Nuclear Information System (INIS)

    Reiss, K.H.; Kotschak, O.; Conrad, B.

    1976-01-01

    A gamma camera with a setup simplified compared with the state of the art is described, permitting not only good localization but also energy discrimination. Behind the usual vacuum image amplifier, a multiwire proportional chamber filled with bromotrifluoromethane is connected in series. Localization of the signals is achieved by a delay line, and energy determination by means of a pulse-height discriminator. With the aid of drawings and circuit diagrams, the setup and mode of operation are explained. (ORU) [de

  17. PHOTOMETRIC STEREO SHAPE-AND-ALBEDO-FROM-SHADING FOR PIXEL-LEVEL RESOLUTION LUNAR SURFACE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    W. C. Liu

    2017-07-01

    Full Text Available Shape and Albedo from Shading (SAfS) techniques recover pixel-wise surface details based on the relationship between terrain slopes, illumination and imaging geometry, and the energy response (i.e., image intensity) captured by the sensing system. Multiple images with different illumination geometries (i.e., photometric stereo) can provide better SAfS surface reconstruction due to the increase in observations. Photometric stereo SAfS is suitable for detailed surface reconstruction of the Moon and other extra-terrestrial bodies due to the availability of photometric stereo and the less complex surface reflecting properties (i.e., albedo) of the target bodies as compared to the Earth. Considering only one photometric stereo pair (i.e., two images), pixel-variant albedo is still a major obstacle to satisfactory reconstruction and it needs to be regulated by the SAfS algorithm. The illumination directional difference between the two images also becomes an important factor affecting the reconstruction quality. This paper presents a photometric stereo SAfS algorithm for pixel-level resolution lunar surface reconstruction. The algorithm includes a hierarchical optimization architecture for handling pixel-variant albedo and improving performance. With the use of Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) photometric stereo images, the reconstructed topography (i.e., the DEM) is compared with the DEM produced independently by photogrammetric methods. This paper also addresses the effect of illumination directional difference in between one photometric stereo pair on the reconstruction quality of the proposed algorithm by both mathematical and experimental analysis. In this case, LROC NAC images under multiple illumination directions are utilized by the proposed algorithm for experimental comparison. The mathematical derivation suggests an illumination azimuthal difference of 90 degrees between two images is recommended to achieve

  18. Photometric Stereo Shape-and-Albedo-from-Shading for Pixel-Level Resolution Lunar Surface Reconstruction

    Science.gov (United States)

    Liu, W. C.; Wu, B.

    2017-07-01

    Shape and Albedo from Shading (SAfS) techniques recover pixel-wise surface details based on the relationship between terrain slopes, illumination and imaging geometry, and the energy response (i.e., image intensity) captured by the sensing system. Multiple images with different illumination geometries (i.e., photometric stereo) can provide better SAfS surface reconstruction due to the increase in observations. Photometric stereo SAfS is suitable for detailed surface reconstruction of the Moon and other extra-terrestrial bodies due to the availability of photometric stereo and the less complex surface reflecting properties (i.e., albedo) of the target bodies as compared to the Earth. Considering only one photometric stereo pair (i.e., two images), pixel-variant albedo is still a major obstacle to satisfactory reconstruction and it needs to be regulated by the SAfS algorithm. The illumination directional difference between the two images also becomes an important factor affecting the reconstruction quality. This paper presents a photometric stereo SAfS algorithm for pixel-level resolution lunar surface reconstruction. The algorithm includes a hierarchical optimization architecture for handling pixel-variant albedo and improving performance. With the use of Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) photometric stereo images, the reconstructed topography (i.e., the DEM) is compared with the DEM produced independently by photogrammetric methods. This paper also addresses the effect of illumination directional difference in between one photometric stereo pair on the reconstruction quality of the proposed algorithm by both mathematical and experimental analysis. In this case, LROC NAC images under multiple illumination directions are utilized by the proposed algorithm for experimental comparison. 
The mathematical derivation suggests an illumination azimuthal difference of 90 degrees between two images is recommended to achieve minimal error in
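
    The photometric stereo principle underlying SAfS can be sketched for the simplest case: a Lambertian surface under three or more known lights, solved per pixel by least squares. The paper's two-image, albedo-regularized algorithm is more involved; this NumPy sketch only shows the basic inversion:

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Recover per-pixel albedo and unit surface normals, assuming a
    Lambertian surface: I_k = albedo * dot(L_k, n).

    intensities: (k, h, w) stack of images under k known lights
    light_dirs:  (k, 3) unit illumination vectors
    Needs k >= 3; the paper works with only two images plus
    regularization, which this plain least-squares sketch omits.
    """
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)                      # (k, h*w)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # G = albedo * n
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    return albedo.reshape(h, w), normals.reshape(3, h, w)

# Synthetic check: a flat surface tilted toward +x with albedo 0.7.
n_true = np.array([0.6, 0.0, 0.8])
L = np.array([[0.0, 0.0, 1.0], [0.8, 0.0, 0.6], [0.0, 0.8, 0.6]])
imgs = 0.7 * (L @ n_true).reshape(3, 1, 1) * np.ones((3, 4, 4))
alb, nrm = photometric_stereo(imgs, L)
print(round(float(alb[0, 0]), 3))  # 0.7
```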

  19. Obstacles facing the venus radar mapper - The implications of gestalt formation in stereo-radargrammetry

    Science.gov (United States)

    Wildey, R.L.

    1986-01-01

    The question of adapting to radar images the existing hardware that forms topographic maps through stereo-photogrammetric models is examined in principle. Such hardware utilizes a human/computer hybrid. Although the problem of brightness differentials between corresponding landmarks can be dealt with pseudo-photoclinometrically, the main problem is whether the perspective in a radar image can be conceived to mimic that of a photographic image obtained by a suitably positioned camera. This conception is found to be possible, provided the characteristic relief subtends a very small angle at the radar and at the fictitious camera. The photogrammetric model parameters must be determined a priori. © 1986 D. Reidel Publishing Company.

  20. Stereo Visualization and Map Comprehension

    Science.gov (United States)

    Rapp, D. N.; Culpepper, S.; Kirkby, K.; Morin, P.

    2004-12-01

    In this experiment, we assessed the use of stereo visualizations as effective tools for topographic map learning. In most Earth Science courses, students spend extended time learning how to read topographic maps, relying on the lines of the map as indicators of height and accompanying distance. These maps often require extended training for students to understand what they represent, how they are to be used, and how to apply them to solve problems. In fact, instructors often comment that students fail to use such maps adequately, instead relying on prior spatial knowledge or experiences that may be inappropriate for understanding topographic displays. We asked participants to study maps that provided 3-dimensional or 2-dimensional views, and then answer a battery of questions about features and processes associated with the maps. The results will be described with respect to the cognitive utility of visualizations as tools for map comprehension tasks.

  1. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellut, Paolo; Sherwin, Gary

    2011-01-01

    TIR cameras can be used for day/night Unmanned Ground Vehicle (UGV) autonomous navigation when stealth is required. The quality of uncooled TIR cameras has significantly improved over the last decade, making them a viable option at low speeds. Limiting factors for stereo ranging with uncooled LWIR cameras are image blur and low-texture scenes. TIR perception capabilities JPL has explored include: (1) single- and dual-band TIR terrain classification, (2) obstacle detection (pedestrians, vehicles, tree trunks, ditches, and water), and (3) perception through obscurants.

  2. Video stereo-laparoscopy system

    Science.gov (United States)

    Xiang, Yang; Hu, Jiasheng; Jiang, Huilin

    2006-01-01

    Minimally invasive surgery (MIS) has contributed significantly to patient care by reducing the morbidity associated with more invasive procedures. MIS procedures have become standard treatment for gallbladder disease and some abdominal malignancies. The imaging system has played a major role in the evolving field of MIS. The image needs good resolution and large magnification; in particular, it must provide depth cues while remaining free of flicker and of suitable brightness. A video stereo-laparoscopy system can meet these demands. This paper introduces a 3D video laparoscope with the following characteristics: field frequency 100 Hz, depth range 150 mm, resolution 10 lp/mm. The working principle of the system is introduced in detail, and the optical system and time-division stereo-display system are described briefly. The system's focusing lens images onto the CCD chip; the optical signal is converted into a video signal, digitized through the A/D stage of the image processing system, and the polarized images are then displayed on the monitor screen through liquid-crystal shutters. Wearing polarized glasses, surgeons can watch a flicker-free 3D image of the tissue or organ. The 3D video laparoscope system has been applied in the MIS field and praised by surgeons. Compared with a traditional 2D video laparoscopy system, it has merits such as reducing operating time, surgical complications, and training time.

  3. MOBILE STEREO-MAPPER: A PORTABLE KIT FOR UNMANNED AERIAL VEHICLES

    Directory of Open Access Journals (Sweden)

    J. Li-Chee-Ming

    2012-09-01

    Full Text Available A low-cost, portable, light-weight mobile stereo-mapping system (MSMS) is under development in the GeoICT Lab, Geomatics Engineering program at York University. The MSMS is designed for remote operation on board unmanned aerial vehicles (UAVs) for navigation and rapid collection of 3D spatial data. Pose estimation of the camera sensors is based on single-frequency RTK-GPS, loosely coupled in a Kalman filter with a MEMS-based IMU. The attitude and heading reference system (AHRS) calculates orientation from the gyro data, aided by accelerometer and magnetometer data to compensate for gyro drift. Two low-cost consumer digital cameras are calibrated and time-synchronized with the GPS/IMU to provide directly georeferenced stereo vision, while a video camera is used for navigation. Object coordinates are determined using rigorous photogrammetric solutions supported by direct georeferencing algorithms for accurate pose estimation of the camera sensors. Before the MSMS can be considered operational, its sensor components and the integrated system itself have to undergo a rigorous calibration process to determine systematic errors and biases and to determine the relative geometry of the sensors. In this paper, the methods and results for system calibration, including camera, boresight and lever-arm calibrations, are presented. An overall accuracy assessment of the calibrated system is given using a 3D test field.
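
    The boresight and lever-arm calibrations feed the standard direct-georeferencing equation, sketched below. The function, symbols, and numbers are illustrative assumptions, not the system's implementation:

```python
import numpy as np

def georeference(r_gps_m, R_body_to_map, R_cam_to_body,
                 lever_arm_b, x_cam, scale):
    """Map-frame coordinates of a point observed by the camera.

    r_gps_m:        GPS antenna position in the mapping frame
    R_body_to_map:  IMU-derived attitude (body -> map rotation)
    R_cam_to_body:  boresight calibration (camera -> body rotation)
    lever_arm_b:    camera offset from the antenna, body frame
    x_cam:          image ray in the camera frame
    scale:          object distance factor along that ray
    All names are illustrative, not from the paper.
    """
    return r_gps_m + R_body_to_map @ (
        scale * (R_cam_to_body @ x_cam) + lever_arm_b)

def rot_z(deg):  # yaw rotation helper (about the vertical axis)
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Example: platform yawed 90 deg, camera aligned with the body frame,
# a 0.2 m lever arm below the antenna, and a 10 m range along the ray.
p = georeference(np.array([100.0, 200.0, 50.0]), rot_z(90), np.eye(3),
                 np.array([0.0, 0.0, -0.2]), np.array([0.0, 0.0, 1.0]),
                 10.0)
print(p.round(2).tolist())  # [100.0, 200.0, 59.8]
```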

  4. Stereo vision with distance and gradient recognition

    Science.gov (United States)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition and movement to a target spot. With sensors that use infrared rays and ultrasonics, a robot can cope with urgent or dangerous situations. But stereo vision of three-dimensional space would give a robot powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed for it to proceed without failure. This study developed an algorithm for recognizing the distance and gradient of the environment by a stereo matching process.
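
    The distance and gradient recognition described above rests on stereo triangulation. A minimal sketch (all parameter values illustrative) in Python:

```python
import math

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Distance along the optical axis for a rectified stereo pair:
    Z = f * B / d. Valid for disparity d > 0."""
    return f_px * baseline_m / disparity_px

def slope_deg(z_near, z_far, step_m):
    """Incline angle from two ground depths a known step apart."""
    return math.degrees(math.atan2(z_far - z_near, step_m))

# 700 px focal length, 12 cm baseline: a 21 px disparity -> 4 m away.
z = depth_from_disparity(700, 0.12, 21)
print(round(z, 3))                        # 4.0
# Depth increases 0.5 m over a 0.5 m step -> a 45-degree incline.
print(round(slope_deg(4.0, 4.5, 0.5), 1))  # 45.0
```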

  5. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes, each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to the outputs of the phototubes develops the scintillation-event position-coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes, so that the phototubes can be positioned as close to the scintillator as possible to obtain less distortion in the field of view and improved spatial resolution compared to conventional planar-photocathode gamma cameras.

  6. Rover's Wheel Churns Up Bright Martian Soil (Stereo)

    Science.gov (United States)

    2009-01-01

    NASA's Mars Exploration Rover Spirit acquired this mosaic on the mission's 1,202nd Martian day, or sol (May 21, 2007), while investigating the area east of the elevated plateau known as 'Home Plate' in the 'Columbia Hills.' The mosaic shows an area of disturbed soil, nicknamed 'Gertrude Weise' by scientists, made by Spirit's stuck right front wheel. The trench exposed a patch of nearly pure silica, with the composition of opal. It could have come from either a hot-spring environment or an environment called a fumarole, in which acidic, volcanic steam rises through cracks. Either way, its formation involved water, and on Earth, both of these types of settings teem with microbial life. Multiple images taken with Spirit's panoramic camera are combined here into a stereo view that appears three-dimensional when seen through red-blue glasses, with the red lens on the left.

  7. Analysis of Camera Arrays Applicable to the Internet of Things.

    Science.gov (United States)

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-03-22

    The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and helps with viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used and analyzed in various applications and research work, there are few comparisons between them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used as either a parallel or a converged camera array, and capture images and videos with it to verify the threshold.
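
    The vertical-parallax difference between the two rigs can be reproduced with a toy pinhole projection: a parallel pair keeps corresponding points on the same image row, while opposite toe-in yaws introduce vertical parallax (keystone distortion). All geometry below is illustrative, not the paper's camera model:

```python
import numpy as np

def project(point_w, cam_pos, yaw_deg, f=1.0):
    """Pinhole projection of a world point into a camera yawed about
    the vertical axis (toe-in). Returns (x, y) image coordinates."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    R = np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])  # world -> camera
    p = R @ (np.asarray(point_w, float) - cam_pos)
    return f * p[0] / p[2], f * p[1] / p[2]

# A point off the optical axis, above center, 5 m away; 0.3 m baseline.
pt = np.array([1.0, 0.8, 5.0])
lcam, rcam = np.array([-0.15, 0, 0]), np.array([0.15, 0, 0])

# Parallel rig: identical yaws -> rows align, vertical parallax is zero.
yl, yr = project(pt, lcam, 0)[1], project(pt, rcam, 0)[1]
print(abs(yl - yr) < 1e-12)  # True

# Converged rig: opposite yaws introduce vertical parallax (keystone),
# which grows as the shooting distance shrinks.
yl, yr = project(pt, lcam, 5)[1], project(pt, rcam, -5)[1]
print(abs(yl - yr) > 1e-3)   # True
```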

  8. Advanced scanning transmission stereo electron microscopy of structural and functional engineering materials

    International Nuclear Information System (INIS)

    Agudo Jácome, L.; Eggeler, G.; Dlouhý, A.

    2012-01-01

    Stereo transmission electron microscopy (TEM) provides a 3D impression of the microstructure in a thin TEM foil. It allows depth and foil-thickness measurements and makes it possible to decide whether a microstructural feature lies inside a thin foil or on its surface. It also conveys the true three-dimensional nature of dislocation configurations. In the present study we first review some basic elements of classical stereo TEM. We then show how the method can be extended by working in the scanning transmission electron microscopy (STEM) mode of a modern analytical 200 kV TEM equipped with a field emission gun (FEG TEM) and a high-angle annular dark-field (HAADF) detector. We combine the two micrographs of a stereo pair into one anaglyph. When viewed with special colored glasses, the anaglyph provides a direct and realistic 3D impression of the microstructure. Three examples demonstrate the potential of this extended stereo TEM technique: a single-crystal Ni-base superalloy, a 9% chromium tempered martensite ferritic steel, and a NiTi shape memory alloy. We consider the effect of camera length, show how foil thicknesses can be measured, and discuss the depth of focus and surface effects. -- Highlights: ► The advanced STEM/HAADF diffraction contrast is extended to 3D stereo-imaging. ► The advantages of the new technique over stereo-imaging in CTEM are demonstrated. ► The new method allows foil thickness measurements in a broad range of conditions. ► We show that features associated with ion milling surface damage can be beneficial for appreciating 3D features of the microstructure.
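
    An anaglyph of the kind described combines the left view into the red channel and the right view into the green and blue channels, so red-cyan glasses route one image to each eye. A minimal sketch:

```python
import numpy as np

def red_cyan_anaglyph(left_gray, right_gray):
    """Stack a grayscale stereo pair into one red-cyan anaglyph image.
    The left eye (red lens) sees the left image; the right eye (cyan
    lens) sees the right image."""
    h, w = left_gray.shape
    rgb = np.zeros((h, w, 3), dtype=left_gray.dtype)
    rgb[..., 0] = left_gray    # red channel   <- left view
    rgb[..., 1] = right_gray   # green channel <- right view
    rgb[..., 2] = right_gray   # blue channel  <- right view
    return rgb

# Tiny check with constant images.
left = np.full((2, 2), 10, dtype=np.uint8)
right = np.full((2, 2), 200, dtype=np.uint8)
ana = red_cyan_anaglyph(left, right)
print(ana[0, 0].tolist())  # [10, 200, 200]
```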

  9. Artificial stereo presentation of meteorological data fields

    Science.gov (United States)

    Hasler, A. F.; Desjardins, M.; Negri, A. J.

    1981-01-01

    The innate capability to perceive three-dimensional stereo imagery has been exploited to present multidimensional meteorological data fields. Variations on an artificial stereo technique first discussed by Pichel et al. (1973) are used to display single and multispectral images in a vivid and easily assimilated manner. Examples of visible/infrared artificial stereo are given for Hurricane Allen and for severe thunderstorms on 10 April 1979. Three-dimensional output from a mesoscale model also is presented. The images may be viewed through the glasses inserted in the February 1981 issue of the Bulletin of the American Meteorological Society, with the red lens over the right eye. The images have been produced on the interactive Atmospheric and Oceanographic Information Processing System (AOIPS) at Goddard Space Flight Center. Stereo presentation is an important aid in understanding meteorological phenomena for operational weather forecasting, research case studies, and model simulations.

  10. Ames Stereo Pipeline for Operation IceBridge

    Science.gov (United States)

    Beyer, R. A.; Alexandrov, O.; McMichael, S.; Fong, T.

    2017-12-01

    We are using the NASA Ames Stereo Pipeline to process Operation IceBridge Digital Mapping System (DMS) images into terrain models and to align them with the simultaneously acquired LIDAR data (ATM and LVIS). The expected outcome is a contiguous, high-resolution terrain model for each flight that Operation IceBridge has flown during its eight-year history of Arctic and Antarctic flights. There are some existing terrain models in the NSIDC repository that cover 2011 and 2012 (out of the total period of 2009 to 2017), which were made with the Agisoft Photoscan commercial software. Our open-source stereo suite has been verified to create terrains of similar quality. The total number of images we expect to process is around 5 million. There are numerous challenges with these data: accurate determination and refinement of camera pose when the images were acquired, based on data logged during the flights and/or using information from existing orthoimages; aligning terrains with little or no features; images containing clouds; JPEG artifacts in input imagery; inconsistencies in how data was acquired and archived over the entire period; not fully reliable camera calibration files; and the sheer amount of data. We will create the majority of terrain models at 40 cm/pixel with a vertical precision of 10 to 20 cm. In some circumstances, when the aircraft was flying higher than usual, those values will be coarser. We will create orthoimages at 10 cm/pixel (with the same caveat that some flights are at higher altitudes). These will differ from existing orthoimages by using the underlying terrain we generate rather than a pre-existing very low-resolution terrain model that may differ significantly from what is on the ground at the time of IceBridge acquisition. The results of this massive processing will be submitted to the NSIDC so that cryosphere researchers will be able to use these data for their investigations.

  11. Comparison of digital and film stereo photography of the optic nerve in the evaluation of patients with glaucoma.

    Science.gov (United States)

    Khouri, Albert S; Szirth, Bernard; Realini, Tony; Fechtner, Robert D

    2006-12-01

    The aim of this study was to validate a digital simultaneous stereo photography system against film in the assessment of optic nerve head features in patients with glaucoma. Fifteen digital and 15 corresponding film simultaneous stereo photographs (SSP) of the optic nerve from patients with glaucoma were graded by two glaucoma specialists. Assessed parameters included the vertical and horizontal cup-to-disc ratios (VCD and HCD, respectively) and the image quality score (1 = worst, 5 = best) for each image. Digital and film SSP were presented in random order, twice to each grader. A total of 60 evaluations (30 digital and 30 film) per grader were collected. A Nidek 3-Dx simultaneous stereo disc camera (Gamagori, Japan) was used with both a standard 35-mm film camera back and a 6.1-megapixel camera (Nikon D1x, Tokyo, Japan) for capture of digital images. All digital images were stored on a computer and reviewed using the Navis Screener software (proprietary software from Nidek). Digital image pairs were evaluated directly on an ADVAN 27-inch liquid crystal display computer monitor (Taipei, Taiwan) with resolution comparable to that of the digital camera, using a Screen-Vu stereo viewer held at a fixed angle to the monitor. Film image pairs were evaluated using a Pentax stereo slide viewer (Asahi Optical Co., Tokyo, Japan) illuminated by a light box over a neutral density filter to match the luminance between the computer screen and the light box. The mean difference between digital and film was near zero for all three evaluated outcomes (VCD, HCD, and quality score), and there was no significant grader effect for any of the outcomes. Digital images correlated well with film for SSP of the optic nerve in glaucoma.

  12. Single Sensor Gait Analysis to Detect Diabetic Peripheral Neuropathy: A Proof of Principle Study

    Directory of Open Access Journals (Sweden)

    Patrick Esser

    2018-01-01

    Full Text Available This study explored the potential utility of gait analysis using a single sensor unit (inertial measurement unit [IMU]) as a simple tool to detect peripheral neuropathy in people with diabetes. Seventeen people (14 men) aged 63±9 years (mean±SD) with diabetic peripheral neuropathy performed a 10-m walk test instrumented with an IMU on the lower back. Compared to a reference healthy control data set (matched by gender, age, and body mass index), both spatiotemporal and gait control variables were different between groups, with walking speed, step time, and SDa (a gait control parameter) demonstrating good discriminatory power (receiver operating characteristic area under the curve >0.8). These results provide a proof of principle of this relatively simple approach which, when applied in clinical practice, can detect a signal from those with known diabetic peripheral neuropathy. The technology has the potential to be used both routinely in the clinic and for tele-health applications. Further research should focus on investigating its efficacy as an early indicator of peripheral neuropathy or as a measure of the effectiveness of its management. This could support the development of interventions to prevent complications such as foot ulceration or Charcot's foot.

  13. Single Sensor Gait Analysis to Detect Diabetic Peripheral Neuropathy: A Proof of Principle Study.

    Science.gov (United States)

    Esser, Patrick; Collett, Johnny; Maynard, Kevin; Steins, Dax; Hillier, Angela; Buckingham, Jodie; Tan, Garry D; King, Laurie; Dawes, Helen

    2018-02-01

    This study explored the potential utility of gait analysis using a single sensor unit (inertial measurement unit [IMU]) as a simple tool to detect peripheral neuropathy in people with diabetes. Seventeen people (14 men) aged 63±9 years (mean±SD) with diabetic peripheral neuropathy performed a 10-m walk test instrumented with an IMU on the lower back. Compared to a reference healthy control data set (matched by gender, age, and body mass index) both spatiotemporal and gait control variables were different between groups, with walking speed, step time, and SDa (a gait control parameter) demonstrating good discriminatory power (receiver operating characteristic area under the curve >0.8). These results provide a proof of principle of this relatively simple approach which, when applied in clinical practice, can detect a signal from those with known diabetic peripheral neuropathy. The technology has the potential to be used both routinely in the clinic and for tele-health applications. Further research should focus on investigating its efficacy as an early indicator of peripheral neuropathy or as a measure of the effectiveness of its management. This could support the development of interventions to prevent complications such as foot ulceration or Charcot's foot. Copyright © 2018 Korean Diabetes Association.
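The discriminatory power reported above (ROC area under the curve >0.8) needs no statistics library to compute: the AUC equals the probability that a randomly chosen patient value exceeds a randomly chosen control value (the Mann-Whitney interpretation). A stdlib-Python sketch with made-up step-time values, not the study's data:

```python
def roc_auc(pos, neg):
    """AUC as P(score_pos > score_neg) + 0.5 * P(tie), by pairwise comparison."""
    wins = ties = 0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Illustrative step times (s); neuropathy tends toward longer, more variable steps:
neuropathy = [0.62, 0.65, 0.70, 0.68, 0.72]
controls = [0.52, 0.55, 0.58, 0.54, 0.63]
auc = roc_auc(neuropathy, controls)  # values above 0.8 indicate good discrimination
```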

  14. STEREO interplanetary shocks and foreshocks

    Energy Technology Data Exchange (ETDEWEB)

    Blanco-Cano, X. [Instituto de Geofisica, UNAM, CU, Coyoacan 04510 DF (Mexico); Kajdic, P. [IRAP-University of Toulouse, CNRS, Toulouse (France); Aguilar-Rodriguez, E. [Instituto de Geofisica, UNAM, Morelia (Mexico); Russell, C. T. [ESS and IGPP, University of California, Los Angeles, 603 Charles Young Drive, Los Angeles, CA 90095 (United States); Jian, L. K. [NASA Goddard Space Flight Center, Greenbelt, MD and University of Maryland, College Park, MD (United States); Luhmann, J. G. [SSL, University of California Berkeley (United States)

    2013-06-13

    We use STEREO data to study shocks driven by stream interactions and the waves associated with them. During the years of the extended solar minimum 2007-2010, stream interaction shocks have Mach numbers between 1.1 and 3.8 and θBn ≈ 20°-86°. We find a variety of waves, including whistlers and low-frequency fluctuations. Upstream whistler waves may be generated at the shock, and upstream ultra-low-frequency (ULF) waves can be driven locally by ion instabilities. The downstream wave spectra can be formed by both locally generated perturbations and shock-transmitted waves. We find that many quasi-perpendicular shocks can be accompanied by ULF wave and ion foreshocks, in contrast to Earth's bow shock. Fluctuations downstream of quasi-parallel shocks tend to have larger amplitudes than waves downstream of quasi-perpendicular shocks. Proton foreshocks of shocks driven by stream interactions have extensions dr ≤ 0.05 AU. This is smaller than the foreshock extensions of ICME-driven shocks. The difference in foreshock extensions is related to the fact that ICME-driven shocks are formed closer to the Sun and therefore begin to accelerate particles very early in their existence, while stream interaction shocks form at ≈1 AU and have been producing suprathermal particles for a shorter time.

  15. Precision analysis of triangulations using forward-facing vehicle-mounted cameras for augmented reality applications

    Science.gov (United States)

    Schmid, Stephan; Fritsch, Dieter

    2017-06-01

    One crucial ingredient for augmented reality applications is having or obtaining information about the environment. In this paper, we examine the case of an augmented video application for forward-facing vehicle-mounted cameras. In particular, we examine the method of obtaining geometry information of the environment via stereo computation / structure from motion. A detailed analysis of the geometry of the problem is provided, in particular of the singularity in front of the vehicle. For typical scenes, we compare monocular configurations with stereo configurations subject to the packaging constraints of forward-facing cameras in consumer vehicles.
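The precision analysis described above rests on the standard first-order error propagation for stereo triangulation: with depth Z = f·B/d (focal length f in pixels, baseline B, disparity d), a disparity error σ_d maps to a depth error σ_Z = Z²·σ_d/(f·B), growing quadratically with distance. A sketch with hypothetical rig parameters, not the paper's configuration:

```python
def depth_uncertainty(Z, baseline, focal_px, sigma_disp_px=0.5):
    """First-order propagation of disparity noise to depth:
    Z = f*B/d  implies  sigma_Z = Z**2 * sigma_d / (f * B)."""
    return Z ** 2 * sigma_disp_px / (focal_px * baseline)

# Hypothetical forward-facing rig: 30 cm baseline, 1200 px focal length.
# Uncertainty grows quadratically with distance:
sigmas = {Z: depth_uncertainty(Z, baseline=0.30, focal_px=1200) for Z in (5, 20, 50)}
```

The quadratic growth is why packaging constraints (short baselines) matter so much for forward-facing automotive stereo.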

  16. Markerless Augmented Reality via Stereo Video See-Through Head-Mounted Display Device

    Directory of Open Access Journals (Sweden)

    Chung-Hung Hsieh

    2015-01-01

    Full Text Available Conventionally, camera localization for augmented reality (AR) relies on detecting a known pattern within the captured images. In this study, a markerless AR scheme has been designed based on a stereo video see-through head-mounted display (HMD) device. The proposed markerless AR scheme can be utilized for medical applications such as training, telementoring, or preoperative explanation. Firstly, a virtual model for AR visualization is aligned to the target in physical space by an improved Iterative Closest Point (ICP) based surface registration algorithm, with the target surface structure reconstructed by a stereo camera pair; then, a markerless AR camera localization method is designed based on the Kanade-Lucas-Tomasi (KLT) feature tracking algorithm and the Random Sample Consensus (RANSAC) correction algorithm. Our AR camera localization method is shown to outperform traditional marker-based and sensor-based approaches. The demonstration system was evaluated with a plastic dummy head, and the display result is satisfactory for multiple-view observation.
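The RANSAC correction step mentioned above follows the generic hypothesize-and-verify pattern: sample a minimal set, fit a model, count inliers, keep the best. The paper applies it to camera pose; the self-contained line-fitting sketch below only illustrates the principle, with simulated track points and gross outliers standing in for KLT mismatches:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Robustly fit y = a*x + b: sample 2 points, fit, count inliers, keep best."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate minimal sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# Eight collinear "tracked features" plus two gross outliers (mismatches):
pts = [(x, 2 * x + 1) for x in range(8)] + [(2, 40), (5, -30)]
model, inliers = ransac_line(pts)  # recovers y = 2x + 1, rejecting the outliers
```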

  17. Three-Dimensional Reconstruction from Single Image Based on Combination of CNN and Multi-Spectral Photometric Stereo

    Directory of Open Access Journals (Sweden)

    Liang Lu

    2018-03-01

    Full Text Available Multi-spectral photometric stereo can recover pixel-wise surface normals from a single RGB image. The difficulty lies in the fact that the intensity in each channel entangles illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using a deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict the initial normals of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
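At the core of (multi-spectral) photometric stereo is a per-pixel linear system: under a Lambertian model each channel's intensity is I_c = ρ·(l_c·n), so stacking the three light directions as rows of L gives L·g = I with g = ρ·n, from which albedo ρ = |g| and normal n = g/|g|. A stdlib-Python sketch with hypothetical light directions and intensities (real multi-spectral setups must also untangle camera response, as the abstract notes):

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def photometric_stereo_pixel(L, I):
    """Solve L @ g = I by Cramer's rule, then split g = rho * n."""
    d = det3(L)
    g = []
    for j in range(3):
        Mj = [[I[i] if k == j else L[i][k] for k in range(3)] for i in range(3)]
        g.append(det3(Mj) / d)
    rho = math.sqrt(sum(x * x for x in g))
    n = [x / rho for x in g]
    return rho, n

# Hypothetical unit light directions (one per channel) and observed intensities
# generated from albedo 0.8 and an upward-facing normal (0, 0, 1):
L = [[0.0, 0.0, 1.0], [0.6, 0.0, 0.8], [0.0, 0.6, 0.8]]
I = [0.8, 0.64, 0.64]
rho, n = photometric_stereo_pixel(L, I)
```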

  18. DEPTH CAMERAS ON UAVs: A FIRST APPROACH

    Directory of Open Access Journals (Sweden)

    A. Deris

    2017-02-01

    Full Text Available Accurate depth information retrieval of a scene is a field under investigation in the research areas of photogrammetry, computer vision and robotics. Various technologies, active as well as passive, are used to serve this purpose, such as laser scanning, photogrammetry and depth sensors, with the latter being a promising innovative approach for fast and accurate 3D object reconstruction using a broad variety of measuring principles including stereo vision, infrared light or laser beams. In this study we investigate the use of the newly designed Stereolabs ZED depth camera, based on passive stereo depth calculation, mounted on an Unmanned Aerial Vehicle with an ad-hoc setup specially designed for outdoor scene applications. Towards this direction, the results of its depth calculations and the scene reconstruction generated by Simultaneous Localization and Mapping (SLAM) algorithms are compared and evaluated, based on qualitative and quantitative criteria, with respect to those derived by a typical Structure from Motion (SfM) and Multiple View Stereo (MVS) pipeline for a challenging cultural heritage application.

  19. Evaluation of accelerometer based multi-sensor versus single-sensor activity recognition systems.

    Science.gov (United States)

    Gao, Lei; Bourke, A K; Nelson, John

    2014-06-01

    Physical activity has a positive impact on people's well-being, and it has been shown to decrease the occurrence of chronic diseases in the older adult population. To date, a substantial number of research studies exist which focus on activity recognition using inertial sensors. Many of these studies adopt a single-sensor approach and focus on proposing novel features combined with complex classifiers to improve the overall recognition accuracy. In addition, the implementation of the advanced feature extraction algorithms and the complex classifiers exceeds the computing ability of most current wearable sensor platforms. This paper proposes a method that adopts multiple sensors on distributed body locations to overcome this problem. The objective of the proposed system is to achieve higher recognition accuracy with "light-weight" signal processing algorithms, which run on a distributed computing based sensor system comprised of computationally efficient nodes. For analysing and evaluating the multi-sensor system, eight subjects were recruited to perform eight normal scripted activities in different life scenarios, each repeated three times. Thus a total of 192 activities were recorded, resulting in 864 separate annotated activity states. The design of such a multi-sensor system required consideration of the following: signal pre-processing algorithms, sampling rate, feature selection and classifier selection. Each has been investigated, and the most appropriate approach is selected to achieve a trade-off between recognition accuracy and computing execution time. A comparison of six different systems, which employ single or multiple sensors, is presented. The experimental results illustrate that the proposed multi-sensor system can achieve an overall recognition accuracy of 96.4% by adopting the mean and variance features, using the Decision Tree classifier. The results demonstrate that elaborate classifiers and feature sets are not required to achieve high recognition accuracy.
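The mean and variance features behind the 96.4% result above are cheap enough for the "light-weight" nodes the paper targets. A minimal sketch of windowed feature extraction over a synthetic accelerometer trace (illustrative values, not the study's data):

```python
import math

def window_features(samples, win=50):
    """Mean and variance over non-overlapping windows of accelerometer samples."""
    feats = []
    for i in range(0, len(samples) - win + 1, win):
        w = samples[i:i + win]
        mean = sum(w) / win
        var = sum((x - mean) ** 2 for x in w) / win
        feats.append((mean, var))
    return feats

# Synthetic vertical acceleration: quiet standing (tiny variance), then
# walking (large periodic variance) -- the separation a classifier exploits:
standing = [9.81 + 0.01 * math.sin(i) for i in range(50)]
walking = [9.81 + 2.0 * math.sin(i / 2) for i in range(50)]
feats = window_features(standing + walking)
```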

  20. A probabilistic framework for single-sensor acoustic emission source localization in thin metallic plates

    Science.gov (United States)

    Ebrahimkhanlou, Arvin; Salamone, Salvatore

    2017-09-01

    Tracking edge-reflected acoustic emission (AE) waves can allow the localization of their sources. Specifically, in bounded isotropic plate structures, only one sensor may be used to perform these source localizations. The primary goal of this paper is to develop a three-step probabilistic framework to quantify the uncertainties associated with such single-sensor localizations. According to this framework, a probabilistic approach is first used to estimate the direct distances between AE sources and the sensor. Then, an analytical model is used to reconstruct the envelope of edge-reflected AE signals based on the source-to-sensor distance estimations and their first arrivals. Finally, the correlation between the probabilistically reconstructed envelopes and the recorded AE signals is used to estimate confidence contours for the location of AE sources. To validate the proposed framework, Hsu-Nielsen pencil lead break (PLB) tests were performed on the surface as well as the edges of an aluminum plate. The localization results show that the estimated confidence contours surround the actual source locations. In addition, the performance of the framework was tested in a noisy environment simulated by two dummy transducers and an arbitrary wave generator. The results show that in low-noise environments, the shape and size of the confidence contours depend on the sources and their locations. However, in highly noisy environments, the size of the confidence contours monotonically increases with the noise floor. These probabilistic results suggest that the proposed framework can provide more comprehensive information regarding the location of AE sources.
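The third step above scores candidate source locations by correlating model-reconstructed envelopes against the recorded signal; a plain Pearson correlation coefficient is one way such a matching score can be computed (a sketch of the scoring idea, not the authors' exact estimator):

```python
import math

def normalized_correlation(a, b):
    """Pearson correlation between a modeled envelope and a recorded one."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Toy envelopes: a direct arrival followed by one edge reflection. A candidate
# source at the right distance reproduces the recording; a wrong one shifts it:
recorded = [0, 1, 4, 2, 1, 3, 1, 0, 0, 0]
good_model = [0, 1, 4, 2, 1, 3, 1, 0, 0, 0]
bad_model = [0, 0, 0, 1, 4, 2, 1, 3, 1, 0]
score_good = normalized_correlation(good_model, recorded)
score_bad = normalized_correlation(bad_model, recorded)
```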

  1. Effects of illumination differences on photometric stereo shape-and-albedo-from-shading for precision lunar surface reconstruction

    Science.gov (United States)

    Chung Liu, Wai; Wu, Bo; Wöhler, Christian

    2018-02-01

    Photoclinometric surface reconstruction techniques such as Shape-from-Shading (SfS) and Shape-and-Albedo-from-Shading (SAfS) retrieve topographic information of a surface on the basis of the reflectance information embedded in the image intensity of each pixel. SfS or SAfS techniques have been utilized to generate pixel-resolution digital elevation models (DEMs) of the Moon and other planetary bodies. Photometric stereo SAfS analyzes images under multiple illumination conditions to improve the robustness of reconstruction. In this case, the directional difference in illumination between the images is likely to affect the quality of the reconstruction result. In this study, we quantitatively investigate the effects of illumination differences on photometric stereo SAfS. Firstly, an algorithm for photometric stereo SAfS is developed, and then, an error model is derived to analyze the relationships between the azimuthal and zenith angles of illumination of the images and the reconstruction qualities. The developed algorithm and error model were verified with high-resolution images collected by the Narrow Angle Camera (NAC) of the Lunar Reconnaissance Orbiter Camera (LROC). Experimental analyses reveal that (1) the resulting error in photometric stereo SAfS depends on both the azimuthal and the zenith angles of illumination as well as the general intensity of the images and (2) the predictions from the proposed error model are consistent with the actual slope errors obtained by photometric stereo SAfS using the LROC NAC images. The proposed error model enriches the theory of photometric stereo SAfS and is of significance for optimized lunar surface reconstruction based on SAfS techniques.

  2. China's First Civilian Three-line-array Stereo Mapping Satellite: ZY-3

    Directory of Open Access Journals (Sweden)

    LI Deren

    2016-02-01

    Full Text Available On January 9th, 2012, China launched its first civilian three-line-array stereo mapping satellite, ZY-3. ZY-3 is equipped with two front- and back-view TDI CCD cameras with a resolution better than 3.5 m and a swath wider than 50 km, one TDI CCD camera with a resolution better than 2.1 m and a swath wider than 50 km, and one multispectral camera with a resolution better than 5.8 m. In order to ensure accuracy and reliability, ZY-3 adopts a large platform equipped with dual-frequency GPS and additional gyros. ZY-3 achieves a geolocation accuracy better than 15 m without GCPs, and a geolocation accuracy better than 3 m and a plane geolocation accuracy better than 4 m with GCPs, which completely satisfies 1:50 000 mapping precision.

  3. Research on dimensional measurement method of mechanical parts based on stereo vision

    Science.gov (United States)

    Zhou, Zhuoyun; Zhang, Xuewu; Shen, Haodong; Zhang, Zhuo; Fan, Xinnan

    2015-10-01

    This paper investigates the key and difficult issues in stereo measurement in depth, including camera calibration, feature extraction, stereo matching and depth computation, and then puts forward a novel matching method combining seed region growing and SIFT feature matching. It first uses SIFT characteristics as the matching criteria for feature point matching, and then takes the feature points as seed points for region growing to obtain better depth information. Experiments are conducted to validate the efficiency of the proposed method using standard matching graphs, and then the proposed method is applied to the dimensional measurement of mechanical parts. The results show that the measurement error is less than 0.5 mm for medium-sized mechanical parts, which can meet the demands of precision measurement.
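The seed-based matching step can be illustrated by the classical region-growing kernel: starting from a seed pixel (here, a matched SIFT feature), 4-connected neighbours are absorbed while their values stay close to the seed's. A toy sketch on a small grid with illustrative values only:

```python
def region_grow(img, seed, tol=1):
    """4-connected region growing from a seed pixel, accepting neighbours
    whose value differs from the seed value by at most tol."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < h and 0 <= c < w):
            continue
        if abs(img[r][c] - base) > tol:
            continue
        region.add((r, c))
        stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return region

# Toy "intensity" image: a bright object (values ~9) on a dark background (0):
img = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 8, 0],
    [0, 0, 0, 0],
]
region = region_grow(img, seed=(1, 1))  # grows over the four object pixels
```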

  4. Cultural heritage omni-stereo panoramas for immersive cultural analytics - From the Nile to the Hijaz

    KAUST Repository

    Smith, Neil

    2013-09-01

    The digital imaging acquisition and visualization techniques described here provide a hyper-realistic stereoscopic spherical capture of cultural heritage sites. An automated dual-camera system is used to capture sufficient stereo digital images to cover a sphere or cylinder. The resulting stereo images are projected undistorted in VR systems, providing an immersive virtual environment in which researchers can collaboratively study the important textural details of an excavation or historical site. This imaging technique complements existing technologies such as LiDAR or SfM, providing more detailed textural information that can be used in conjunction with them for analysis and visualization. The advantages of this digital imaging technique for cultural heritage can be seen in its non-invasive and rapid capture of heritage sites for documentation, analysis, and immersive visualization. The technique is applied to several significant heritage sites in Luxor, Egypt and Saudi Arabia.

  5. The potential risk of personal stereo players

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez, Rodrigo Pizarro; Reuter, Karen

    2010-01-01

    The technological development within personal stereo systems, such as MP3 players, e.g. iPods, has changed music listening habits from home entertainment to everyday and everywhere use. The technology has developed considerably since the introduction of cassette players and CD walkmen. High-level low-distortion music is produced by minimal devices which can play for long periods. In this paper, the existing literature on effects of personal stereo systems is reviewed, incl. studies of exposure levels, and effects on hearing. Generally, it is found that the levels being used are of concern, which in one study is demonstrated to relate to the specific use in situations with high levels of background noise. Another study demonstrates that the effect of using personal stereo is comparable to that of being exposed to noise in industry. The results are discussed in view of the measurement...

  6. The potential risk of personal stereo players

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez, Rodrigo Pizarro; Reuter, Karen

    2010-01-01

    The technological development within personal stereo systems, such as MP3 players, e.g. iPods, has changed music listening habits from home entertainment to everyday and everywhere use. The technology has developed considerably since the introduction of cassette players and CD walkmen. High-level low-distortion music is produced by minimal devices which can play for long periods. In this paper, the existing literature on effects of personal stereo systems is reviewed, incl. studies of exposure levels, and effects on hearing. Generally, it is found that the levels being used are of concern, which in one study is demonstrated to relate to the specific use in situations with high levels of background noise. Another study demonstrates that the effect of using personal stereo is comparable to that of being exposed to noise in industry. The results are discussed in view of the measurement...

  7. Restoration of degraded images using stereo vision

    Science.gov (United States)

    Hernández-Beltrán, José Enrique; Díaz-Ramírez, Victor H.; Juarez-Salazar, Rigoberto

    2017-08-01

    Image restoration consists of retrieving an original image by processing captured images of a scene which are degraded by noise, blurring or optical scattering. Commonly, restoration algorithms utilize a single monocular image of the observed scene and assume a known degradation model. In this approach, valuable information about the three-dimensional scene is discarded. This work presents a locally-adaptive algorithm for image restoration that employs stereo vision. The proposed algorithm utilizes information about the three-dimensional scene as well as local image statistics to improve the quality of a single restored image by processing pairs of stereo images. Computer simulation results obtained with the proposed algorithm are analyzed and discussed in terms of objective metrics by processing stereo images degraded by optical scattering.
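As one concrete instance of a locally-adaptive estimator driven by local statistics, a Lee-style filter shrinks each sample toward the local mean by a gain that depends on the ratio of local variance to an assumed noise variance, smoothing flat regions while preserving edges. A 1-D sketch under an assumed additive-noise model; this is not the authors' algorithm:

```python
def lee_filter_1d(signal, win=3, noise_var=0.04):
    """Locally adaptive smoothing: out = mean + k * (x - mean), with the gain
    k = max(var - noise_var, 0) / var estimated from each local window."""
    half = win // 2
    out = []
    for i in range(len(signal)):
        w = signal[max(0, i - half): i + half + 1]
        m = sum(w) / len(w)
        v = sum((x - m) ** 2 for x in w) / len(w)
        k = max(v - noise_var, 0) / v if v > 0 else 0.0
        out.append(m + k * (signal[i] - m))
    return out

# Noisy flat region followed by a step edge: the flat part is smoothed to the
# local mean (gain 0), while the edge is largely preserved (gain near 1):
sig = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9]
smoothed = lee_filter_1d(sig)
```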

  8. INTEGRATED GEOREFERENCING OF STEREO IMAGE SEQUENCES CAPTURED WITH A STEREOVISION MOBILE MAPPING SYSTEM – APPROACHES AND PRACTICAL RESULTS

    Directory of Open Access Journals (Sweden)

    H. Eugster

    2012-07-01

    Full Text Available Stereovision based mobile mapping systems enable the efficient capturing of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations – in our case of the imaging sensors – normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed that allow validating and updating the INS/GNSS based trajectory with independently estimated positions during prolonged GNSS signal outages, in order to increase the georeferencing accuracy up to the project requirements.

  9. Integrated Georeferencing of Stereo Image Sequences Captured with a Stereovision Mobile Mapping System - Approaches and Practical Results

    Science.gov (United States)

    Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.

    2012-07-01

    Stereovision based mobile mapping systems enable the efficient capturing of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case of the imaging sensors - normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed that allow validating and updating the INS/GNSS based trajectory with independently estimated positions during prolonged GNSS signal outages, in order to increase the georeferencing accuracy up to the project requirements.

  10. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors.

    Science.gov (United States)

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-12-22

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must deal with aggressive 6-DOF (Degree Of Freedom) motion; (2) it must be robust to intermittent GPS (Global Positioning System) (even GPS-denied) situations; (3) it must work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The new estimation system has two main parts. The first is a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry); it is derived and discussed in detail. The second is a long-range stereo visual odometry, proposed for high-altitude MAV odometry calculation, that uses both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimate. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight.
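The loose fusion of absolute measurements (GPS, barometer) with relative ones (IMU, visual odometry) can be illustrated by a one-dimensional Kalman filter: relative motion drives the predict step, and absolute fixes correct the state when available, with uncertainty growing during dropouts. A toy sketch of this predict/update pattern, not the paper's stochastic-cloning EKF:

```python
def kf_fuse(z_abs_list, u_rel_list, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """1-D Kalman filter: predict with relative motion u (odometry/IMU),
    update with absolute position z (GPS/barometer); z = None means dropout."""
    x, p = x0, p0
    track = []
    for u, z in zip(u_rel_list, z_abs_list):
        # Predict: integrate the relative measurement, grow the covariance.
        x, p = x + u, p + q
        # Update: fuse the absolute measurement when one is available.
        if z is not None:
            k = p / (p + r)
            x, p = x + k * (z - x), (1 - k) * p
        track.append(x)
    return track

# Vehicle moving ~1 m/step; GPS drops out for three steps mid-sequence, during
# which the estimate coasts on odometry alone:
u = [1.0] * 6
z = [1.0, 2.0, None, None, None, 6.0]
est = kf_fuse(z, u)
```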

  11. Researches on hazard avoidance cameras calibration of Lunar Rover

    Science.gov (United States)

    Li, Chunyan; Wang, Li; Lu, Xin; Chen, Jihua; Fan, Shenghong

    2017-11-01

    The Lunar Lander and Rover of China will be launched in 2013. It will accomplish the mission targets of lunar soft landing and patrol exploration. The Lunar Rover has a forward-facing stereo camera pair (Hazcams) for hazard avoidance. Hazcam calibration is essential for stereo vision. The Hazcam optics are f-theta fish-eye lenses with a 120°×120° horizontal/vertical field of view (FOV) and a 170° diagonal FOV. They introduce significant distortion in images, and the acquired images are quite warped, which prevents conventional camera calibration algorithms from working well. A photogrammetric calibration method for the geometric model of this type of fish-eye optics is investigated in this paper. In the method, the Hazcam model is represented by collinearity equations with interior orientation and exterior orientation parameters [1] [2]. For high-precision applications, the accurate calibration model is formulated with the radial symmetric distortion and the decentering distortion, as well as parameters to model affinity and shear, based on the fisheye deformation model [3] [4]. The proposed method has been applied to the stereo camera calibration system for the Lunar Rover.
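An f-theta fish-eye maps the field angle to image radius linearly, r = f·θ, which is what keeps a 170° diagonal FOV on a finite sensor (a pinhole model's r = f·tan θ diverges toward 90°); calibration then adds radial distortion terms on top. A sketch with a hypothetical focal length and distortion coefficients, not the Hazcam calibration:

```python
import math

def f_theta_project(theta_deg, f_mm, k=(0.0, 0.0)):
    """Equidistant (f-theta) projection with a simple polynomial radial
    distortion: r = f * theta * (1 + k1*theta**2 + k2*theta**4), theta in rad."""
    t = math.radians(theta_deg)
    return f_mm * t * (1 + k[0] * t ** 2 + k[1] * t ** 4)

# With the ideal model (k = 0) the image height grows linearly with angle:
r60 = f_theta_project(60, f_mm=4.0)
r85 = f_theta_project(85, f_mm=4.0)  # near the edge of a 170-degree FOV
```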

  12. Miniature photometric stereo system for textile surface structure reconstruction

    Science.gov (United States)

    Gorpas, Dimitris; Kampouris, Christos; Malassiotis, Sotiris

    2013-04-01

    In this work a miniature photometric stereo system is presented, targeting the three-dimensional structural reconstruction of various fabric types. This is a supportive module of a robot system attempting to solve the well-known "laundry problem". The miniature device has been designed for mounting onto the robot gripper. It is composed of a low-cost off-the-shelf camera, operating in macro mode, and eight light emitting diodes. The synchronization between image acquisition and lighting direction is controlled by an Arduino Nano board and software triggering. Ambient light is blocked by a cylindrical enclosure. The direction of illumination is recovered by locating the reflection (brightest point) on a mirror sphere, while a flat-fielding process compensates for the non-uniform illumination. For the evaluation of this prototype, the classical photometric stereo methodology has been used. The preliminary results on a large number of textiles are very promising for the successful integration of the miniature module into the robot system. The required interaction with the robot is implemented through the estimation of Brenner's focus measure. This metric successfully assesses the focus quality with reduced time requirements in comparison to other well-accepted focus metrics. Besides the target application, the small size of the developed system makes it a very promising candidate for applications with space restrictions, like quality control in industrial production lines or object recognition based on structural information, and in applications where ease of operation and light weight are required, as in the biomedical field, and especially in dermatology.
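Brenner's focus measure mentioned above is simply the sum of squared intensity differences between pixels two positions apart, which is why it is cheaper than gradient- or frequency-based metrics. A minimal sketch on toy image rows:

```python
def brenner_focus(img):
    """Brenner's gradient: sum of squared differences between pixels two
    columns apart; larger values indicate sharper focus."""
    return sum((row[j + 2] - row[j]) ** 2
               for row in img for j in range(len(row) - 2))

# A sharp edge scores higher than the same edge blurred across the row:
sharp = [[0, 0, 0, 10, 10, 10]] * 4
blurred = [[0, 2, 4, 6, 8, 10]] * 4
```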

  13. Indoor calibration for stereoscopic camera STC: a new method

    Science.gov (United States)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2017-11-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of the 3D reconstruction of a planetary surface from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of the surface of Mercury. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo pairs: for this, a stereo validation setup giving an indoor reproduction of the in-flight observing conditions of the instrument provides much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a unique sensor. Its optical model is based on a brand-new concept that minimizes mass and volume and allows push-frame imaging. This design required a new calibration pipeline to test the reconstruction method in a controlled environment. An ad-hoc indoor setup has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will have to deal with sources/targets essentially placed at infinity. This auxiliary indoor setup permits, on the one hand, rescaling the stereo reconstruction problem from the in-flight operating distance of 400 km to almost 1 m in the lab; on the other hand, it allows replicating different viewing angles for the considered targets. Neglecting for the sake of simplicity the curvature of Mercury, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir.

  14. Systematic Errors in Stereo PIV When Imaging through a Glass Window

    Science.gov (United States)

    Green, Richard; McAlister, Kenneth W.

    2004-01-01

    This document assesses the magnitude of velocity measurement errors that may arise when performing stereo particle image velocimetry (PIV) with cameras viewing through a thick, refractive window, when the calibration is performed in one plane only. The effect of the window is to introduce a refractive error that increases with window thickness and camera angle of incidence. The calibration should be performed while viewing through the test-section window, otherwise a potentially significant error may be introduced that affects each velocity component differently. However, even when the calibration is performed correctly, another error may arise during the stereo reconstruction if the perspective angle determined for each camera does not account for the displacement of the light rays as they refract through the thick window. Care should be exercised when applying a single-plane calibration, since certain implicit assumptions may in fact require conditions that are extremely difficult to meet in a practical laboratory environment. It is suggested that the effort expended to ensure this accuracy may be better spent on a lengthier volumetric calibration procedure, which does not rely upon the assumptions implicit in the single-plane method and avoids the need for the perspective angle to be calculated.
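    The growth of the refractive error with window thickness and incidence angle follows from the classic lateral displacement of a ray crossing a flat slab. The sketch below illustrates that dependence only; it is not the document's full stereo-PIV error model, and the thickness and refractive index are arbitrary example values.

```python
import numpy as np

def slab_lateral_shift(theta_deg, t, n=1.5):
    """Lateral displacement of a ray crossing a flat slab of thickness t
    and refractive index n at incidence angle theta:
        d = t * sin(theta - theta_r) / cos(theta_r),
    with theta_r from Snell's law sin(theta_r) = sin(theta) / n."""
    th = np.radians(theta_deg)
    th_r = np.arcsin(np.sin(th) / n)   # refracted angle inside the slab
    return t * np.sin(th - th_r) / np.cos(th_r)

# The shift is zero at normal incidence and grows with the camera angle,
# which is why the off-axis stereo cameras are the ones most affected.
for angle in (10, 30, 45):
    print(angle, slab_lateral_shift(angle, t=20.0))
```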

  15. Stereoselectivity in metallocene-catalyzed coordination polymerization of renewable methylene butyrolactones: From stereo-random to stereo-perfect polymers

    KAUST Repository

    Chen, Xia

    2012-05-02

    Coordination polymerization of renewable α-methylene-γ-(methyl) butyrolactones by chiral C 2-symmetric zirconocene catalysts produces stereo-random, highly stereo-regular, or perfectly stereo-regular polymers, depending on the monomer and catalyst structures. Computational studies yield a fundamental understanding of the stereocontrol mechanism governing these new polymerization reactions mediated by chiral metallocenium catalysts. © 2012 American Chemical Society.

  16. Stereoselectivity in metallocene-catalyzed coordination polymerization of renewable methylene butyrolactones: from stereo-random to stereo-perfect polymers.

    Science.gov (United States)

    Chen, Xia; Caporaso, Lucia; Cavallo, Luigi; Chen, Eugene Y-X

    2012-05-02

    Coordination polymerization of renewable α-methylene-γ-(methyl)butyrolactones by chiral C(2)-symmetric zirconocene catalysts produces stereo-random, highly stereo-regular, or perfectly stereo-regular polymers, depending on the monomer and catalyst structures. Computational studies yield a fundamental understanding of the stereocontrol mechanism governing these new polymerization reactions mediated by chiral metallocenium catalysts. © 2012 American Chemical Society.

  17. Head Pose Estimation from Passive Stereo Images

    DEFF Research Database (Denmark)

    Breitenstein, Michael D.; Jensen, Jeppe; Høilund, Carsten

    2009-01-01

    function. Our algorithm incorporates 2D and 3D cues to make the system robust to low-quality range images acquired by passive stereo systems. It handles large pose variations (of ±90 ° yaw and ±45 ° pitch rotation) and facial variations due to expressions or accessories. For a maximally allowed error of 30...

  18. Artistic Stereo Imaging by Edge Preserving Smoothing

    NARCIS (Netherlands)

    Papari, Giuseppe; Campisi, Patrizio; Callet, Patrick Le; Petkov, Nicolai

    2009-01-01

    Stereo imaging is an important area of image and video processing, with exploding progress in the last decades. An open issue in this field is the understanding of the conditions under which the straightforward application of a given image processing operator to both the left and right image of a

  19. Local Stereo Matching Using Adaptive Local Segmentation

    NARCIS (Netherlands)

    Damjanovic, S.; van der Heijden, Ferdinand; Spreeuwers, Lieuwe Jan

    We propose a new dense local stereo matching framework for gray-level images based on an adaptive local segmentation using a dynamic threshold. We define a new validity domain of the fronto-parallel assumption based on the local intensity variations in the 4-neighborhood of the matching pixel. The

  20. Development of 3D Image Measurement System and Stereo-matching Method, and Its Archeological Measurement

    Science.gov (United States)

    Kochi, Nobuo; Ito, Tadayuki; Kitamura, Kazuo; Kaneko, Syun'ichi

    Three-dimensional measurement and modeling systems using digital cameras on a PC are making progress, and the need for them is increasingly felt in terrestrial (close-range) photogrammetry for sectors such as cultural heritage preservation, architecture, civil engineering, manufacturing, and measurement. We have therefore developed a system to improve the accuracy of stereo-matching, the very core of 3D measurement. As for the stereo-matching method, in order to minimize mismatching and to be robust to geometric distortion, occlusion, and brightness change, we devised a coarse-to-fine strategy integrating OCM (Orientation Code Matching) with LSM (Least Squares Matching). The system attained an accuracy of 0.26 mm when we experimented on a mannequin, and when we experimented on archaeological ruins in Greece and Turkey, the accuracy was within 1 cm compared with their blueprint plans. Moreover, a survey operation that formerly took workers at least 1.5 months with existing methods now needs only 3 or 4 days, confirming the system's practicality and efficiency. This paper demonstrates our new system of 3D measurement and stereo-matching with concrete examples of its practical application.
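    Coarse area-based matching stages of this kind rank candidate correspondences with a similarity score before a least-squares refinement. A generic zero-mean normalized cross-correlation score is sketched below; it is a stand-in for illustration, not OCM or LSM themselves.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation of two equal-size patches.
    Returns a score in [-1, 1]; 1 means identical up to gain and offset,
    which is what makes the score robust to brightness change."""
    a = np.asarray(patch_a, dtype=float).ravel()
    b = np.asarray(patch_b, dtype=float).ravel()
    a = a - a.mean()                      # remove offset (brightness)
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0
```

    A coarse matcher slides one patch along the other image's search range and keeps the position with the highest score; the refinement stage then improves that integer position to sub-pixel accuracy.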

  1. A Framework for Obstacles Avoidance of Humanoid Robot Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2013-04-01

    Full Text Available In this paper, we propose a framework for a multiple-moving-obstacle avoidance strategy using stereo vision for a humanoid robot in an indoor environment. We assume that this humanoid robot is used as a service robot to deliver a cup to a customer from a starting point to a destination point. We have successfully developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles, and to initiate a maneuver. A group of people who are walking will be tracked as multiple moving obstacles. A predefined maneuver to avoid obstacles is applied to the robot because of the limited view angle of the stereo camera for detecting multiple obstacles. The contribution of this research is a new method for a multiple-moving-obstacle avoidance strategy with a Bayesian approach using stereo vision, based on the direction and speed of obstacles. Depth estimation is used to obtain the distance between the obstacles and the robot. We present the results of experiments with the humanoid robot called Gatotkoco II, which uses our proposed method, and evaluate its performance. The proposed moving-obstacle avoidance strategy was tested empirically and proved effective for the humanoid robot.
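    The depth estimation step mentioned above is, for a rectified stereo pair, the standard disparity-to-depth relation Z = f·B/d. A minimal sketch, where the focal length and baseline are made-up example values, not the Gatotkoco II calibration:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from disparity for a rectified stereo pair: Z = f * B / d,
    with f in pixels, baseline B in metres, disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: a 700 px focal length and 0.12 m baseline; a 42 px disparity
# then corresponds to an obstacle 2 m away.
print(stereo_depth(42, 700, 0.12))
```

    Because depth is inversely proportional to disparity, range resolution degrades quadratically with distance, one reason maneuvers must be initiated while obstacles are still close enough to measure reliably.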

  2. Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.

    Science.gov (United States)

    Rumei Zhang; Hao Liu; Jianda Han

    2017-07-01

    Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape, with integral cumulative error. The proposed fusion is anticipated to compensate for their shortcomings and improve the tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphology operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into a distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy, estimated by averaging the absolute positioning errors between shape sensing and stereo vision, is 0.67±0.65 mm, 0.41±0.25 mm, and 0.72±0.43 mm for x, y and z, respectively. The results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.

  3. REAL-TIME CAMERA GUIDANCE FOR 3D SCENE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    F. Schindler

    2012-07-01

    Full Text Available We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to results that are accurate compared to ground truth after a few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, and online flight planning of unmanned aerial vehicles.

  4. The contribution of stereo vision to the control of braking.

    Science.gov (United States)

    Tijtgat, Pieter; Mazyn, Liesbeth; De Laey, Christophe; Lenoir, Matthieu

    2008-03-01

    In this study the contribution of stereo vision to the control of braking in front of a stationary target vehicle was investigated. Participants with normal (StereoN) and weak (StereoW) stereo vision drove a go-cart along a linear track towards a stationary vehicle. They could start braking from a distance of 4, 7, or 10m from the vehicle. Deceleration patterns were measured by means of a laser. A lack of stereo vision was associated with an earlier onset of braking, but the duration of the braking manoeuvre was similar. During the deceleration, the time of peak deceleration occurred earlier in drivers with weak stereo vision. Stopping distance was greater in those lacking in stereo vision. A lack of stereo vision was associated with a more prudent brake behaviour, in which the driver took into account a larger safety margin. This compensation might be caused either by an unconscious adaptation of the human perceptuo-motor system, or by a systematic underestimation of distance remaining due to the lack of stereo vision. In general, a lack of stereo vision did not seem to increase the risk of rear-end collisions.

  5. Improved SIFT descriptor applied to stereo image matching

    Science.gov (United States)

    Zeng, Luan; Zhai, You; Xiong, Wei

    2015-02-01

    The Scale Invariant Feature Transform (SIFT) has been proven to perform better in distinctiveness and robustness than other features. However, it cannot satisfy the needs of low-contrast image matching, and its matching results are sensitive to 3D viewpoint changes of the camera. In order to improve the performance of SIFT on low-contrast images and images with large 3D viewpoint changes, a new matching method based on an improved SIFT is proposed. First, an adaptive contrast threshold is computed for each initial key point in a low-contrast image region, using the pixels in its 9×9 local neighborhood, and this threshold is then used to eliminate initial key points in low-contrast image regions. Second, a new SIFT descriptor with 48 dimensions is computed for each key point. Third, a hierarchical matching method based on the epipolar line and differences of the key points' dominant orientations is presented. The experimental results prove that the method can greatly enhance the performance of SIFT on low-contrast image matching. Moreover, when applying it to stereo image matching with the hierarchical matching method, the number of correct matches and the matching efficiency are greatly enhanced.
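    Epipolar-line pruning of the kind used in the hierarchical matching step rests on the point-to-epipolar-line distance derived from the fundamental matrix. A generic sketch of that distance check follows (the fundamental matrix and points are illustrative; the improved descriptor itself is not reimplemented here):

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Distance from point x2 in image 2 to the epipolar line of x1,
    l = F @ x1 (homogeneous coordinates): |x2^T F x1| / sqrt(l1^2 + l2^2).
    Candidate matches far from the line can be discarded cheaply."""
    x1h = np.append(np.asarray(x1, dtype=float), 1.0)
    x2h = np.append(np.asarray(x2, dtype=float), 1.0)
    l = F @ x1h                                 # epipolar line in image 2
    return abs(float(x2h @ l)) / np.hypot(l[0], l[1])
```

    For a rectified stereo pair the epipolar lines are horizontal, so the distance reduces to the difference in row coordinates, which the test below exercises.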

  6. Merged Shape from Shading and Shape from Stereo for Planetary Topographic Mapping

    Science.gov (United States)

    Tyler, Laurence; Cook, Tony; Barnes, Dave; Parr, Gerhard; Kirk, Randolph

    2014-05-01

    Digital Elevation Models (DEMs) of the Moon and Mars have traditionally been produced from stereo imagery taken from orbit or from surface landers or rovers. One core component of image-based DEM generation is stereo matching, finding correspondences between images taken from different viewpoints. Stereo matchers that rely mostly on textural features in the images can fail to find enough matched points in areas lacking contrast or surface texture. This can lead to blank or topographically noisy areas in the resulting DEMs. Fine depth detail may also be lacking due to the limited precision and quantisation of the pixel matching process. Shape from shading (SFS), a two-dimensional version of photoclinometry, utilizes the properties of light reflecting off surfaces to build up localised slope maps, which can subsequently be combined to extract topography. This works especially well on homogeneous surfaces and can recover fine detail. However, the cartographic accuracy can be affected by changes in brightness due to differences in surface material, albedo and light scattering properties, and also by the presence of shadows. We describe here experimental research for the Planetary Robotics Vision Data Exploitation EU FP7 project (PRoViDE) into using stereo-generated depth maps in conjunction with SFS to recover both coarse and fine detail of planetary surface DEMs. Our Large Deformation Optimisation Shape From Shading (LDOSFS) algorithm uses image data, illumination, viewing geometry and camera parameters to produce a DEM. A stereo-derived depth map can be used as an initial seed if available. The software uses separate Bidirectional Reflectance Distribution Function (BRDF) and SFS modules for iterative processing and to make the code more portable for future development. Three BRDF models are currently implemented: Lambertian, Blinn-Phong, and Oren-Nayar. A version of the Hapke reflectance function, which is more appropriate for planetary surfaces, is under development.
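    The simplest of the three BRDF models listed, the Lambertian model, predicts pixel brightness from the surface normal and light direction as I = albedo · max(0, n·l). A minimal sketch of that shading model (not the LDOSFS implementation itself):

```python
import numpy as np

def lambertian(normal, light, albedo=1.0):
    """Lambertian shading: I = albedo * max(0, n . l) for unit vectors n, l.
    SFS inverts this relation, inferring slopes from observed brightness."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    l = np.asarray(light, dtype=float)
    l = l / np.linalg.norm(l)
    return albedo * max(0.0, float(n @ l))

print(lambertian([0, 0, 1], [0, 0, 1]))  # light along the normal
print(lambertian([0, 0, 1], [1, 0, 1]))  # light 45 degrees off the normal
```

    The cosine fall-off is what couples brightness to slope, and the max(0, ·) clamp is what makes shadowed pixels carry no slope information, the failure mode the abstract notes.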

  7. Massive stereo-based DTM production for Mars on cloud computers

    Science.gov (United States)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Xiong, Si-Ting; Putri, A. R. D.; Walter, S. H. G.; Veitch-Michaelis, J.; Yershov, V.

    2018-05-01

    Digital Terrain Model (DTM) creation is essential to improving our understanding of the formation processes of the Martian surface. Although there have been previous demonstrations of open-source or commercial planetary 3D reconstruction software, planetary scientists are still struggling with creating good quality DTMs that meet their science needs, especially when there is a requirement to produce a large number of high quality DTMs using "free" software. In this paper, we describe a new open source system to overcome many of these obstacles by demonstrating results in the context of issues found from experience with several planetary DTM pipelines. We introduce a new fully automated multi-resolution DTM processing chain for NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE) stereo processing, called the Co-registration Ames Stereo Pipeline (ASP) Gotcha Optimised (CASP-GO), based on the open source NASA ASP. CASP-GO employs tie-point based multi-resolution image co-registration, and Gotcha sub-pixel refinement and densification. CASP-GO pipeline is used to produce planet-wide CTX and HiRISE DTMs that guarantee global geo-referencing compliance with respect to High Resolution Stereo Colour imaging (HRSC), and thence to the Mars Orbiter Laser Altimeter (MOLA); providing refined stereo matching completeness and accuracy. All software and good quality products introduced in this paper are being made open-source to the planetary science community through collaboration with NASA Ames, United States Geological Survey (USGS) and the Jet Propulsion Laboratory (JPL), Advanced Multi-Mission Operations System (AMMOS) Planetary Data System (PDS) Pipeline Service (APPS-PDS4), as well as browseable and visualisable through the iMars web based Geographic Information System (webGIS) system.

  8. CMOS detectors: lessons learned during the STC stereo channel preflight calibration

    Science.gov (United States)

    Simioni, E.; De Sio, A.; Da Deppo, V.; Naletto, G.; Cremonese, G.

    2017-09-01

    The Stereo Camera (STC), mounted on board the BepiColombo spacecraft, will acquire the entire surface of Mercury in push-frame stereo mode. STC will provide the images for the global three-dimensional reconstruction of the surface of the innermost planet of the Solar System. The launch of BepiColombo is foreseen in 2018. STC has an innovative optical system configuration, which achieves good optical performance with a mass and volume reduction of a factor of two with respect to the classical stereo camera approach. In such a telescope, two different optical paths, inclined at ±20° with respect to the nadir direction, are merged together in a unique off-axis path and focused on a single detector. The focal plane is equipped with a 2k x 2k hybrid Si-PIN detector, based on CMOS technology, combining low read-out noise, high radiation hardness, compactness, lack of parasitic light, capability of snapshot image acquisition, short exposure times (less than 1 ms) and small pixel size (10 μm). During the preflight calibration campaign of STC, some detector spurious effects were noticed. Analyzing the images taken during the calibration phase, two different signals affecting the background level were measured. These signals can reduce the detector dynamic range to as little as a quarter, and they are not due to dark current, stray light or similar effects. In this work we describe the features of these unwanted effects and the calibration procedures we developed to analyze them.

  9. Quantitative Evaluation of Stereo Visual Odometry for Autonomous Vessel Localisation in Inland Waterway Sensing Applications

    Directory of Open Access Journals (Sweden)

    Thomas Kriechbaumer

    2015-12-01

    Full Text Available Autonomous survey vessels can increase the efficiency and availability of wide-area river environment surveying as a tool for environment protection and conservation. A key challenge is the accurate localisation of the vessel where bank-side vegetation or urban settlement preclude the conventional use of line-of-sight global navigation satellite systems (GNSS). In this paper, we evaluate unaided visual odometry, via an on-board stereo camera rig attached to the survey vessel, as a novel, low-cost localisation strategy. Feature-based and appearance-based visual odometry algorithms are implemented on a six-degrees-of-freedom platform operating under guided motion, but stochastic variation in yaw, pitch and roll. Evaluation is based on a 663 m-long trajectory (>15,000 image frames) and statistical error analysis against ground-truth position from a target-tracking tachymeter integrating electronic distance and angular measurements. The position error of the feature-based technique (mean of ±0.067 m) is three times smaller than that of the appearance-based algorithm. From multi-variable statistical regression, we are able to attribute this error to the depth of tracked features from the camera in the scene and to variations in platform yaw. Our findings inform effective strategies to enhance stereo visual localisation for the specific application of river monitoring.
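    The per-axis statistic reported above (mean and spread of absolute position error against a ground-truth trajectory) is straightforward to compute. A minimal helper, assuming N×3 arrays of estimated and surveyed positions; this is an illustration, not the paper's processing chain:

```python
import numpy as np

def abs_position_error_stats(estimated, ground_truth):
    """Per-axis mean and standard deviation of the absolute position error
    between an estimated trajectory and a ground-truth trajectory,
    both given as (N, 3) arrays of x, y, z positions."""
    err = np.abs(np.asarray(estimated, dtype=float)
                 - np.asarray(ground_truth, dtype=float))
    return err.mean(axis=0), err.std(axis=0)
```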

  10. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments.

    Science.gov (United States)

    Ramon Soria, Pablo; Arrue, Begoña C; Ollero, Anibal

    2017-01-07

    The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.

  11. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Pablo Ramon Soria

    2017-01-01

    Full Text Available The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.

  12. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments

    Science.gov (United States)

    Ramon Soria, Pablo; Arrue, Begoña C.; Ollero, Anibal

    2017-01-01

    The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors. PMID:28067851

  13. Voice Controlled Stereographic Video Camera System

    Science.gov (United States)

    Goode, Georgianna D.; Philips, Michael L.

    1989-09-01

    For several years various companies have been developing voice recognition software. Yet, there are few applications of voice control in the robotics field and virtually no examples of voice controlled three dimensional (3-D) systems. In late 1987 ARD developed a highly specialized, voice controlled 3-D vision system for use in remotely controlled, non-tethered robotic applications. The system was designed as an operator's aid and incorporates features thought to be necessary or helpful in remotely maneuvering a vehicle. Foremost is the three dimensionality of the operator's console display. An image that provides normal depth perception cues over a range of depths greatly increases the ease with which an operator can drive a vehicle and investigate its environment. The availability of both vocal and manual control of all system functions allows the operator to guide the system according to his personal preferences. The camera platform can be panned +/-178 degrees and tilted +/-30 degrees for a full range of view of the vehicle's environment. The cameras can be zoomed and focused for close inspection of distant objects, while retaining substantial stereo effect by increasing the separation between the cameras. There is a ranging and measurement function, implemented through a graphical cursor, which allows the operator to mark objects in a scene to determine their relative positions. This feature will be helpful in plotting a driving path. The image seen on the screen is overlaid with icons and digital readouts which provide information about the position of the camera platform, the range to the graphical cursor and the measurement results. The cursor's "range" is actually the distance from the cameras to the object on which the cursor is resting. Other such features are included in the system and described in subsequent sections of this paper.

  14. Those Nifty Digital Cameras!

    Science.gov (United States)

    Ekhaml, Leticia

    1996-01-01

    Describes digital photography--an electronic imaging technology that merges computer capabilities with traditional photography--and its uses in education. Discusses how a filmless camera works, types of filmless cameras, advantages and disadvantages, and educational applications of the consumer digital cameras. (AEF)

  15. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen...

  16. A Technique for Binocular Stereo Vision System Calibration by the Nonlinear Optimization and Calibration Points with Accurate Coordinates

    International Nuclear Information System (INIS)

    Chen, H; Ye, D; Che, R S; Chen, G

    2006-01-01

    With the increasing need for higher-accuracy measurement in computer vision, the precision of camera calibration is an ever more important factor. The objective of stereo camera calibration is to estimate the intrinsic and extrinsic parameters of each camera. We present a high-accuracy technique, realized by combining a nonlinear optimization method with accurate calibration points, to calibrate a binocular stereo vision system after the cameras have been mounted in their locations and attitudes. The calibration points, with accurate coordinates, were formed by an infrared LED moved by a three-dimensional coordinate measuring machine, which ensures a measurement uncertainty of 1/30000. By using a bilinear-interpolation square-gray weighted centroid location algorithm, the imaging centers of the calibration points can be accurately determined. The accuracy of the calibration is measured in terms of the accuracy of reconstructing the calibration points through triangulation; the mean distance between the reconstructed points and the given calibration points is 0.039 mm. The technique can satisfy the goals of measurement and accurate camera calibration.
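    The reconstruction-through-triangulation accuracy check described above can be sketched with linear (DLT) triangulation from two projection matrices. The matrices and points below are illustrative stand-ins, not the calibrated rig's parameters:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 projection
    matrices and the point's pixel coordinates in each image. Builds the
    homogeneous system A X = 0 and takes the SVD null-space vector."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

    The calibration quality metric is then the distance between each triangulated point and its known coordinate from the coordinate measuring machine.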

  17. CHAMP (Camera, Handlens, and Microscope Probe)

    Science.gov (United States)

    Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.

    2005-01-01

    CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution, from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.

  18. CHAMP - Camera, Handlens, and Microscope Probe

    Science.gov (United States)

    Mungas, G. S.; Beegle, L. W.; Boynton, J.; Sepulveda, C. A.; Balzer, M. A.; Sobel, H. R.; Fisher, T. A.; Deans, M.; Lee, P.

    2005-01-01

    CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As an arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision range-finding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP is currently designed with a filter wheel holding 4 different filters, so that color and black-and-white images can be obtained over the entire field of view; future designs will increase the number of filter positions to include 8 different filters. Finally, CHAMP incorporates controlled white and UV illumination so that images can be obtained regardless of sun position, and any potentially fluorescent species can be identified so that the most astrobiologically interesting samples can be selected.

  19. 'McMurdo' Panorama from Spirit's 'Winter Haven' (Color Stereo)

    Science.gov (United States)

    2006-01-01

    [figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA01905
    [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA01905
    This 360-degree view, called the 'McMurdo' panorama, comes from the panoramic camera (Pancam) on NASA's Mars Exploration Rover Spirit. From April through October 2006, Spirit has stayed on a small hill known as 'Low Ridge.' There, the rover's solar panels are tilted toward the sun to maintain enough solar power for Spirit to keep making scientific observations throughout the winter on southern Mars. This view of the surroundings from Spirit's 'Winter Haven' is presented as a stereo anaglyph to show the scene three-dimensionally when viewed through red-blue glasses (with the red lens on the left). Oct. 26, 2006, marks Spirit's 1,000th sol of what was planned as a 90-sol mission. (A sol is a Martian day, which lasts 24 hours, 39 minutes, 35 seconds). The rover has lived through the most challenging part of its second Martian winter. Its solar power levels are rising again. Spring in the southern hemisphere of Mars will begin in early 2007. Before that, the rover team hopes to start driving Spirit again toward scientifically interesting places in the 'Inner Basin' and 'Columbia Hills' inside Gusev crater. The McMurdo panorama is providing team members with key pieces of scientific and topographic information for choosing where to continue Spirit's exploration adventure. The Pancam began shooting component images of this panorama during Spirit's sol 814 (April 18, 2006) and completed the part shown here on sol 932 (Aug. 17, 2006). The panorama was acquired using all 13 of the Pancam's color filters, using lossless compression for the red and blue stereo filters, and only modest levels of compression on the remaining filters. The overall panorama consists of 1,449 Pancam images and represents a raw data volume of nearly 500 megabytes. It is thus the largest, highest-fidelity view of Mars

  20. Multi-hypothesis distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Zamarin, Marco; Forchhammer, Søren

    2013-01-01

    for stereo sequences, exploiting an interpolated intra-view SI and two inter-view SIs. The quality of the SI has a major impact on the DVC Rate-Distortion (RD) performance. As the inter-view SIs individually present lower RD performance compared with the intra-view SI, we propose multi-hypothesis decoding...... for robust fusion and improved performance. Compared with a state-of-the-art single side information solution, the proposed DVC decoder improves the RD performance for all the chosen test sequences by up to 0.8 dB. The proposed multi-hypothesis decoder showed higher robustness compared with other fusion...

  1. Developing stereo image based robot control system

    Energy Technology Data Exchange (ETDEWEB)

    Suprijadi,; Pambudi, I. R.; Woran, M.; Naa, C. F; Srigutomo, W. [Department of Physics, FMIPA, InstitutTeknologi Bandung Jl. Ganesha No. 10. Bandung 40132, Indonesia supri@fi.itb.ac.id (Indonesia)

    2015-04-16

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have spread rapidly with the increasing performance of hardware and microprocessors. Many fields of science and technology use these methods, especially medicine and instrumentation. New stereovision techniques that provide a 3-dimensional image or movie are very interesting, but there are not many applications in control systems. A stereo image carries pixel-disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled-robot control system using stereovision. The results show that the robot moves automatically based on stereovision captures.

  2. Family Of Calibrated Stereometric Cameras For Direct Intraoral Use

    Science.gov (United States)

    Curry, Sean; Moffitt, Francis; Symes, Douglas; Baumrind, Sheldon

    1983-07-01

    In order to study empirically the relative efficiencies of different types of orthodontic appliances in repositioning teeth in vivo, we have designed and constructed a pair of fixed-focus, normal case, fully-calibrated stereometric cameras. One is used to obtain stereo photography of single teeth, at a scale of approximately 2:1, and the other is designed for stereo imaging of the entire dentition, study casts, facial structures, and other related objects at a scale of approximately 1:8. Twin lenses simultaneously expose adjacent frames on a single roll of 70 mm film. Physical flatness of the film is ensured by the use of a spring-loaded metal pressure plate. The film is forced against a 3/16" optical glass plate upon which is etched an array of 16 fiducial marks which divide the film format into 9 rectangular regions. Using this approach, it has been possible to produce photographs which are undistorted for qualitative viewing and from which quantitative data can be acquired by direct digitization of conventional photographic enlargements. We are in the process of designing additional members of this family of cameras. All calibration and data acquisition and analysis techniques previously developed will be directly applicable to these new cameras.

  3. Multi-camera synchronization core implemented on USB3 based FPGA platform

    Science.gov (United States)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, stereo vision or 3D reconstruction with multiple cameras, as well as applications requiring pulsed illumination, require multiple cameras to be synchronized. In this work, the challenge of synchronizing multiple self-timed cameras over only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame against a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera's control module. This enables them to monitor the Master's line and frame periods and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment of smaller than 3 mm diameter in medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
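One step of the adaptive supply-voltage regulation described above can be illustrated as a simple feedback update; the gain, voltage range, and function name are illustrative assumptions, not Awaiba/NanEye specifications.

```python
def adjust_supply_voltage(v_now, line_period_meas, line_period_target,
                          gain=0.001, v_min=1.6, v_max=2.4):
    """One iteration of the frequency-locking loop (sketch).
    A longer-than-target line period means the sensor runs too slowly,
    so the supply voltage is raised (self-timed sensors speed up with
    higher voltage); the result is clamped to a safe operating range.
    All constants are hypothetical, for illustration only."""
    error = line_period_meas - line_period_target  # e.g. in clock ticks
    v_next = v_now + gain * error
    return min(max(v_next, v_min), v_max)
```

The control core would run this update once per measured frame for each Slave camera, with the Master's line period as the target.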

  4. Object recognition with stereo vision and geometric hashing

    NARCIS (Netherlands)

    van Dijck, H.A.L.; van der Heijden, Ferdinand

    In this paper we demonstrate a method to recognize 3D objects and to estimate their pose. For that purpose we use a combination of stereo vision and geometric hashing. Stereo vision is used to generate a large number of 3D low level features, of which many are spurious because at that stage of the

  5. Real-time loudspeaker distance estimation with stereo audio

    DEFF Research Database (Denmark)

    2017-01-01

    A method for estimating a distance between a first and a second loudspeaker characterized by playing back a first stereo source signal vector s1 on the first loudspeaker, and playing back a second stereo source signal vector s2 on the second loudspeaker, acquiring a first recorded signal vector x...

  6. Global stereo matching algorithm based on disparity range estimation

    Science.gov (United States)

    Li, Jing; Zhao, Hong; Gu, Feifei

    2017-09-01

    Global stereo matching algorithms achieve high accuracy in disparity-map estimation, but the time consumed in the optimization process remains a serious obstacle, especially for image pairs with high resolution and large baseline settings. To improve the computational efficiency of the global algorithms, a disparity range estimation scheme for global stereo matching is proposed to estimate the disparity map of rectified stereo images in this paper. The projective geometry of a parallel binocular stereo setup is investigated to reveal a relationship between the two disparities at each pixel in rectified stereo images taken with different baselines, which can be used to quickly obtain a predicted disparity map for a long-baseline setting from the map estimated in the short-baseline one. The drastically reduced disparity ranges at each pixel under the long-baseline setting can then be determined from the predicted disparity map. Furthermore, the disparity range estimation scheme is introduced into graph cuts with expansion moves to estimate the precise disparity map, which greatly reduces computational cost without loss of accuracy in the stereo matching, especially for dense global stereo matching, compared to the traditional algorithm. Experimental results on the Middlebury stereo datasets are presented to demonstrate the validity and efficiency of the proposed algorithm.
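The baseline-to-disparity relationship exploited above can be sketched directly: in a rectified parallel setup, d = f·B/Z for a point at depth Z, so for the same point the disparity scales linearly with baseline. A small predictor of the narrowed search range might look as follows; the margin and names are illustrative, not the paper's exact scheme.

```python
def predict_disparity_range(d_short, b_short, b_long, margin=2):
    """Predict the disparity search range under a long baseline from the
    disparity estimated under a short one (rectified parallel cameras).
    Since d = f*B/Z at fixed depth Z, d_long = d_short * (b_long / b_short);
    a small pixel margin absorbs estimation error. Returns (lo, hi) with
    hi exclusive, clamped so disparities stay non-negative."""
    d_pred = d_short * (b_long / b_short)
    return max(0, int(d_pred) - margin), int(d_pred) + margin + 1
```

Restricting a global optimizer such as graph cuts to these per-pixel ranges is what yields the speed-up without changing the energy being minimized.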

  7. LED-based Photometric Stereo: Modeling, Calibration and Numerical Solutions

    DEFF Research Database (Denmark)

    Quéau, Yvain; Durix, Bastien; Wu, Tao

    2018-01-01

    We conduct a thorough study of photometric stereo under nearby point light source illumination, from modeling to numerical solution, through calibration. In the classical formulation of photometric stereo, the luminous fluxes are assumed to be directional, which is very difficult to achieve in pr...

  8. Investigating the Importance of Stereo Displays for Helicopter Landing Simulation

    Science.gov (United States)

    2016-08-11

    Nvidia GeForce GTX 680 graphics card was used to administer the stereo acuity and fusion range tests. The tests were displayed on an Asus VG278HE 3D...a distance of 1 m on the Asus stereo monitor resulted in double vision (i.e., binocular fusion was broken) using the game controller as the circles

  9. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study

    International Nuclear Information System (INIS)

    Suenaga, Hideyuki; Tran, Huy Hoang; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Takato, Tsuyoshi

    2015-01-01

    This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer was used to create an integral videography image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject’s anatomic site and its 3D-IV image were displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Accurate registration of the volunteer’s anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications. The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users

  10. Stereo vision for fully automatic volumetric flow measurement in urban drainage structures

    Science.gov (United States)

    Sirazitdinova, Ekaterina; Pesic, Igor; Schwehn, Patrick; Song, Hyuk; Satzger, Matthias; Weingärtner, Dorothea; Sattler, Marcus; Deserno, Thomas M.

    2017-06-01

    Overflows in urban drainage structures, or sewers, must be prevented in time to avoid their undesirable consequences. An effective monitoring system able to measure volumetric flow in sewers is needed. Existing state-of-the-art technologies are not robust against harsh sewer conditions and therefore cause high maintenance expenses. With the goal of fully automatic, robust, and non-contact volumetric flow measurement in sewers, we propose a vision-based system for volumetric flow monitoring. In contrast to existing video-based monitoring systems, we introduce a second camera to the setup and exploit stereo vision, aiming at automatic calibration to the real world. Depth of the flow is estimated as the difference between the distances from the camera to the water surface and from the camera to the canal's bottom. The camera-to-water distance is recovered automatically using large-scale stereo matching, while the distance to the canal's bottom is measured once upon installation. Surface velocity is calculated using cross-correlation template matching: individual natural particles in the flow are detected and tracked throughout the sequence of images recorded over a fixed time interval. Having estimated the water level and the surface velocity, and knowing the geometry of the canal, we calculate the discharge. The preliminary evaluation has shown that the average error of depth computation was 3 cm, while the average error of surface velocity was 5 cm/s. Due to the experimental design, these errors are rough estimates: at each acquisition session the reference depth value was measured only once, although the variation in volumetric flow and the gradual transitions between the automatically detected values indicate that the actual depth level varied. We will address this issue in the next experimental session.
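The discharge computation described above can be sketched for the simplest canal geometry; a rectangular cross-section and the function and parameter names are illustrative assumptions, and a real system would use the surveyed canal profile and a depth-averaged velocity correction.

```python
def discharge(dist_to_bottom, dist_to_water, surface_velocity, canal_width):
    """Volumetric flow estimate following the scheme above (sketch).
    Flow depth is the camera-to-bottom distance (measured once at
    installation) minus the camera-to-water distance (recovered by stereo
    matching); discharge is cross-section area times surface velocity.
    All quantities in SI units (m, m/s -> m^3/s)."""
    depth = dist_to_bottom - dist_to_water
    if depth <= 0:
        return 0.0  # dry canal or inconsistent measurement
    return canal_width * depth * surface_velocity
```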

  11. A stereovision model applied in bio-micromanipulation system based on stereo light microscope.

    Science.gov (United States)

    Wang, Yuezong

    2017-12-01

    A bio-micromanipulation system is designed for manipulating micro-objects with a length scale of tens or hundreds of microns based on a stereo light microscope. The world-coordinate reconstruction of points on the surface of micro-objects is an important goal for the micromanipulation. The traditional pinhole camera model is widely applied in macro-scale computer vision. However, this model outputs data with considerable error if it is directly used to reconstruct three-dimensional world coordinates for a stereo light microscope. Therefore, a novel and improved pinhole camera model applied in a bio-micromanipulation system is proposed in this article. The new model is composed of a binocular-pinhole model and an error-correction model. The binocular-pinhole model outputs the basic world coordinates. The error-correction model corrects the errors in the basic world coordinates and outputs the final high-precision world coordinates. The results show that the new model achieves a precision of 0.01 mm in the X direction, 0.01 mm in the Y direction, and 0.015 mm in the Z direction within a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction, and 2.25 mm in the Z direction, and that the traditional pinhole camera model achieves a lower and unsatisfactory precision of about 0.1 mm. © 2017 Wiley Periodicals, Inc.

  12. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image

  13. Stereo topography of Valhalla and Gilgamesh

    Science.gov (United States)

    Schenk, P.; McKinnon, W.; Moore, J.

    1997-03-01

    The geology and morphology of the large multiring impact structures Valhalla and Gilgamesh have been used to infer ways in which the interior structure and properties of the large icy satellites Callisto and Ganymede differ from rocky bodies. These earlier studies were made in the absence of topographic data showing the depths of large impact basins and the degree to which relief has been preserved at large and small scales. Using Voyager stereo images of these basins, we have constructed the first detailed topographic maps of these large basins. These maps reveal the absence of deep topographic depressions, but show that multi-kilometer relief is preserved near the center of Valhalla. Digital Elevation Models (DEM) of these basins were produced using an automated digital stereogrammetry program developed at LPI for use with Voyager and Viking images. The Voyager images used here were obtained from distances of 80,000 to 125,000 km. As a result, the formal vertical resolution for both Valhalla and Gilgamesh maps is about 0.5 km. Relative elevations only are mapped as no global topographic datum exists for the Galilean satellites. In addition, the stereo image models were used to remap the geology and structure of these multiring basins in detail.

  14. Explaining Polarization Reversals in STEREO Wave Data

    Science.gov (United States)

    Breneman, A.; Cattell, C.; Wygant, J.; Kersten, K.; Wilson, L, B., III; Dai, L.; Colpitts, C.; Kellogg, P. J.; Goetz, K.; Paradise, A.

    2012-01-01

    Recently, Breneman et al. reported observations of large-amplitude lightning and transmitter whistler mode waves from two STEREO passes through the inner radiation belt. We show, with a combination of observations and simulated wave superposition, that the polarization reversals observed in those data are due to the beating of an incident electromagnetic whistler mode wave at 21.4 kHz and linearly polarized, symmetric lower hybrid sidebands Doppler-shifted from the incident wave by +/-200 Hz. The existence of the lower hybrid waves is consistent with the parametric decay mechanism of Lee and Kuo whereby an incident whistler mode wave decays into symmetric, short wavelength lower hybrid waves and a purely growing (zero-frequency) mode. Like the lower hybrid waves, the purely growing mode is Doppler-shifted by 200 Hz as observed on STEREO. This decay mechanism in the upper ionosphere has been previously reported at equatorial latitudes and is thought to have a direct connection with explosive spread F enhancements. As such it may represent another dissipation mechanism of VLF wave energy in the ionosphere and may help to explain a deficit of observed lightning and transmitter energy in the inner radiation belts as reported by Starks et al.

  15. Stereo matching using epipolar distance transform.

    Science.gov (United States)

    Yang, Qingxiong; Ahuja, Narendra

    2012-10-01

    In this paper, we propose a simple but effective image transform, called the epipolar distance transform, for matching low-texture regions. It converts image intensity values to a relative location inside a planar segment along the epipolar line, such that pixels in low-texture regions become distinguishable. We theoretically prove that the transform is affine invariant, so the transformed images can be directly used for stereo matching. Any existing stereo algorithm can be applied directly to the transformed images to improve reconstruction accuracy for low-texture regions. Results on real indoor and outdoor images demonstrate the effectiveness of the proposed transform for matching low-texture regions, keypoint detection, and description for low-texture scenes. Our experimental results on Middlebury images also demonstrate the robustness of our transform for highly textured scenes. A further advantage of the proposed transform is its low computational complexity: tested on a MacBook Air laptop computer with a 1.8 GHz Core i7 processor, it runs at about 9 frames per second on a video graphics array (VGA)-sized image.
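A toy one-dimensional version of this idea, assuming horizontal epipolar lines, replaces each pixel in a run of near-constant intensity with its relative position inside that run, so that otherwise indistinguishable low-texture pixels acquire distinct values. This is an illustrative simplification under assumed names and tolerance, not the authors' exact transform.

```python
def edt_scanline(intensities, tol=4):
    """Toy epipolar-distance-transform sketch for one scanline: pixels in
    a run of near-constant intensity (within tol of the run's first pixel)
    are mapped to their normalized position 0..1 within the run. Because
    relative position along a segment is preserved under affine mappings
    of the line, such values remain comparable across the two views."""
    out = [0.0] * len(intensities)
    start = 0
    for i in range(1, len(intensities) + 1):
        run_ends = (i == len(intensities)
                    or abs(intensities[i] - intensities[start]) > tol)
        if run_ends:
            n = i - start
            for j in range(start, i):
                out[j] = (j - start) / (n - 1) if n > 1 else 0.0
            start = i
    return out
```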

  16. The High Energy Telescope for STEREO

    Science.gov (United States)

    von Rosenvinge, T. T.; Reames, D. V.; Baker, R.; Hawk, J.; Nolan, J. T.; Ryan, L.; Shuman, S.; Wortman, K. A.; Mewaldt, R. A.; Cummings, A. C.; Cook, W. R.; Labrador, A. W.; Leske, R. A.; Wiedenbeck, M. E.

    2008-04-01

    The IMPACT investigation for the STEREO Mission includes a complement of Solar Energetic Particle instruments on each of the two STEREO spacecraft. Of these instruments, the High Energy Telescopes (HETs) provide the highest energy measurements. This paper describes the HETs in detail, including the scientific objectives, the sensors, the overall mechanical and electrical design, and the on-board software. The HETs are designed to measure the abundances and energy spectra of electrons, protons, He, and heavier nuclei up to Fe in interplanetary space. For protons and He that stop in the HET, the kinetic energy range corresponds to ˜13 to 40 MeV/n. Protons that do not stop in the telescope (referred to as penetrating protons) are measured up to ˜100 MeV/n, as are penetrating He. For stopping He, the individual isotopes 3He and 4He can be distinguished. Stopping electrons are measured in the energy range ˜0.7–6 MeV.

  17. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  18. LSST Camera Optics Design

    Energy Technology Data Exchange (ETDEWEB)

    Riot, V J; Olivier, S; Bauman, B; Pratuch, S; Seppala, L; Gilmore, D; Ku, J; Nordby, M; Foss, M; Antilogus, P; Morgado, N

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. Optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  19. Image enhancement framework for low-resolution thermal images in visible and LWIR camera systems

    Science.gov (United States)

    Rukkanchanunt, Thapanapong; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-10-01

    Infrared (IR) thermography cameras have become an essential tool for monitoring applications such as pedestrian detection and equipment monitoring. The most commonly used IR cameras are Long Wavelength Infrared (LWIR) cameras, whose wavelength range is well suited to environmental temperatures. Even though the cost of LWIR cameras has been declining, affordable ones provide only low-resolution images. Enhancement techniques that can be applied to visible images often fail to perform correctly on low-resolution LWIR images, and most prior work on thermal image enhancement has targeted high-resolution images. Stereo calibration between visible cameras and LWIR cameras has recently been improved in terms of accuracy and ease of use, and recent visible and LWIR cameras are bundled into one device, giving the capability of simultaneously taking visible and LWIR images. However, few works take advantage of such camera systems. In this work, an image enhancement framework for visible and LWIR camera systems is proposed. The proposed framework consists of two inter-connected modules: a visible image enhancement module and an LWIR image enhancement module. The enhancement technique experimented with is image stitching, which serves two purposes: view expansion and super-resolution. The visible image enhancement module follows a regular workflow for image stitching. Intermediate results such as the homography and seam-carving labels are passed to the LWIR image enhancement module. The LWIR image enhancement module aligns LWIR images to visible images using the stereo calibration results and reuses the homography already computed from the visible images, avoiding feature extraction and matching on the LWIR images. The framework is able to handle the difference in image resolution between visible and LWIR images by performing a sparse pixel-to-pixel version of image alignment and image projection. Experiments show that the proposed framework leads to richer image-stitching results compared to the

  20. Improving depth maps of plants by using a set of five cameras

    Science.gov (United States)

    Kaczmarek, Adam L.

    2015-03-01

    Obtaining high-quality depth maps and disparity maps with the use of a stereo camera is a challenging task for some kinds of objects. The quality of these maps can be improved by taking advantage of a larger number of cameras. Research on using a set of five cameras to obtain disparity maps is presented. The set consists of a central camera and four side cameras. An algorithm for making disparity maps called multiple similar areas (MSA) is introduced. The algorithm was specially designed for the set of five cameras. Experiments were performed with the MSA algorithm and a stereo matching algorithm based on the sum of sum of squared differences (sum of SSD, SSSD) measure. Moreover, the following measures were included in the experiments: sum of absolute differences (SAD), zero-mean SAD (ZSAD), zero-mean SSD (ZSSD), locally scaled SAD (LSAD), locally scaled SSD (LSSD), normalized cross correlation (NCC), and zero-mean NCC (ZNCC). The algorithms presented were applied to images of plants. Making depth maps of plants is difficult because parts of leaves are similar to each other. The potential usability of the described algorithms is especially high in agricultural applications such as robotic fruit harvesting.
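The SSD matching cost listed among the measures above can be sketched as a single-pixel, winner-take-all disparity search between one rectified pair; this is a generic two-view sketch under assumed names, not the five-camera MSA algorithm.

```python
import numpy as np

def ssd_disparity(left, right, x, y, max_disp=16, half=2):
    """Winner-take-all disparity at pixel (x, y) using the SSD cost.
    A (2*half+1)^2 patch in the left image is compared against
    horizontally shifted patches in the right image; the shift with the
    minimal sum of squared differences wins. Sketch only: no subpixel
    refinement and no aggregation over multiple cameras (as SSSD does)."""
    patch_l = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best_d, best_cost = 0, float("inf")
    for d in range(0, min(max_disp, x - half) + 1):  # keep patch in bounds
        patch_r = right[y - half:y + half + 1,
                        x - d - half:x - d + half + 1].astype(float)
        cost = np.sum((patch_l - patch_r) ** 2)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Replacing the squared difference with absolute difference, mean subtraction, local scaling, or normalized correlation yields the SAD/ZSAD/ZSSD/LSAD/LSSD/NCC/ZNCC variants named above.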

  1. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up...

  2. Implementation of an ISIS Compatible Stereo Processing Chain for 3D Stereo Reconstruction

    Science.gov (United States)

    Tasdelen, E.; Unbekannt, H.; Willner, K.; Oberst, J.

    2012-09-01

    The department for Planetary Geodesy at TU Berlin is developing routines for photogrammetric processing of planetary image data to derive 3D representations of planetary surfaces. The ISIS software, developed by USGS, Flagstaff, is readily available, open source, and very well documented. Hence, ISIS [1] was chosen as the prime processing platform and tool kit. However, ISIS does not provide a full photogrammetric stereo processing chain. Several components like image matching, bundle block adjustment (until recently), or digital terrain model (DTM) interpolation from 3D object points are missing. Our group aims to complete this photogrammetric stereo processing chain by implementing the missing components, taking advantage of already existing ISIS classes and functionality. With this abstract we would like to report on the development of new image matching software that is optimized for both orbital and close-range planetary images and compatible with ISIS formats and routines, and an interpolation tool developed to create DTMs from large 3D point clouds.

  3. Acquisition of stereo panoramas for display in VR environments

    KAUST Repository

    Ainsworth, Richard A.

    2011-01-23

    Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.

  4. Application of stereo-imaging technology to medical field.

    Science.gov (United States)

    Nam, Kyoung Won; Park, Jeongyun; Kim, In Young; Kim, Kwang Gi

    2012-09-01

    There has been continuous development in the area of stereoscopic medical imaging devices, and many stereoscopic imaging devices have been realized and applied in the medical field. In this article, we review past and current trends pertaining to the application of stereo-imaging technologies in the medical field. We describe the basic principles of stereo vision and visual issues related to it, including visual discomfort, binocular disparities, vergence-accommodation mismatch, and visual fatigue. We also present a brief history of medical applications of stereo-imaging techniques, examples of recently developed stereoscopic medical devices, and patent application trends as they pertain to stereo-imaging medical devices. Three-dimensional (3D) stereo-imaging technology can provide more realistic depth perception to the viewer than conventional two-dimensional imaging technology. Therefore, it allows for a more accurate understanding and analysis of the morphology of an object. Based on these advantages, the significance of stereoscopic imaging in the medical field increases in accordance with the increase in the number of laparoscopic surgeries, and stereo-imaging technology plays a key role in the diagnoses of the detailed morphologies of small biological specimens. The application of 3D stereo-imaging technology to the medical field will help improve surgical accuracy, reduce operation times, and enhance patient safety. Therefore, it is important to develop more enhanced stereoscopic medical devices.

  5. Robust photometric stereo using structural light sources

    Science.gov (United States)

    Han, Tian-Qi; Cheng, Yue; Shen, Hui-Liang; Du, Xin

    2014-05-01

    We propose a robust photometric stereo method by using structural arrangement of light sources. In the arrangement, light sources are positioned on a planar grid and form a set of collinear combinations. The shadow pixels are detected by adaptive thresholding. The specular highlight and diffuse pixels are distinguished according to their intensity deviations of the collinear combinations, thanks to the special arrangement of light sources. The highlight detection problem is cast as a pattern classification problem and is solved using support vector machine classifiers. Considering the possible misclassification of highlight pixels, the ℓ1 regularization is further employed in normal map estimation. Experimental results on both synthetic and real-world scenes verify that the proposed method can robustly recover the surface normal maps in the case of heavy specular reflection and outperforms the state-of-the-art techniques.
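
    The estimation step this abstract builds on can be illustrated with plain least-squares photometric stereo: intensities observed under known light directions determine the albedo-scaled surface normal. This is a minimal sketch on synthetic data; the paper's structural light-source arrangement, SVM highlight classification, and ℓ1 regularization are omitted.

    ```python
    import numpy as np

    def photometric_stereo(L, I):
        """Recover a unit surface normal and albedo from intensities I
        observed under directional lights L (rows are unit light vectors).
        Plain least squares; shadow/highlight handling is omitted."""
        g, *_ = np.linalg.lstsq(L, I, rcond=None)   # g = albedo * normal
        albedo = np.linalg.norm(g)
        normal = g / albedo
        return normal, albedo

    # Synthetic check: a Lambertian pixel with known normal and albedo.
    L = np.array([[0.0, 0.0, 1.0],
                  [0.6, 0.0, 0.8],
                  [0.0, 0.6, 0.8],
                  [-0.6, 0.0, 0.8]])
    n_true = np.array([0.2, -0.1, 0.9747])
    n_true /= np.linalg.norm(n_true)
    I = 0.7 * L @ n_true                 # intensities for albedo 0.7
    n_est, rho = photometric_stereo(L, I)
    ```

    With noiseless Lambertian data the least-squares solution recovers the normal and albedo exactly; the paper's ℓ1 term matters only once misclassified highlight pixels contaminate the system.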

  6. Explaining polarization reversals in STEREO wave data

    Science.gov (United States)

    Breneman, A.; Cattell, C.; Wygant, J.; Kersten, K.; Wilson, L. B., III; Dai, L.; Colpitts, C.; Kellogg, P. J.; Goetz, K.; Paradise, A.

    2012-04-01

    Recently, Breneman et al. (2011) reported observations of large-amplitude lightning and transmitter whistler mode waves from two STEREO passes through the inner radiation belt. Measurements of the wave field in the plane transverse to the magnetic field showed that the transmitter waves underwent periodic polarization reversals. Specifically, their polarization would cycle through a pattern of right-hand to linear to left-hand polarization at a rate of roughly 200 Hz. The lightning whistlers were observed to be left-hand polarized at frequencies greater than the lower hybrid frequency and less than the transmitter frequency (21.4 kHz), and right-hand polarized otherwise. Only right-hand polarized waves should exist in the whistler mode frequency range in the inner radiation belt, and these reversals were not explained in the previous paper. We show, with a combination of observations and simulated wave superposition, that these polarization reversals are due to the beating of an incident electromagnetic whistler mode wave at 21.4 kHz and linearly polarized, symmetric lower hybrid sidebands Doppler-shifted from the incident wave by ±200 Hz. The existence of the lower hybrid waves is consistent with the parametric decay mechanism of Lee and Kuo (1984), whereby an incident whistler mode wave decays into symmetric, short-wavelength lower hybrid waves and a purely growing (zero-frequency) mode. Like the lower hybrid waves, the purely growing mode is Doppler-shifted by ~200 Hz as observed on STEREO. This decay mechanism in the upper ionosphere has previously been reported at equatorial latitudes and is thought to have a direct connection with explosive spread F enhancements. As such, it may represent another dissipation mechanism of VLF wave energy in the ionosphere and may help to explain the deficit of observed lightning and transmitter energy in the inner radiation belts reported by Starks et al. (2008).
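
    The superposition argument can be checked numerically: a right-hand circular carrier plus two linearly polarized sidebands offset by ±200 Hz yields a polarization sense that flips sign at the sideband offset. This is a minimal sketch with illustrative amplitudes, not a fit to STEREO data.

    ```python
    import numpy as np

    # Analytic-signal model: RH circular carrier at f0 plus two linearly
    # polarized (x-only) sidebands at f0 +/- df, as in the beating argument.
    f0, df = 21400.0, 200.0                  # carrier and offset [Hz]
    w0, dw = 2 * np.pi * f0, 2 * np.pi * df
    t = np.linspace(0.0, 0.05, 200001)       # 50 ms of signal

    Ex = np.exp(1j * w0 * t) * (1.0 + 2.0 * np.cos(dw * t))  # carrier + sidebands
    Ey = -1j * np.exp(1j * w0 * t)                            # RH circular partner

    # Circular-polarization sense: sign of Im(Ex * conj(Ey)); > 0 is right-hand.
    V = np.imag(Ex * np.conj(Ey))
    ```

    Algebraically V reduces to 1 + 2·cos(2π·200·t), so the sense goes left-hand for part of every 5 ms beat period, i.e. the polarization reverses at the 200 Hz sideband offset, as reported.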

  7. Investigating the use of multi-point coupling for single-sensor bearing estimation in one direction

    Science.gov (United States)

    Woolard, Americo G.; Phoenix, Austin A.; Tarazaga, Pablo A.

    2018-04-01

    Bearing estimation of radially propagating symmetric waves in solid structures typically requires a minimum of two sensors. This research investigates the use of multi-point coupling to provide directional inference with a single sensor, using a beam as a test specimen. By this means, the number of sensors required for localization can be reduced. A finite-element model of a beam is constructed with a symmetrically placed bipod that has asymmetric joint-stiffness properties. Impulse loading is applied at different points along the beam, and measurements are taken from the apex of the bipod. A technique is developed to determine the direction-of-arrival of the propagating wave. The accuracy when using the bipod with the developed technique is compared against results gathered without the bipod and measuring from an asymmetric location along the beam. The results show 92% accuracy when the bipod is used, compared to 75% when measuring without the bipod from an asymmetric location. A geometry investigation finds the best accuracy results when one leg of the bipod has a low stiffness and a large diameter relative to the other leg.

  8. Wireless and simultaneous detections of multiple bio-molecules in a single sensor using Love wave biosensor.

    Science.gov (United States)

    Oh, Haekwan; Fu, Chen; Kim, Kunnyun; Lee, Keekeun

    2014-11-17

    A Love wave-based biosensor with a 440 MHz center frequency was developed for the simultaneous detection of two different analytes of Cartilage Oligomeric Matrix Protein (COMP) and rabbit immunoglobulin G (IgG) in a single sensor. The developed biosensor consists of one-port surface acoustic wave (SAW) reflective delay lines on a 41° YX LiNbO3 piezoelectric substrate, a poly(methyl methacrylate) (PMMA) waveguide layer, and two different sensitive films. The Love wave biosensor was wirelessly characterized using two antennas and a network analyzer. The binding of the analytes to the sensitive layers induced a large change in the time positions of the original reflection peaks mainly due to the mass loading effect. The assessed time shifts in the reflection peaks were matched well with the predicted values from coupling of mode (COM) modeling. The sensitivities evaluated from the sensitive films were ~15 deg/µg/mL for the rabbit IgG and ~1.8 deg/ng/mL for COMP.

  9. Wireless and Simultaneous Detections of Multiple Bio-Molecules in a Single Sensor Using Love Wave Biosensor

    Directory of Open Access Journals (Sweden)

    Haekwan Oh

    2014-11-01

    Full Text Available A Love wave-based biosensor with a 440 MHz center frequency was developed for the simultaneous detection of two different analytes of Cartilage Oligomeric Matrix Protein (COMP) and rabbit immunoglobulin G (IgG) in a single sensor. The developed biosensor consists of one-port surface acoustic wave (SAW) reflective delay lines on a 41° YX LiNbO3 piezoelectric substrate, a poly(methyl methacrylate) (PMMA) waveguide layer, and two different sensitive films. The Love wave biosensor was wirelessly characterized using two antennas and a network analyzer. The binding of the analytes to the sensitive layers induced a large change in the time positions of the original reflection peaks mainly due to the mass loading effect. The assessed time shifts in the reflection peaks were matched well with the predicted values from coupling of mode (COM) modeling. The sensitivities evaluated from the sensitive films were ~15 deg/µg/mL for the rabbit IgG and ~1.8 deg/ng/mL for COMP.

  10. Method for Stereo Mapping Based on Objectarx and Pipeline Technology

    Science.gov (United States)

    Liu, F.; Chen, T.; Lin, Z.; Yang, Y.

    2012-07-01

    Stereo mapping is an important way to acquire 4D production. Based on the development of stereo mapping and the characteristics of ObjectARX and pipeline technology, a new stereo mapping scheme is proposed that enables interaction between AutoCAD and a digital photogrammetry system. An experiment was conducted to verify its feasibility using the software MAP-AT (Modern Aerial Photogrammetry Automatic Triangulation). The experimental results show that this scheme is feasible and is significant for integrating data acquisition and editing.

  11. METHOD FOR STEREO MAPPING BASED ON OBJECTARX AND PIPELINE TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    F. Liu

    2012-07-01

    Full Text Available Stereo mapping is an important way to acquire 4D production. Based on the development of stereo mapping and the characteristics of ObjectARX and pipeline technology, a new stereo mapping scheme is proposed that enables interaction between AutoCAD and a digital photogrammetry system. An experiment was conducted to verify its feasibility using the software MAP-AT (Modern Aerial Photogrammetry Automatic Triangulation). The experimental results show that this scheme is feasible and is significant for integrating data acquisition and editing.

  12. A new method for non-invasive biomass determination based on stereo photogrammetry.

    Science.gov (United States)

    Syngelaki, Maria; Hardner, Matthias; Oberthuer, Patrick; Bley, Thomas; Schneider, Danilo; Lenk, Felix

    2018-03-01

    A novel, non-destructive method for the biomass estimation of biological samples on culture dishes was developed. To achieve this, a photogrammetric system was constructed, consisting of a digital single-lens reflex camera (DSLR), an illuminated platform where the culture dishes are positioned, and an Arduino board which controls the capturing process. The camera was mounted on a holder which set it at different tilt angles while the platform rotated, to capture images from different directions. Software based on stereo photogrammetry was developed for the three-dimensional (3D) reconstruction of the samples. The proof-of-concept was demonstrated in a series of experiments with plant tissue cultures, specifically calli cultures of Salvia fruticosa and Ocimum basilicum. Images of these cultures were acquired over a period of 14 days, and 3D reconstructions and volumetric data were obtained. The volumetric data correlated well with the experimental measurements and made the calculation of the specific growth rate, µmax, possible. The µmax value was 0.14 day⁻¹ for S. fruticosa samples and 0.16 day⁻¹ for O. basilicum. The developed method demonstrated the high potential of this photogrammetric approach in the biological sciences.
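
    The specific growth rate cited in the abstract follows from an exponential fit to the volume time series, µ = d(ln V)/dt. A minimal sketch with synthetic data; the paper does not give its exact fitting procedure, so a plain log-linear regression is assumed here.

    ```python
    import numpy as np

    def specific_growth_rate(t_days, volumes):
        """Specific growth rate mu [1/day] from a volume time series,
        via the slope of ln(V) over time (exponential-phase fit)."""
        slope, _ = np.polyfit(t_days, np.log(volumes), 1)
        return slope

    # Synthetic exponential growth at mu = 0.14 / day (the S. fruticosa value).
    t = np.arange(0, 15)                 # 14-day culture, daily imaging
    V = 2.0 * np.exp(0.14 * t)           # volumes in arbitrary units
    mu = specific_growth_rate(t, V)
    ```

    On noiseless exponential data the fit recovers µ exactly; with real volumetric measurements the regression would be restricted to the exponential phase of growth.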

  13. Human Collaborative Localization and Mapping in Indoor Environments with Non-Continuous Stereo

    Science.gov (United States)

    Guerra, Edmundo; Munguia, Rodrigo; Bolea, Yolanda; Grau, Antoni

    2016-01-01

    A new approach to the monocular simultaneous localization and mapping (SLAM) problem is presented in this work. Data obtained from additional bearing-only sensors deployed as wearable devices is fully fused into an Extended Kalman Filter (EKF). The wearable device is introduced in the context of a collaborative task within a human-robot interaction (HRI) paradigm, including the SLAM problem. Thus, based on the delayed inverse-depth feature initialization (DI-D) SLAM, data from the camera deployed on the human, capturing his/her field of view, is used to enhance the depth estimation of the robotic monocular sensor which maps and locates the device. The occurrence of overlapping between the views of both cameras is predicted through geometrical modelling, activating a pseudo-stereo methodology which allows the depth to be measured instantly by stochastic triangulation of matched points found through SIFT/SURF. Experimental validation is provided with real data captured as synchronized sequences of video and other data (relative pose of the secondary camera) and processed off-line. The sequences capture indoor trajectories representing the main challenges for a monocular SLAM approach, namely, singular trajectories and close turns with high angular velocities with respect to linear velocities. PMID:26927100
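
    The pseudo-stereo depth measurement described above reduces, at its core, to two-view triangulation of matched points. Below is a minimal linear (DLT) sketch with assumed projection matrices; the paper's stochastic uncertainty handling and the SIFT/SURF matching stage are omitted.

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) two-view triangulation of one matched point pair.
        P1, P2 are 3x4 projection matrices; x1, x2 are pixel coordinates."""
        A = np.vstack([x1[0] * P1[2] - P1[0],
                       x1[1] * P1[2] - P1[1],
                       x2[0] * P2[2] - P2[0],
                       x2[1] * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]                       # null vector of A (homogeneous point)
        return X[:3] / X[3]

    # Two cameras 0.5 m apart along x, identical intrinsics (assumed values).
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

    # Project a known 3D point into both views, then recover it.
    X_true = np.array([0.3, -0.2, 4.0])
    x1h = P1 @ np.append(X_true, 1.0); x1 = x1h[:2] / x1h[2]
    x2h = P2 @ np.append(X_true, 1.0); x2 = x2h[:2] / x2h[2]
    X_est = triangulate(P1, P2, x1, x2)
    ```

    With noiseless correspondences the DLT solution is exact; the "stochastic" element in the paper would additionally propagate the pixel and pose uncertainties through this step.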

  14. USING STEREO VISION TO SUPPORT THE AUTOMATED ANALYSIS OF SURVEILLANCE VIDEOS

    Directory of Open Access Journals (Sweden)

    M. Menze

    2012-07-01

    Full Text Available Video surveillance systems are no longer a collection of independent cameras, manually controlled by human operators. Instead, smart sensor networks are developed, able to fulfil certain tasks on their own and thus supporting security personnel by automated analyses. One well-known task is the derivation of people’s positions on a given ground plane from monocular video footage. An improved accuracy for the ground position as well as a more detailed representation of single salient people can be expected from a stereoscopic processing of overlapping views. Related work mostly relies on dedicated stereo devices or camera pairs with a small baseline. While this set-up is helpful for the essential step of image matching, the high accuracy potential of a wide baseline and the according good intersection geometry is not utilised. In this paper we present a stereoscopic approach, working on overlapping views of standard pan-tilt-zoom cameras which can easily be generated for arbitrary points of interest by an appropriate reconfiguration of parts of a sensor network. Experiments are conducted on realistic surveillance footage to show the potential of the suggested approach and to investigate the influence of different baselines on the quality of the derived surface model. Promising estimations of people’s position and height are retrieved. Although standard matching approaches show helpful results, future work will incorporate temporal dependencies available from image sequences in order to reduce computational effort and improve the derived level of detail.

  15. Error analysis in a stereo vision-based pedestrian detection sensor for collision avoidance applications.

    Science.gov (United States)

    Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real-time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for use in the automotive sector in applications such as autonomous pedestrian collision avoidance.
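
    The stereo quantization error analyzed in the abstract follows from the standard relation Z = f·b/d: a disparity error of Δd pixels maps to a depth error ΔZ ≈ Z²·Δd/(f·b), which grows quadratically with range. A sketch with illustrative parameters, not the paper's actual rig:

    ```python
    # Illustrative stereo parameters (assumed, not from the paper).
    f_px = 900.0      # focal length in pixels
    b = 0.30          # baseline [m]

    def depth_quantization_error(Z, dd=1.0):
        """Approximate depth error [m] at range Z [m] for a disparity
        error of dd pixels: dZ = Z^2 * dd / (f * b)."""
        return Z * Z * dd / (f_px * b)

    # Error at pedestrian-relevant ranges for a one-pixel disparity step.
    errors = {Z: depth_quantization_error(Z) for Z in (5.0, 10.0, 20.0)}
    ```

    Doubling the range quadruples the quantization error, which is why baseline and focal length must be chosen against the application's maximum detection range.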

  16. Robust stereo matching with trinary cross color census and triple image-based refinements

    Science.gov (United States)

    Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr

    2017-12-01

    For future 3D TV broadcasting systems and navigation applications, accurate stereo matching is necessary to precisely estimate the depth map from two separated cameras. In this paper, we first suggest a trinary cross color (TCC) census transform, which helps achieve an accurate raw disparity matching cost at low computational cost. Two-pass cost aggregation (TPCA) is used to compute the aggregation cost, and the disparity map is then obtained by a range winner-take-all (RWTA) process and a white hole filling procedure. To further enhance the accuracy performance, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Then, image-based refinements for the mismatched and occluded pixels are proposed to refine the classified errors. Finally, image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves considerably accurate disparity maps with reasonable computation cost.
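
    The census idea underlying the proposed TCC transform can be sketched as follows: each neighbour is coded as a trit (+1/0/−1) depending on whether it is brighter than, close to, or darker than the centre pixel, and the matching cost counts differing trits. This is a simplified single-channel 3×3 version; the paper's cross-shaped window and per-color-channel handling are not reproduced.

    ```python
    import numpy as np

    def trinary_census(img, eps=2):
        """Trinary census over a 3x3 window: code each neighbour +1/0/-1
        relative to the centre pixel, with tolerance eps."""
        h, w = img.shape
        out = np.zeros((h - 2, w - 2, 8), dtype=np.int8)
        c = img[1:-1, 1:-1].astype(int)
        k = 0
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                n = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(int)
                out[:, :, k] = (n > c + eps).astype(np.int8) - (n < c - eps).astype(np.int8)
                k += 1
        return out

    def census_cost(a, b):
        """Hamming-style matching cost: number of differing trits."""
        return int(np.count_nonzero(a != b))
    ```

    Compared to the binary census, the zero trit makes the code robust to small intensity noise around the centre value, which is the motivation for the trinary variant.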

  17. Selecting the geology filter wavelengths for the ExoMars Panoramic Camera instrument

    OpenAIRE

    Cousins, C. R.; Gunn, M.; Prosser, B. J.; Barnes, D. P.; Crawford, I. A.; Griffiths, A. D.; Davis, L. E.; Coates, A. J.

    2012-01-01

    The Panoramic Camera (PanCam) instrument will provide surface remote sensing data for the ExoMars mission. A combination of wide-angle stereo, multispectral, and high resolution imagery will generate contextual geological information to help inform which scientific targets should be selected for drilling and analysis. One component of the PanCam dataset is narrowband multispectral imaging in the visible to near infrared, which utilises a dedicated set of 12 “geology” filters of predetermined ...

  18. StereoBox: A Robust and Efficient Solution for Automotive Short-Range Obstacle Detection

    Directory of Open Access Journals (Sweden)

    Alberto Broggi

    2007-07-01

    Full Text Available This paper presents a robust method for close-range obstacle detection with arbitrarily aligned stereo cameras. System calibration is performed by means of a dense grid to remove perspective and lens distortion after a direct mapping between image pixels and world points. Obstacle detection is based on the differences between the left and right images after the transformation phase; with a polar histogram, it is possible to detect vertical structures and to reject noise and small objects. Found objects' world coordinates are transmitted via CAN bus; the driver can also be warned through an audio interface. The proposed algorithm can be useful in different automotive applications requiring real-time segmentation without any assumption on background. Experimental results proved the system to be robust in several environmental conditions. In particular, the system has been tested to investigate the presence of obstacles in blind spot areas around heavy goods vehicles (HGVs) and has been mounted on three different prototypes at different heights.

  19. StereoBox: A Robust and Efficient Solution for Automotive Short-Range Obstacle Detection

    Directory of Open Access Journals (Sweden)

    Broggi Alberto

    2007-01-01

    Full Text Available This paper presents a robust method for close-range obstacle detection with arbitrarily aligned stereo cameras. System calibration is performed by means of a dense grid to remove perspective and lens distortion after a direct mapping between image pixels and world points. Obstacle detection is based on the differences between the left and right images after the transformation phase; with a polar histogram, it is possible to detect vertical structures and to reject noise and small objects. Found objects' world coordinates are transmitted via CAN bus; the driver can also be warned through an audio interface. The proposed algorithm can be useful in different automotive applications requiring real-time segmentation without any assumption on background. Experimental results proved the system to be robust in several environmental conditions. In particular, the system has been tested to investigate the presence of obstacles in blind spot areas around heavy goods vehicles (HGVs) and has been mounted on three different prototypes at different heights.

  20. Dynamic Shape Capture of Free-Swimming Aquatic Life using Multi-view Stereo

    Science.gov (United States)

    Daily, David

    2017-11-01

    The reconstruction and tracking of swimming fish has in the past been restricted to flumes, small volumes, or sparse point tracking in large tanks. The purpose of this research is to use an array of cameras to automatically track 50-100 points on the surface of a fish using the multi-view stereo computer vision technique. The method is non-invasive, thus allowing the fish to swim freely in a large volume and to perform more advanced maneuvers such as rolling, darting, stopping, and reversing which have not been studied. The techniques for obtaining and processing the 3D kinematics and maneuvers of tuna, sharks, stingrays, and other species will be presented and compared. This work was carried out with the National Aquarium and the Naval Undersea Warfare Center.

  1. Motorcycles That See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles

    Directory of Open Access Journals (Sweden)

    Gustavo Gil

    2018-01-01

    Full Text Available Advanced driver assistance systems, ADAS, have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensors for advanced motorcycle safety applications.

  2. Motorcycles that See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles

    Science.gov (United States)

    2018-01-01

    Advanced driver assistance systems, ADAS, have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensors for advanced motorcycle safety applications. PMID:29351267

  3. Motorcycles That See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles.

    Science.gov (United States)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-01-19

    Advanced driver assistance systems, ADAS, have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensors for advanced motorcycle safety applications.

  4. Educational Applications for Digital Cameras.

    Science.gov (United States)

    Cavanaugh, Terence; Cavanaugh, Catherine

    1997-01-01

    Discusses uses of digital cameras in education. Highlights include advantages and disadvantages, digital photography assignments and activities, camera features and operation, applications for digital images, accessory equipment, and comparisons between digital cameras and other digitizers. (AEF)

  5. Crater Morphometry and Crater Degradation on Mercury: Mercury Laser Altimeter (MLA) Measurements and Comparison to Stereo-DTM Derived Results

    Science.gov (United States)

    Leight, C.; Fassett, C. I.; Crowley, M. C.; Dyar, M. D.

    2017-01-01

    Two types of measurements of Mercury's surface topography were obtained by the MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) spacecraft: laser ranging data from the Mercury Laser Altimeter (MLA) [1], and stereo imagery from the Mercury Dual Imaging System (MDIS) camera [e.g., 2, 3]. MLA data provide precise and accurate elevation measurements, but with sparse spatial sampling except at the highest northern latitudes. Digital terrain models (DTMs) from MDIS have superior resolution but less vertical accuracy, limited approximately to the pixel resolution of the original images (in the case of [3], 15-75 m). Last year [4], we reported topographic measurements of craters in the D=2.5 to 5 km diameter range from stereo images and suggested that craters on Mercury degrade more quickly than on the Moon (by a factor of up to approximately 10×). However, we listed several alternative explanations for this finding, including the hypothesis that the lower depth/diameter ratios we observe might be a result of the resolution and accuracy of the stereo DTMs. Thus, additional measurements were undertaken using MLA data to examine the morphometry of craters in this diameter range and to assess whether the faster crater degradation rates proposed to occur on Mercury are robust.

  6. Clustered features for use in stereo vision SLAM

    CSIR Research Space (South Africa)

    Joubert, D

    2010-07-01

    Full Text Available SLAM, or simultaneous localization and mapping, is a key component in the development of truly independent robots. Vision-based SLAM utilising stereo vision is a promising approach to SLAM but it is computationally expensive and difficult...

  7. Theatre as a creator of (stereo)types / Anneli Saro

    Index Scriptorium Estoniae

    Saro, Anneli, 1968-

    2006-01-01

    Introduces the themes of the conference "Theatre as a Creator of Social and Cultural (Stereo)types", held on 27 March at the University of Tartu History Museum and organized by the Estonian Theatre Researchers' Association together with the Chair of Theatre Research and Literary Theory at the University of Tartu.

  8. MISR Level 2 TOA/Cloud Stereo parameters V002

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the Level 2 TOA/Cloud Stereo Product. It contains the Stereoscopically Derived Cloud Mask (SDCM), cloud winds, Reflecting Level Reference Altitude (RLRA),...

  9. The laser scanning camera

    International Nuclear Information System (INIS)

    Jagger, M.

    The prototype development of a novel lensless camera is reported, which utilises a laser beam scanned in a raster by means of orthogonal vibrating mirrors to illuminate the field of view. Laser light reflected from the scene is picked up by a conveniently sited photosensitive device and used to modulate the brightness of a T.V. display scanned in synchronism with the moving laser beam, hence producing a T.V. image of the scene. The camera, which needs no external lighting system, can act in a wide-angle mode or, by varying the size and position of the raster, can be made to zoom in to view in detail any object within a 40° overall viewing angle. The resolution and performance of the camera are described and a comparison of these aspects is made with conventional T.V. cameras. (author)

  10. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  11. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce better images than the conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable two-dimensional position identification. Details of the electronic processing circuits are given, and the problems and limitations introduced by noise are discussed in full. (U.K.)

  12. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used re...... such as the circular camera movement. Keywords: embodied perception, embodied style, explicit narration, interpretation, style pattern, television style...

  13. Optimized Progressive Coding of Stereo Images Using Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Torsten Palfner

    2003-06-01

    Full Text Available In this paper, a compression algorithm is introduced which allows the efficient storage and transmission of stereo images. The coder uses a block-based disparity estimation/compensation technique to decorrelate the image pair. To code both images progressively, we have adapted the well-known SPIHT coder to stereo images. The results presented in this paper are better than any other results published so far.
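
    The block-based disparity estimation step used to decorrelate the image pair can be sketched as integer-pixel SAD block matching. This is a minimal illustration of the idea; the coder's actual search strategy and any rate-distortion weighting are not reproduced.

    ```python
    import numpy as np

    def block_disparity(left, right, block=8, max_d=16):
        """For each block of the left image, find the horizontal shift d
        that minimizes the sum of absolute differences (SAD) against the
        right image. Returns a per-block integer disparity map."""
        h, w = left.shape
        disp = np.zeros((h // block, w // block), dtype=int)
        for by in range(h // block):
            for bx in range(w // block):
                y, x = by * block, bx * block
                patch = left[y:y + block, x:x + block].astype(int)
                best, best_d = None, 0
                for d in range(0, min(max_d, x) + 1):
                    cand = right[y:y + block, x - d:x - d + block].astype(int)
                    sad = np.abs(patch - cand).sum()
                    if best is None or sad < best:
                        best, best_d = sad, d
                disp[by, bx] = best_d
        return disp
    ```

    In the coder, the right image would be predicted from the left by these block shifts, and only the (smaller) residual plus the disparity field would be wavelet-coded.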

  14. Stereo-separations of Peptides by Capillary Electrophoresis and Chromatography

    OpenAIRE

    sprotocols

    2014-01-01

    Authors: Afzal Hussain, Iqbal Hussain, Mohamed F. Al-Ajmi & Imran Ali ### Abstract Small peptides (di-, tri-, tetra-, penta-, hexa-, etc.) control many chemical and biological processes. The biological importance of stereomers of peptides is of great value. The stereo-separations of peptides are gaining importance in the biological and medicinal sciences and in the pharmaceutical industry. There is a great need for experimental protocols for the stereo-separation of peptides. The various c...

  15. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from ¹⁶N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with ¹⁶N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  16. Creation Greenhouse Environment Map Using Localization of Edge of Cultivation Platforms Based on Stereo Vision

    Directory of Open Access Journals (Sweden)

    A Nasiri

    2017-10-01

    Full Text Available Introduction Stereo vision means the capability of extracting depth by analyzing two images taken from different angles of one scene. The result of stereo vision is a collection of three-dimensional points which describes the details of the scene at a level proportional to the resolution of the obtained images. Automatic vehicle steering and crop growth monitoring are two important operations in precision agriculture. The essential aspects of automated steering are the position and orientation of the agricultural equipment in relation to the crop row, detection of obstacles, and path planning between the crop rows. The developed map can provide this information in real time. Machine vision can perform these tasks in order to execute operations such as cultivation, spraying and harvesting. In the greenhouse environment, it is possible to develop a map and perform automatic control by detecting and localizing the cultivation platforms as the main moving obstacle. The current work develops a method based on stereo vision for detecting and localizing platforms, and then providing a two-dimensional map of the cultivation platforms in the greenhouse environment. Materials and Methods In this research, two webcams made by Microsoft Corporation, with a resolution of 960×544, are connected to the computer via USB2 in order to produce a parallel stereo camera. Due to the structure of the cultivation platforms, the number of points in the point cloud is decreased by extracting only the upper and lower edges of the platform. The proposed method aims at extracting the edges based on depth-discontinuity features in the region of the platform edge. By getting the disparity image of the platform edges from the rectified stereo images and translating its data to 3D space, the point cloud model of the environment is constructed. Then, by projecting the points onto the XZ plane and putting local maps together
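
    For a rectified parallel stereo pair like the webcam rig described above, the disparity-to-depth step and the XZ-plane projection can be sketched as follows. The focal length, baseline, and principal point below are illustrative assumptions (the principal point is simply taken at the center of a 960×544 frame), not the paper's calibration values.

```python
import numpy as np

# Assumed parallel-stereo parameters, for illustration only
f = 700.0   # focal length in pixels
B = 0.12    # baseline between the two webcams, in metres

def disparity_to_points(us, vs, ds, f, B, cx=480.0, cy=272.0):
    """Triangulate pixel coordinates (us, vs) with disparities ds into
    3D points for a rectified parallel stereo pair."""
    ds = np.asarray(ds, dtype=float)
    Z = f * B / ds                       # depth from disparity
    X = (np.asarray(us) - cx) * Z / f    # back-project x
    Y = (np.asarray(vs) - cy) * Z / f    # back-project y
    return np.column_stack([X, Y, Z])

# Two edge pixels with known disparities -> 3D points
pts = disparity_to_points([480, 520], [272, 300], [42.0, 21.0], f, B)

# Project the point cloud onto the XZ plane to build the 2D map
xz_map = pts[:, [0, 2]]
```

    The larger the disparity, the closer the point: a disparity of 42 px here yields a depth of 2 m, while 21 px yields 4 m.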

  17. Deployable Wireless Camera Penetrators

    Science.gov (United States)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators also can be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras to inspect the surfaces of planetary bodies use both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude with the boost in transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators also can be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking at a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm³. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator. 
A low-volume array of such penetrator cameras could be deployed from an

  18. STEREO PHOTO HYDROGEL, A PROCESS OF MAKING SAID STEREO PHOTO HYDROGEL, POLYMERS FOR USE IN MAKING SUCH HYDROGEL AND A PHARMACEUTICAL COMPRISING SAID POLYMERS

    NARCIS (Netherlands)

    Hiemstra, C.; Zhong, Zhiyuan; Feijen, Jan

    2008-01-01

    The Invention relates to a stereo photo hydrogel formed by stereo complexed and photo cross-linked polymers, which polymers comprise at least two types of polymers having at least one hydrophilic component, at least one hydrophobic mutually stereo complexing component, and at least one of the types

  19. The STEREO IMPACT Suprathermal Electron (STE) Instrument

    Science.gov (United States)

    Lin, R. P.; Curtis, D. W.; Larson, D. E.; Luhmann, J. G.; McBride, S. E.; Maier, M. R.; Moreau, T.; Tindall, C. S.; Turin, P.; Wang, Linghua

    2008-04-01

    The Suprathermal Electron (STE) instrument, part of the IMPACT investigation on both spacecraft of NASA’s STEREO mission, is designed to measure electrons from ˜2 to ˜100 keV. This is the primary energy range for impulsive electron/³He-rich energetic particle events that are the most frequently occurring transient particle emissions from the Sun, for the electrons that generate solar type III radio emission, for the shock-accelerated electrons that produce type II radio emission, and for the superhalo electrons (whose origin is unknown) that are present in the interplanetary medium even during the quietest times. These electrons are ideal for tracing heliospheric magnetic field lines back to their source regions on the Sun and for determining field line lengths, thus probing the structure of interplanetary coronal mass ejections (ICMEs) and of the ambient inner heliosphere. STE utilizes arrays of small, passively cooled thin-window silicon semiconductor detectors, coupled to state-of-the-art pulse-reset front-end electronics, to detect electrons down to ˜2 keV with about 2 orders of magnitude increase in sensitivity over previous sensors at energies below ˜20 keV. STE provides an energy resolution of ΔE/E ˜ 10–25% and an angular resolution of ˜20° over two oppositely directed ˜80°×80° fields of view centered on the nominal Parker spiral field direction.

  20. Stereo and regioselectivity in ''Activated'' tritium reactions

    International Nuclear Information System (INIS)

    Ehrenkaufer, R.L.E.; Hembree, W.C.; Wolf, A.P.

    1988-01-01

    To investigate the stereo and positional selectivity of the microwave discharge activation (MDA) method, the tritium labeling of several amino acids was undertaken. The labeling of L-valine and of the diastereomeric pair L-isoleucine and L-alloisoleucine showed less than statistical labeling at the α-amino C-H position, mostly with retention of configuration. Labeling predominated at the single β C-H tertiary (methyne) position. The labeling of L-valine and L-proline with and without positive charge on the α-amino group resulted in large increases in specific activity (greater than 10-fold) when the positive charge was removed by labeling them as their sodium carboxylate salts. Tritium NMR of L-proline labeled both as its zwitterion and as its sodium salt also showed large differences in the tritium distribution within the molecule. The distribution preferences in each of the charge states are suggestive of labeling by an electrophilic-like tritium species. 16 refs., 5 tabs

  1. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    Science.gov (United States)

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, the method of image distortion correction is proposed. The image data required by image distortion correction come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images. Linear and polynomial fitting methods are applied to correct image distortions. Second, shape deformation features of the disparity distribution are discussed, and the method of disparity distortion correction is proposed; a polynomial fitting method is applied to correct disparity distortion. Third, a microscopic vision model is derived, which consists of two parts: the initial vision model and the residual compensation model. We derive the initial vision model by analyzing the direct mapping relationship between object and image points. The residual compensation model is derived based on the residual analysis of the initial vision model. The results show that with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison of our model with the traditional pinhole camera model shows that the two models have a similar reconstruction precision for X coordinates. However, the traditional pinhole camera model has a lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.
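
    The polynomial-fitting step used for the distortion corrections above can be illustrated on synthetic data. The cubic distortion term and the fitting order below are assumptions for the sketch, not the paper's actual distortion model: a known straight calibration line appears deformed, the deviation is fitted with a low-order polynomial, and the fit is subtracted.

```python
import numpy as np

# Synthetic calibration data: a straight line plus an assumed
# smooth cubic distortion (stand-in for the observed deformation)
x = np.linspace(0.0, 1.0, 50)
true_line = 2.0 * x + 1.0
distortion = 0.05 * x**2 - 0.02 * x**3
observed = true_line + distortion

# Fit the residual deformation with a 3rd-order polynomial and
# subtract it to obtain the corrected measurements
coeffs = np.polyfit(x, observed - true_line, deg=3)
corrected = observed - np.polyval(coeffs, x)

rms_before = np.sqrt(np.mean((observed - true_line) ** 2))
rms_after = np.sqrt(np.mean((corrected - true_line) ** 2))
```

    In practice the "true" reference comes from the known geometry of the calibration sample's grid points, and a separate fit is made for the disparity distortion.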

  2. The Dark Energy Camera

    Science.gov (United States)

    Flaugher, B.; Diehl, H. T.; Honscheid, K.; Abbott, T. M. C.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Antonik, M.; Ballester, O.; Beaufore, L.; Bernstein, G. M.; Bernstein, R. A.; Bigelow, B.; Bonati, M.; Boprie, D.; Brooks, D.; Buckley-Geer, E. J.; Campa, J.; Cardiel-Sas, L.; Castander, F. J.; Castilla, J.; Cease, H.; Cela-Ruiz, J. M.; Chappa, S.; Chi, E.; Cooper, C.; da Costa, L. N.; Dede, E.; Derylo, G.; DePoy, D. L.; de Vicente, J.; Doel, P.; Drlica-Wagner, A.; Eiting, J.; Elliott, A. E.; Emes, J.; Estrada, J.; Fausti Neto, A.; Finley, D. A.; Flores, R.; Frieman, J.; Gerdes, D.; Gladders, M. D.; Gregory, B.; Gutierrez, G. R.; Hao, J.; Holland, S. E.; Holm, S.; Huffman, D.; Jackson, C.; James, D. J.; Jonas, M.; Karcher, A.; Karliner, I.; Kent, S.; Kessler, R.; Kozlovsky, M.; Kron, R. G.; Kubik, D.; Kuehn, K.; Kuhlmann, S.; Kuk, K.; Lahav, O.; Lathrop, A.; Lee, J.; Levi, M. E.; Lewis, P.; Li, T. S.; Mandrichenko, I.; Marshall, J. L.; Martinez, G.; Merritt, K. W.; Miquel, R.; Muñoz, F.; Neilsen, E. H.; Nichol, R. C.; Nord, B.; Ogando, R.; Olsen, J.; Palaio, N.; Patton, K.; Peoples, J.; Plazas, A. A.; Rauch, J.; Reil, K.; Rheault, J.-P.; Roe, N. A.; Rogers, H.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schindler, R. H.; Schmidt, R.; Schmitt, R.; Schubnell, M.; Schultz, K.; Schurter, P.; Scott, L.; Serrano, S.; Shaw, T. M.; Smith, R. C.; Soares-Santos, M.; Stefanik, A.; Stuermer, W.; Suchyta, E.; Sypniewski, A.; Tarle, G.; Thaler, J.; Tighe, R.; Tran, C.; Tucker, D.; Walker, A. R.; Wang, G.; Watson, M.; Weaverdyck, C.; Wester, W.; Woods, R.; Yanny, B.; DES Collaboration

    2015-11-01

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6-9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  3. THE DARK ENERGY CAMERA

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, B.; Diehl, H. T.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Buckley-Geer, E. J. [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Honscheid, K. [Center for Cosmology and Astro-Particle Physics, The Ohio State University, Columbus, OH 43210 (United States); Abbott, T. M. C.; Bonati, M. [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile); Antonik, M.; Brooks, D. [Department of Physics and Astronomy, University College London, Gower Street, London, WC1E 6BT (United Kingdom); Ballester, O.; Cardiel-Sas, L. [Institut de Física d’Altes Energies, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Barcelona (Spain); Beaufore, L. [Department of Physics, The Ohio State University, Columbus, OH 43210 (United States); Bernstein, G. M. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Bernstein, R. A. [Carnegie Observatories, 813 Santa Barbara St., Pasadena, CA 91101 (United States); Bigelow, B.; Boprie, D. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States); Campa, J. [Centro de Investigaciones Energèticas, Medioambientales y Tecnológicas (CIEMAT), Madrid (Spain); Castander, F. J., E-mail: diehl@fnal.gov [Institut de Ciències de l’Espai, IEEC-CSIC, Campus UAB, Facultat de Ciències, Torre C5 par-2, E-08193 Bellaterra, Barcelona (Spain); Collaboration: DES Collaboration; and others

    2015-11-15

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  4. Self calibration of the stereo vision system of the Chang'e-3 lunar rover based on the bundle block adjustment

    Science.gov (United States)

    Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan

    2017-06-01

    The Chang'e-3 was the first lunar soft-landing probe of China. It was composed of the lander and the lunar rover. The Chang'e-3 successfully landed in the northwest of the Mare Imbrium on December 14, 2013. The lunar rover completed movement, imaging and geological survey tasks after landing. The lunar rover was equipped with a stereo vision system which was made up of the Navcam system, the mast mechanism and the inertial measurement unit (IMU). The Navcam system was composed of two cameras with fixed focal lengths. The mast mechanism was a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEM) of the surrounding region and plan the moving paths of the lunar rover. The stereo vision system must be calibrated before use. A control field could be built to calibrate the stereo vision system in the laboratory on the earth. However, the parameters of the stereo vision system would change after the launch, the orbital changes, the braking and the landing. Therefore, the stereo vision system should be self-calibrated on the moon. An integrated self-calibration method based on the bundle block adjustment is proposed in this paper. The bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. The stereo vision system can be self-calibrated with the proposed method under the unknown lunar environment, and all parameters can be estimated simultaneously. The experiment was conducted in the ground lunar simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method and the weighted least-squares method. The analysis showed that the accuracy of the proposed method was superior to those of the other methods. 
Finally, the proposed method was practically used to self-calibrate the
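
    The core quantity in a bundle block adjustment is the reprojection residual evaluated for each ray. Below is a minimal pinhole-camera sketch of that residual; the pose and intrinsics are assumed illustrative values, not the Navcam calibration. An actual adjustment would stack these residuals for every observed point in every image and minimize them over all camera and point parameters simultaneously, e.g. with a nonlinear least-squares solver.

```python
import numpy as np

def reprojection_residual(point_w, R, t, f, cx, cy, observed_uv):
    """Residual between an observed image point and the projection of a
    world point through a pinhole camera with rotation R, translation t,
    focal length f (pixels), and principal point (cx, cy).
    Bundle adjustment minimizes the stack of these residuals."""
    p_cam = R @ point_w + t               # world frame -> camera frame
    u = f * p_cam[0] / p_cam[2] + cx      # perspective projection
    v = f * p_cam[1] / p_cam[2] + cy
    return np.array([u, v]) - np.asarray(observed_uv)

# A world point on the optical axis projects exactly onto the
# principal point, so the residual for a perfect observation is zero.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
res = reprojection_residual(np.zeros(3), R, t, 1000.0, 512.0, 512.0,
                            observed_uv=(512.0, 512.0))
```

    In a full self-calibration, the mast-joint parameters and the relative pose of the two Navcam cameras would enter R and t, and all would be estimated jointly.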

  5. A global single-sensor analysis of 2002-2011 tropospheric nitrogen dioxide trends observed from space

    Science.gov (United States)

    Schneider, P.; van der A, R. J.

    2012-08-01

    A global nine-year archive of monthly tropospheric NO2 data acquired by the SCanning Imaging Absorption spectroMeter for Atmospheric CartograpHY (SCIAMACHY) instrument was analyzed with respect to trends between August 2002 and August 2011. In the past, similar studies relied on combining data from multiple sensors; however, the length of the SCIAMACHY data set now for the first time allows utilization of a consistent time series from just a single sensor for mapping NO2 trends at comparatively high horizontal resolution (0.25°). This study provides an updated analysis of global patterns in NO2 trends and finds that previously reported decreases in tropospheric NO2 over Europe and the United States as well as strong increases over China and several megacities in Asia have continued in recent years. Positive trends of up to 4.05 (±0.41) × 10¹⁵ molecules cm⁻² yr⁻¹ and up to 19.7 (±1.9) % yr⁻¹ were found over China, with the regional mean trend being 7.3 (±3.1) % yr⁻¹. The megacity with the most rapid relative increase was found to be Dhaka in Bangladesh. Subsequently focusing on Europe, the study further analyzes trends by country and finds significantly decreasing trends for seven countries ranging from -3.0 (±1.6) % yr⁻¹ to -4.5 (±2.3) % yr⁻¹. A comparison of the satellite data with station data indicates that the trends derived from both sources show substantial differences on the station scale, i.e., when comparing a station trend directly with the equivalent satellite-derived trend at the same location, but provide quite similar large-scale spatial patterns. Finally, the SCIAMACHY-derived NO2 trends are compared with equivalent trends in NO2 concentration computed using the Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe (EMEP) model. The results show that the spatial patterns in trends computed from both data sources mostly agree in Central and Western Europe, whereas substantial differences

  6. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Full Text Available Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations about the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support weight based are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
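
    The edge-directed restriction of a fixed-support SAD matcher described above can be sketched as follows. The window size, disparity range, and edge mask are assumptions for illustration, and a hardware implementation would of course pipeline this computation rather than loop in software; the point is only that non-edge pixels are skipped, shrinking the search space.

```python
import numpy as np

def sad_disparity_at_edges(left, right, edge_mask, max_disp, half=1):
    """Fixed-support SAD stereo matching evaluated only at edge pixels.
    left, right: rectified grayscale images; edge_mask: boolean mask of
    edge pixels in the left image; max_disp: disparity search range."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            if not edge_mask[y, x]:
                continue                  # edge-directed: skip non-edges
            ref = left[y-half:y+half+1, x-half:x+half+1].astype(int)
            best_cost, best_d = None, 0
            for d in range(max_disp + 1):
                cand = right[y-half:y+half+1,
                             x-d-half:x-d+half+1].astype(int)
                cost = np.abs(ref - cand).sum()   # sum of abs differences
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

    The adaptive support weight variant replaces the uniform SAD window with per-pixel weights, at a correspondingly higher hardware cost.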

  7. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    Science.gov (United States)

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  8. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    Directory of Open Access Journals (Sweden)

    Basam Musleh

    2016-09-01

    Full Text Available Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  9. Communities, Cameras, and Conservation

    Science.gov (United States)

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  10. Mars Observer camera

    Science.gov (United States)

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; Veverka, J.; Ravine, M. A.; Soulanille, T. A.

    1992-01-01

    The Mars Observer camera (MOC) is a three-component system (one narrow-angle and two wide-angle cameras) designed to take high spatial resolution pictures of the surface of Mars and to obtain lower spatial resolution, synoptic coverage of the planet's surface and atmosphere. The cameras are based on the 'push broom' technique; that is, they do not take 'frames' but rather build pictures, one line at a time, as the spacecraft moves around the planet in its orbit. MOC is primarily a telescope for taking extremely high resolution pictures of selected locations on Mars. Using the narrow-angle camera, areas ranging from 2.8 km x 2.8 km to 2.8 km x 25.2 km (depending on available internal digital buffer memory) can be photographed at about 1.4 m/pixel. Additionally, lower-resolution pictures (to a lowest resolution of about 11 m/pixel) can be acquired by pixel averaging; these images can be much longer, ranging up to 2.8 x 500 km at 11 m/pixel. High-resolution data will be used to study sediments and sedimentary processes, polar processes and deposits, volcanism, and other geologic/geomorphic processes.

  11. The world's fastest camera

    CERN Multimedia

    Piquepaille, Roland

    2006-01-01

    This image processor is not your typical digital camera. It took 20 people six years and $6 million to build the "Regional Calorimeter Trigger" (RCT), which will be a component of the Compact Muon Solenoid (CMS) experiment, one of the detectors on the Large Hadron Collider (LHC) in Geneva, Switzerland (1 page)

  12. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    researchers, cameras, and filmed subjects already inherently comprise analytical decisions. It is these ethnographic qualities inherent in audiovisual and photographic imagery that make it of particular value to a participatory anthropological enterprise that seeks to resist analytic closure and seeks instead...

  13. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e., automatically controlling the virtual...

  14. The PAU Camera

    Science.gov (United States)

    Casas, R.; Ballester, O.; Cardiel-Sas, L.; Carretero, J.; Castander, F. J.; Castilla, J.; Crocce, M.; de Vicente, J.; Delfino, M.; Fernández, E.; Fosalba, P.; García-Bellido, J.; Gaztañaga, E.; Grañena, F.; Jiménez, J.; Madrid, F.; Maiorino, M.; Martí, P.; Miquel, R.; Neissner, C.; Ponce, R.; Sánchez, E.; Serrano, S.; Sevilla, I.; Tonello, N.; Troyano, I.

    2011-11-01

    The PAU Camera (PAUCam) is a wide-field camera designed to be mounted at the William Herschel Telescope (WHT) prime focus, located at the Observatorio del Roque de los Muchachos on the island of La Palma (Canary Islands). Its primary function is to carry out a cosmological survey, the PAU Survey, covering an area of several hundred square degrees of sky. Its purpose is to determine positions and distances using photometric redshift techniques. To achieve accurate photo-z's, PAUCam will be equipped with 40 narrow-band filters covering the range from 450 to 850 nm, and six broad-band filters, those of the SDSS system plus the Y band. To fully cover the focal plane delivered by the telescope optics, 18 2k x 4k CCDs are needed. The pixels are square, 15 μm in size. The optical characteristics of the prime focus corrector deliver a field-of-view where eight of these CCDs will have an illumination of more than 95%, covering a field of 40 arc minutes. The rest of the CCDs will occupy the vignetted region, extending the field diameter to one degree. Two of the CCDs will be devoted to auto-guiding. This camera has some innovative features. Firstly, both the broad-band and the narrow-band filters will be placed in mobile trays, hosting 16 such filters at most. Those are located inside the cryostat, a few millimeters in front of the CCDs when observing. Secondly, a pressurized liquid nitrogen tank outside the camera will feed a boiler inside the cryostat with a controlled mass flow. The read-out electronics will use the Monsoon architecture, originally developed by NOAO, modified and manufactured by our team in the framework of the DECam project (the camera used in the DES Survey). PAUCam will also be available to the astronomical community of the WHT.

  15. Design of an experimental four-camera setup for enhanced 3D surface reconstruction in microsurgery

    Directory of Open Access Journals (Sweden)

    Marzi Christian

    2017-09-01

    Full Text Available Future fully digital surgical visualization systems enable a wide range of new options. Owing to optomechanical limitations, a main disadvantage of today’s surgical microscopes is their inability to provide arbitrary perspectives to more than two observers. In a fully digital microscopic system, multiple arbitrary views can be generated from a 3D reconstruction. Modern surgical microscopes allow replacing the eyepieces by cameras in order to record stereoscopic videos. A reconstruction from these videos can only contain the amount of detail the recording camera system gathers from the scene. Therefore, covered surfaces can result in a faulty reconstruction for deviating stereoscopic perspectives. By adding cameras recording the object from different angles, additional information about the scene is acquired, allowing the reconstruction to be improved. Our approach is to use a fixed four-camera setup as a front-end system to capture enhanced 3D topography of a pseudo-surgical scene. This experimental setup would provide images for the reconstruction algorithms and for the generation of multiple observing stereo perspectives. The concept of the designed setup is based on the common main objective (CMO) principle of current surgical microscopes. These systems are well established and optically mature. Furthermore, the CMO principle allows a more compact design and a lower calibration effort than cameras with separate optics. Behind the CMO, four pupils separate the four channels, each recorded by one camera. The designed system captures an area of approximately 28 mm × 28 mm with four cameras, thus allowing images from six different stereo perspectives to be processed. In order to verify the setup, it is modelled in silico. It can be used in further studies to test algorithms for 3D reconstruction from up to four perspectives and to provide information about the impact of additionally recorded perspectives on the enhancement of a reconstruction.

  16. Combined holography and thermography in a single sensor through image-plane holography at thermal infrared wavelengths.

    Science.gov (United States)

    Georges, Marc P; Vandenrijt, Jean-François; Thizy, Cédric; Alexeenko, Igor; Pedrini, Giancarlo; Vollheim, Birgit; Lopez, Ion; Jorge, Iagoba; Rochet, Jonathan; Osten, Wolfgang

    2014-10-20

    Holographic interferometry in the thermal wavelength range, combining a CO(2) laser and digital hologram recording with a microbolometer-array-based camera, allows temperature and surface-shape information about objects to be captured simultaneously. This is because the holograms are affected by the thermal background emitted by objects at room temperature. We explain the setup and the data processing that decouples the two types of information. This natural data fusion can be used to advantage in a variety of nondestructive testing applications.

  17. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    Science.gov (United States)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application combines MonoSLAM (Single Camera Simultaneous Localization and Mapping) with computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using an extended Kalman filter; the fiduciary markers provide a reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the AR marker is obtained by a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for the implementation. The system is able to provide a live view of the patient overlaid with solid models of tumors or anatomical structures, as well as the otherwise hidden part of a tool inside the skull.
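
The extended Kalman filter mentioned above alternates a motion-model prediction with a measurement update. As a minimal sketch of that predict/update cycle (not the paper's 6-DOF pose filter), the following tracks a 1D constant-velocity state; all names and noise values are illustrative:

```python
# Minimal 1D constant-velocity Kalman filter illustrating the
# predict/update cycle that an EKF applies to camera pose.
# Noise values q (process) and r (measurement) are illustrative.

def kf_step(x, v, P, z, dt=1.0, q=1e-3, r=0.1):
    """One predict/update step.
    x, v : position and velocity estimate
    P    : 2x2 covariance as [[Pxx, Pxv], [Pvx, Pvv]]
    z    : position measurement
    """
    # Predict: x' = x + v*dt; covariance propagated and grown by q
    x = x + v * dt
    Pxx = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    Pxv = P[0][1] + dt * P[1][1]
    Pvv = P[1][1] + q
    # Update with position measurement z (measurement noise r)
    S = Pxx + r
    Kx, Kv = Pxx / S, Pxv / S          # Kalman gains
    innov = z - x
    x, v = x + Kx * innov, v + Kv * innov
    P = [[(1 - Kx) * Pxx, (1 - Kx) * Pxv],
         [Pxv - Kv * Pxx, Pvv - Kv * Pxv]]
    return x, v, P

# Track a target moving at +1 unit/step; the velocity estimate,
# never measured directly, converges toward 1.
x, v, P = 0.0, 0.0, [[1.0, 0.0], [0.0, 1.0]]
for t in range(1, 50):
    x, v, P = kf_step(x, v, P, z=float(t))
```

The same structure carries over to MonoSLAM's pose tracking, where the state holds camera position and orientation and the measurement is the projected AR-marker location.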

  18. A stereo vision method based on region segmentation

    International Nuclear Information System (INIS)

    Homma, K.; Fu, K.S.

    1984-01-01

    A stereo vision method based on segmented region information is presented in this paper. Regions with uniform image properties are segmented in the stereo images, and their shapes are represented by chain codes. Weighted metrics between the region chain codes are calculated to assess shape dissimilarities. From the minimum-weight transformation between codes, partial shape matches can be found by adjusting the weights for code deletion, insertion and substitution. Partial shape matching yields stereo correspondences on the region contours even when the images contain occlusion, segmentation noise and distortion. Depth interpolation is executed region by region, taking occlusion into account. A depth image of a real indoor scene is extracted as an application example of this method.
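
The minimum-weight transformation between chain codes is a weighted edit distance, computable by dynamic programming. The sketch below assumes 8-direction chain codes and illustrative weights for deletion, insertion and substitution (not the paper's values):

```python
# Weighted edit distance between two 8-direction chain codes, as a
# sketch of the region-contour matching described above. The weights
# w_del, w_ins, w_sub are illustrative tuning knobs.

def chain_code_distance(a, b, w_del=1.0, w_ins=1.0, w_sub=1.0):
    """Dynamic-programming edit distance; substitution cost is scaled
    by the angular difference between the two direction codes."""
    n, m = len(a), len(b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * w_del
    for j in range(1, m + 1):
        d[0][j] = j * w_ins
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # angular difference between directions, in 1/8 turns (0..4)
            diff = min((a[i-1] - b[j-1]) % 8, (b[j-1] - a[i-1]) % 8)
            d[i][j] = min(d[i-1][j] + w_del,
                          d[i][j-1] + w_ins,
                          d[i-1][j-1] + w_sub * diff / 4.0)
    return d[n][m]

# Identical contours match exactly; a contour differing by one slightly
# rotated step still yields a small distance, which is what makes
# partial shape matching robust to segmentation noise.
square = [0, 0, 2, 2, 4, 4, 6, 6]
```

Because nearby directions substitute cheaply, small contour distortions accumulate only a small total weight, while genuinely different shapes do not.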

  19. Refinement of facial reconstructive surgery by stereo-model planning.

    Science.gov (United States)

    Cheung, L K; Wong, M C M; Wong, L L S

    2002-10-01

    The development of rapid prototyping has evolved from crude milled models to laser-polymerized stereolithographic models of excellent accuracy. The technology was advanced further with the recent introduction of fused deposition modelling and a three-dimensional ink-jet printing technique for stereo-model fabrication. The concept of using a three-dimensional model to plan an operation has fascinated maxillofacial surgeons since its first application in grafting a skull defect in 1995, and it has since inspired many applications in the field of facial reconstructive surgery. Stereo-models may assist in the diagnosis of facial fractures, joint ankylosis and even impacted teeth. Surgery can be simulated prior to the operation for complex craniofacial syndromes, facial asymmetry and distraction osteogenesis. The stereo-model can be used to prepare a reconstruction plate or joint prosthesis. It is of enormous value in education and as a patient information tool when obtaining consent for surgery.

  20. A dual-adaptive support-based stereo matching algorithm

    Science.gov (United States)

    Zhang, Yin; Zhang, Yun

    2017-07-01

    Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (the Cross method), which does not work well across different images. To address this issue, this paper proposes a novel dual-adaptive-support (DAS) stereo matching method, which uses both the appearance and the shape information of a local region to segment supports automatically, and then integrates DAS-based cost aggregation with an absolute difference plus census transform cost, scanline optimization and disparity refinement to form a complete stereo matching system. The performance of the DAS method is evaluated on the Middlebury benchmark and compared with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, has fewer parameters and is suitable for parallel computing.
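
The absolute difference plus census transform cost named above can be sketched per pixel as follows; the cross-shaped census neighbourhood and the lambda constants are illustrative choices, not the paper's exact parameters:

```python
# Sketch of a per-pixel AD + census matching cost: absolute intensity
# difference combined with the Hamming distance of census signatures,
# each passed through the usual exponential robust function.
import math

def census(img, x, y):
    """4-bit census signature of pixel (x, y) over its 4-neighbourhood:
    one bit per neighbour, set when the neighbour is darker."""
    c = img[y][x]
    bits = 0
    for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        bits = (bits << 1) | (img[y + dy][x + dx] < c)
    return bits

def ad_census_cost(l, r, x, y, d, lam_ad=10.0, lam_census=30.0):
    """Combined AD + census cost for disparity d at pixel (x, y)."""
    ad = abs(l[y][x] - r[y][x - d])                      # intensity term
    ham = bin(census(l, x, y) ^ census(r, x - d, y)).count("1")
    return (1 - math.exp(-ad / lam_ad)) + (1 - math.exp(-ham / lam_census))
```

The census term compares local intensity orderings rather than raw values, which keeps the cost robust to radiometric differences between the two cameras; the AD term preserves sensitivity in textureless regions.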

  1. ROS-based ground stereo vision detection: implementation and experiments.

    Science.gov (United States)

    Hu, Tianjiang; Zhao, Boxin; Tang, Dengqing; Zhang, Daibing; Kong, Weiwei; Shen, Lincheng

    This article concentrates on an open-source implementation of flying-object detection in cluttered scenes, which is significant for ground stereo-aided autonomous landing of unmanned aerial vehicles. The ground stereo vision guidance system is presented with details of its architecture and workflow. The Chan-Vese detection algorithm is then implemented in the Robot Operating System (ROS) environment. A data-driven interactive scheme is developed to collect datasets for parameter tuning and performance evaluation. Outdoor flight experiments capture a dataset of stereo image sequences together with simultaneous data from the pan-and-tilt unit, onboard sensors and differential GPS. Experimental results using the collected dataset validate the effectiveness of the published ROS-based detection algorithm.

  2. Interaction of algorithm and implementation for analog VLSI stereo vision

    Science.gov (United States)

    Hakkarainen, J. M.; Little, James J.; Lee, Hae-Seung; Wyatt, John L., Jr.

    1991-07-01

    Design of a high-speed stereo vision system in analog VLSI technology is reported. The goal is to determine how the advantages of analog VLSI--small area, high speed, and low power-- can be exploited, and how the effects of its principal disadvantages--limited accuracy, inflexibility, and lack of storage capacity--can be minimized. Three stereo algorithms are considered, and a simulation study is presented to examine details of the interaction between algorithm and analog VLSI implementation. The Marr-Poggio-Drumheller algorithm is shown to be best suited for analog VLSI implementation. A CCD/CMOS stereo system implementation is proposed, capable of operation at 6000 image frame pairs per second for 48 X 48 images, and faster than frame rate operation on 256 X 256 binocular image pairs.

  3. MISR radiometric camera-by-camera Cloud Mask V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset. It is used to determine whether a scene is classified as clear or cloudy. A new parameter has...

  4. ROV seafloor surveys combining 5-cm lateral resolution multibeam bathymetry with color stereo photographic imagery

    Science.gov (United States)

    Caress, D. W.; Hobson, B.; Thomas, H. J.; Henthorn, R.; Martin, E. J.; Bird, L.; Rock, S. M.; Risi, M.; Padial, J. A.

    2013-12-01

    The Monterey Bay Aquarium Research Institute is developing a low altitude, high-resolution seafloor mapping capability that combines multibeam sonar with stereo photographic imagery. The goal is to obtain spatially quantitative, repeatable renderings of the seafloor with fidelity at scales of 5 cm or better from altitudes of 2-3 m. The initial test surveys using this sensor system are being conducted from a remotely operated vehicle (ROV). Ultimately we intend to field this survey system from an autonomous underwater vehicle (AUV). This presentation focuses on the current sensor configuration, methods for data processing, and results from recent test surveys. Bathymetry data are collected using a 400-kHz Reson 7125 multibeam sonar. This configuration produces 512 beams across a 135° wide swath; each beam has a 0.5° acrosstrack by 1.0° alongtrack angular width. At a 2-m altitude, the nadir beams have a 1.7-cm acrosstrack and 3.5 cm alongtrack footprint. Dual Allied Vision Technology GX1920 2.8 Mpixel color cameras provide color stereo photography of the seafloor. The camera housings have been fitted with corrective optics achieving a 90° field of view through a dome port. Illumination is provided by dual 100J xenon strobes. Position, depth, and attitude data are provided by a Kearfott SeaDevil Inertial Navigation System (INS) integrated with a 300 kHz RDI Doppler velocity log (DVL). A separate Paroscientific pressure sensor is mounted adjacent to the INS. The INS Kalman filter is aided by the DVL velocity and pressure data, achieving navigational drift rates less than 0.05% of the distance traveled during surveys. The sensors are mounted onto a toolsled fitted below MBARI's ROV Doc Ricketts with the sonars, cameras and strobes all pointed vertically down. During surveys the ROV flies at a 2-m altitude at speeds of 0.1-0.2 m/s. During a four-day R/V Western Flyer cruise in June 2013, we successfully collected multibeam and camera survey data from a 2-m altitude
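
The nadir beam footprints quoted above follow directly from the flying altitude and the angular beam widths; a short check reproduces the stated figures:

```python
# Footprint of a sonar beam on a flat seafloor directly below the
# vehicle: footprint = 2 * altitude * tan(beamwidth / 2).
import math

def beam_footprint(altitude_m, beamwidth_deg):
    return 2.0 * altitude_m * math.tan(math.radians(beamwidth_deg) / 2.0)

across = beam_footprint(2.0, 0.5)   # 0.5 deg across-track beam width
along  = beam_footprint(2.0, 1.0)   # 1.0 deg along-track beam width
# At a 2-m altitude these evaluate to about 1.7 cm and 3.5 cm,
# matching the figures quoted for the Reson 7125 in the abstract.
```

The same relation shows why halving the survey altitude roughly halves the footprint, which motivates the 2-3 m flying height for 5-cm fidelity.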

  5. Obstacle detection by stereo vision of fast correlation matching

    International Nuclear Information System (INIS)

    Jeon, Seung Hoon; Kim, Byung Kook

    1997-01-01

    Mobile robot navigation requires acquiring the positions of obstacles in real time, and a common way to perform this sensing is through stereo vision. In this paper, indoor images containing obstacles of various shapes are acquired by binocular vision. To obtain distances to obstacles from these stereo image data, the correspondence problem must be solved, i.e., for the projection of a surface region in one image, the corresponding region in the other image must be found. We present an improved correlation matching method that speeds up the detection of arbitrary obstacles. The result is faster, simpler matching, robustness to noise and improved precision. Experimental results under real surroundings are presented to demonstrate the performance. (author)
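
Correlation matching of the kind described searches, for each window in the left image, the horizontal offset in the right image that maximizes a correlation score. A minimal single-scanline sketch using normalized cross-correlation (window size and search range are illustrative, and this is not the paper's accelerated variant):

```python
# Correlation-based stereo matching along one scanline: for a window
# centred at column x in the left row, find the disparity d whose
# right-image window maximizes normalized cross-correlation (NCC).
import math

def ncc(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def best_disparity(left_row, right_row, x, win=2, max_d=8):
    """Disparity at column x of a rectified scanline pair."""
    patch = left_row[x - win:x + win + 1]
    scores = []
    for d in range(0, max_d + 1):
        if x - d - win < 0:
            break
        cand = right_row[x - d - win:x - d + win + 1]
        scores.append((ncc(patch, cand), d))
    return max(scores)[1]

# A feature at column 10 in the left row appears at column 7 in the
# right row, i.e. at disparity 3; depth then follows from disparity.
left  = [0] * 8 + [1, 5, 9, 5, 1] + [0] * 8
right = [0] * 5 + [1, 5, 9, 5, 1] + [0] * 11
```

NCC is invariant to affine intensity changes between the two cameras, which is one source of the noise robustness the abstract claims.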

  6. Implementation of a Self-Consistent Stereo Processing Chain for 3D Stereo Reconstruction of the Lunar Landing Sites

    Science.gov (United States)

    Tasdelen, E.; Willner, K.; Unbekannt, H.; Glaeser, P.; Oberst, J.

    2014-04-01

    The department for Planetary Geodesy at Technical University Berlin is developing routines for photogrammetric processing of planetary image data to derive 3D representations of planetary surfaces. The Integrated Software for Imagers and Spectrometers (ISIS) software (Anderson et al., 2004), developed by USGS, Flagstaff, is readily available, open source, and very well documented. Hence, ISIS was chosen as a prime processing platform and tool kit. However, ISIS does not provide a full photogrammetric stereo processing chain. Several components like image matching, bundle block adjustment (until recently) or digital terrain model (DTM) interpolation from 3D object points are missing. Our group aims to complete this photogrammetric stereo processing chain by implementing the missing components, taking advantage of already existing ISIS classes and functionality. We report here on the current status of the development of our stereo processing chain and its first application on the Lunar Apollo landing sites.

  7. SVMT: a MATLAB toolbox for stereo-vision motion tracking of motor reactivity.

    Science.gov (United States)

    Vousdoukas, M I; Perakakis, P; Idrissi, S; Vila, J

    2012-10-01

    This article presents a Matlab-based stereo-vision motion tracking system (SVMT) for the detection of human motor reactivity elicited by sensory stimulation. It is a low-cost, non-intrusive system supported by Graphical User Interface (GUI) software, and has been successfully tested and integrated with a broad array of physiological recording devices at the Human Physiology Laboratory of the University of Granada. The SVMT GUI software handles data in Matlab and ASCII formats. Internal functions perform lens-distortion correction, camera-geometry definition and feature matching, as well as data clustering and filtering, to extract 3D motion paths of specific body areas. System validation showed geo-rectification errors below 0.5 mm, while the feature-matching and motion-path extraction procedures were successfully validated against manual tracking, with RMS errors typically below 2% of the movement range. Applying the system in a psychophysiological experiment designed to elicit a startle motor response through intense and unexpected acoustic stimuli provided reliable data probing dynamical features of motor responses and habituation to repeated stimulus presentations. The stereo-geolocation and motion tracking performance of SVMT was further validated through comparisons with surface EMG measurements of eyeblink startle, which clearly demonstrate the ability of SVMT to track subtle body movements such as those induced by intense acoustic stimuli. Finally, SVMT provides an efficient solution for assessing motor reactivity not only in controlled laboratory settings, but also in more open, ecological environments. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  8. Body worn camera

    Science.gov (United States)

    Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.

    2017-11-01

    A body-worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence at crime scenes. It helps prevent and resolve complaints brought by members of the public, and strengthens police transparency, performance and accountability. The main constraints on this type of system are video format, resolution, frame rate and audio quality. This system records video in .mp4 format at 1080p resolution and 30 frames per second. Another important aspect in designing such a system is the amount of power it requires, as battery management becomes critical. The main design challenges are the size of the video; audio for the video; combining audio and video and saving the result in .mp4 format; the battery size required for 8 hours of continuous recording; and security. A prototype of this system is implemented using a Raspberry Pi Model B.

  9. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side, or offset by one-half of the detector cross section, around a patient area to detect radiation therefrom. Each ring contains a plurality of scintillation detectors positioned around an inner circumference, with a septum ring extending inwardly from the inner circumference along each outer edge of each ring. An additional septum ring is positioned in the middle of each ring of detectors, parallel to the other septa, whereby the inward extent of all the septa may be reduced by one-half and the number of detectors required in each ring is reduced. The additional septa reduce the cost of the positron camera and improve its performance.

  10. The NEAT Camera Project

    Science.gov (United States)

    Newburn, Ray L., Jr.

    1995-01-01

    The NEAT (Near Earth Asteroid Tracking) camera system consists of a camera head with a 6.3 cm square 4096 x 4096 pixel CCD, fast electronics, and a Sun Sparc 20 data and control computer with dual CPUs, 256 Mbytes of memory, and 36 Gbytes of hard disk. The system was designed for optimum use with an Air Force GEODSS (Ground-based Electro-Optical Deep Space Surveillance) telescope. The GEODSS telescopes have 1 m f/2.15 objectives of the Ritchey-Chretian type, designed originally for satellite tracking. Installation of NEAT began July 25 at the Air Force Facility on Haleakala, a 3000 m peak on Maui in Hawaii.

  11. Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features

    NARCIS (Netherlands)

    Abramoff, M.D.; Alward, W.L.M.; Greenlee, E.C.; Shuba, L.; Kim, Chan Y.; Fingert, J.H.; Kwon, Y.H.

    2007-01-01

    PURPOSE. To evaluate a novel automated segmentation algorithm for cup-to-disc segmentation from stereo color photographs of patients with glaucoma for the measurement of glaucoma progression. METHODS. Stereo color photographs of the optic disc were obtained by using a fixed stereo-base fundus

  12. Gamma camera display system

    International Nuclear Information System (INIS)

    Stout, K.J.

    1976-01-01

    A gamma camera having an array of photomultipliers coupled via pulse shaping circuitry and a resistor weighting circuit to a display for forming an image of a radioactive subject is described. A linearizing circuit is coupled to the weighting circuit, the linearizing circuit including a nonlinear feedback circuit with diode coupling to the weighting circuit for linearizing the correspondence between points of the display and points of the subject. 4 Claims, 5 Drawing Figures

  13. Scanning gamma camera

    International Nuclear Information System (INIS)

    Engdahl, L.W.; Batter, J.F. Jr.; Stout, K.J.

    1977-01-01

    A scanning system for a gamma camera providing for the overlapping of adjacent scan paths is described. A collimator mask having tapered edges provides for a graduated reduction in intensity of radiation received by a detector thereof, the reduction in intensity being graduated in a direction normal to the scanning path to provide a blending of images of adjacent scan paths. 31 claims, 15 figures

  14. Influence of Digital Camera Errors on the Photogrammetric Image Processing

    Science.gov (United States)

    Sužiedelytė-Visockienė, Jūratė; Bručas, Domantas

    2009-01-01

    The paper deals with the calibration of the digital camera Canon EOS 350D, often used for photogrammetric 3D digitization and measurement of industrial and construction-site objects. During the calibration, data on the optical and electronic parameters influencing image distortion were obtained, such as the correction of the principal point, the focal length of the objective, and radial symmetric and asymmetric distortions. The calibration was performed with the Tcc software, which implements Chebyshev polynomials, using a special test field with marks whose coordinates are precisely known. The main task of the research is to determine how the camera calibration parameters influence the processing of images, i.e. the creation of the geometric model, the results of triangulation calculations and stereo-digitization. Two photogrammetric projects were created for this task: in the first, non-corrected images were used; in the second, images corrected for the optical errors of the camera obtained during calibration. The results of the image-processing analysis are shown in figures and tables, and conclusions are given.
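
Applying calibrated radial-distortion parameters to image coordinates can be sketched with the standard two-coefficient radial model used throughout photogrammetry; the concrete k1, k2 values below are illustrative, not Canon EOS 350D calibration results:

```python
# Correcting radially distorted image coordinates with the standard
# two-coefficient radial model. Coefficient values are illustrative.

def undistort_point(x, y, cx, cy, k1, k2):
    """Correct image coordinates (x, y) given the principal point
    (cx, cy) and radial coefficients k1, k2, all in the same units."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy                      # squared radial distance
    factor = 1.0 + k1 * r2 + k2 * r2 * r2      # radial correction factor
    return cx + dx * factor, cy + dy * factor

# With zero coefficients the point is unchanged; with k1 > 0 a point
# away from the principal point is pushed radially outward, which is
# why uncorrected images bias triangulation and stereo-digitization.
```

This is exactly the kind of correction that separates the two projects compared in the study: the second project applies it before processing, the first does not.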

  15. 3D MODELING OF ARCHITECTURE BY EDGE-MATCHING AND INTEGRATING THE POINT CLOUDS OF LASER SCANNER AND THOSE OF DIGITAL CAMERA

    Directory of Open Access Journals (Sweden)

    N. Kochi

    2012-07-01

    We have been developing a stereo-matching method and system based on digital photogrammetry, using a digital camera for 3D measurement of various objects. We are also developing technology to process the enormous 3D point clouds obtained by a terrestrial laser scanner (TLS). In this work, we have developed a technique to produce a surface model by detecting 3D edges in the stereo images of a digital camera. We then register the 3D data obtained from the stereo images with the 3D edge data detected in the TLS point cloud, thereby fusing the 3D data of the camera and the TLS. The basic idea is to take stereo pictures with a digital camera around the areas the scanner cannot reach because of occlusion. The camera, with digital photogrammetry, can acquire data on complicated and hidden areas instantly, minimizing the chance of noise. The camera data are then integrated with the scanner data to produce, automatically, a highly complete model. In this presentation we therefore show (1) how to detect the 3D edges in the photo images and in the scanner's point cloud, (2) how to register both sets of 3D edges to produce a unified model, and (3) how to assess the accuracy and speed of the analysis process, which turned out to be quite satisfactory.

  16. A Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    Science.gov (United States)

    Buyens, Wim; van Dijk, Bas; Wouters, Jan; Moonen, Marc

    2015-10-01

    Listening to music is still one of the more challenging aspects of using a cochlear implant (CI) for most users. Simple musical structures, a clear rhythm/beat, and lyrics that are easy to follow are among the top factors contributing to music appreciation for CI users. Modifying the audio mix of complex music potentially improves music enjoyment in CI users. A stereo music preprocessing scheme is described in which vocals, drums, and bass are emphasized based on the representation of the harmonic and the percussive components in the input spectrogram, combined with the spatial allocation of instruments in typical stereo recordings. The scheme is assessed with postlingually deafened CI subjects (N = 7) using pop/rock music excerpts with different complexity levels. The scheme is capable of modifying relative instrument level settings, with the aim of improving music appreciation in CI users, and allows individual preference adjustments. The assessment with CI subjects confirms the preference for more emphasis on vocals, drums, and bass as offered by the preprocessing scheme, especially for songs with higher complexity. The stereo music preprocessing scheme has the potential to improve music enjoyment in CI users by modifying the audio mix in widespread (stereo) music recordings. Since music enjoyment in CI users is generally poor, this scheme can assist the music listening experience of CI users as a training or rehabilitation tool.
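
The harmonic/percussive representation the scheme builds on exploits a simple structural fact: in a magnitude spectrogram, harmonic energy (vocals, bass) is smooth along time while percussive energy (drums) is smooth along frequency, so median filtering in each direction separates them. The sketch below shows that standard median-filtering idea, not the paper's exact implementation:

```python
# Median-filtering sketch of harmonic/percussive separation on a tiny
# magnitude spectrogram spec[freq][time]. Window size k is illustrative.
import statistics

def median_filter_rows(spec, k=3):
    """Median-filter each row over a sliding window of length k."""
    h = k // 2
    return [[statistics.median(row[max(0, t - h):t + h + 1])
             for t in range(len(row))] for row in spec]

def transpose(spec):
    return [list(col) for col in zip(*spec)]

# A sustained tone in frequency bin 1 plus a broadband click at time 2:
spec = [[0, 0, 9, 0, 0],
        [5, 5, 9, 5, 5],
        [0, 0, 9, 0, 0]]
harmonic = median_filter_rows(spec)                           # along time
percussive = transpose(median_filter_rows(transpose(spec)))   # along freq
```

Filtering along time keeps the sustained tone and suppresses the click; filtering along frequency does the opposite, giving two components whose relative levels a preprocessing scheme can then rebalance.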

  17. Solving the uncalibrated photometric stereo problem using total variation

    DEFF Research Database (Denmark)

    Quéau, Yvain; Lauze, Francois Bernard; Durou, Jean-Denis

    2013-01-01

    In this paper we propose a new method to solve the problem of uncalibrated photometric stereo, making very weak assumptions on the properties of the scene to be reconstructed. Our goal is to solve the generalized bas-relief ambiguity (GBR) by performing a total variation regularization of both...

  18. A comparative study of fast dense stereo vision algorithms

    NARCIS (Netherlands)

    Sunyoto, H.; Mark, W. van der; Gavrila, D.M.

    2004-01-01

    With recent hardware advances, real-time dense stereo vision becomes increasingly feasible for general-purpose processors. This has important benefits for the intelligent vehicles domain, alleviating object segmentation problems when sensing complex, cluttered traffic scenes. In this paper, we

  19. Characterising atmospheric optical turbulence using stereo-SCIDAR

    Science.gov (United States)

    Osborn, James; Butterley, Tim; Föhring, Dora; Wilson, Richard

    2015-04-01

    Stereo-SCIDAR (SCIntillation Detection and Ranging) is a development to the well known SCIDAR method for characterisation of the Earth's atmospheric optical turbulence. Here we present some interesting capabilities, comparisons and results from a recent campaign on the 2.5 m Isaac Newton Telescope on La Palma.

  20. The zone of comfort: Predicting visual discomfort with stereo displays

    Science.gov (United States)

    Shibata, Takashi; Kim, Joohwan; Hoffman, David M.; Banks, Martin S.

    2012-01-01

    Recent increased usage of stereo displays has been accompanied by public concern about potential adverse effects associated with prolonged viewing of stereo imagery. There are numerous potential sources of adverse effects, but we focused on how vergence–accommodation conflicts in stereo displays affect visual discomfort and fatigue. In one experiment, we examined the effect of viewing distance on discomfort and fatigue. We found that conflicts of a given dioptric value were slightly less comfortable at far than at near distance. In a second experiment, we examined the effect of the sign of the vergence–accommodation conflict on discomfort and fatigue. We found that negative conflicts (stereo content behind the screen) are less comfortable at far distances and that positive conflicts (content in front of screen) are less comfortable at near distances. In a third experiment, we measured phoria and the zone of clear single binocular vision, which are clinical measurements commonly associated with correcting refractive error. Those measurements predicted susceptibility to discomfort in the first two experiments. We discuss the relevance of these findings for a wide variety of situations including the viewing of mobile devices, desktop displays, television, and cinema. PMID:21778252
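
The dioptric value of a vergence-accommodation conflict is simply the difference between the reciprocals of the distance at which stereo content appears (vergence) and the screen distance (accommodation). A small sketch, with illustrative distances rather than the experiments' actual conditions:

```python
# Dioptric vergence-accommodation conflict: the difference between the
# reciprocal of the simulated content distance (vergence demand) and
# the reciprocal of the screen distance (accommodation demand).

def conflict_diopters(screen_dist_m, content_dist_m):
    """Signed conflict; positive when content is in front of the screen."""
    return 1.0 / content_dist_m - 1.0 / screen_dist_m

# The same 0.5 m of content offset is a far larger dioptric conflict on
# a near desktop display than in a cinema-like far-viewing situation:
near = conflict_diopters(0.5, 0.33)   # desktop, content ~0.17 m in front
far  = conflict_diopters(10.0, 9.5)   # cinema-like, content 0.5 m in front
```

This reciprocal relationship is why the abstract's findings differ between mobile devices, desktop displays, television and cinema: the same on-screen disparity corresponds to very different dioptric conflicts at different viewing distances.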

  1. VPython: Python plus Animations in Stereo 3D

    Science.gov (United States)

    Sherwood, Bruce

    2004-03-01

    Python is a modern object-oriented programming language. VPython (http://vpython.org) is a combination of Python (http://python.org), the Numeric module from LLNL (http://www.pfdubois.com/numpy), and the Visual module created by David Scherer, all of which have been under continuous development as open source projects. VPython makes it easy to write programs that generate real-time, navigable 3D animations. The Visual module includes a set of 3D objects (sphere, cylinder, arrow, etc.), tools for creating other shapes, and support for vector algebra. The 3D renderer runs in a parallel thread, and animations are produced as a side effect of computations, freeing the programmer to concentrate on the physics. Applications include educational and research visualization. In the Fall of 2003 Hugh Fisher at the Australian National University, John Zelle at Wartburg College, and I contributed to a new stereo capability of VPython. By adding a single statement to an existing VPython program, animations can be viewed in true stereo 3D. One can choose several modes: active shutter glasses, passive polarized glasses, or colored glasses (e.g. red-cyan). The talk will demonstrate the new stereo capability and discuss the pros and cons of various schemes for display of stereo 3D for a large audience. Supported in part by NSF grant DUE-0237132.

  2. Real-Time Dense Stereo for Intelligent Vehicles

    NARCIS (Netherlands)

    Gavrila, D.M.; Mark, W. van der

    2006-01-01

    Stereo vision is an attractive passive sensing technique for obtaining three-dimensional (3-D) measurements. Recent hardware advances have given rise to a new class of real-time dense disparity estimation algorithms. This paper examines their suitability for intelligent vehicle (IV) applications. In

  3. 3D Stereo Visualization for Mobile Robot Tele-Guide

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2006-01-01

    learning and decision performance. Works in the literature have demonstrated how stereo vision contributes to improve perception of some depth cues often for abstract tasks, while little can be found about the advantages of stereoscopic visualization in mobile robot tele-guide applications. This work...

  4. Transient full-field vibration measurement using spectroscopical stereo photogrammetry.

    Science.gov (United States)

    Yue, Kaiduan; Li, Zhongke; Zhang, Ming; Chen, Shan

    2010-12-20

    In contrast with other vibration measurement methods, a novel spectroscopic photogrammetric approach is proposed. Two colored light filters and a color CCD camera are used to perform the function of two traditional cameras. A new calibration method is then presented; it focuses on the vibrating object rather than the camera and is more accurate than traditional camera calibration. The test results show an accuracy of 0.02 mm.

  5. Robotic Arm Camera on Mars, with Lights Off

    Science.gov (United States)

    2008-01-01

    This approximate color image is a view of NASA's Phoenix Mars Lander's Robotic Arm Camera (RAC) as seen by the lander's Surface Stereo Imager (SSI). This image was taken on the afternoon of the 116th Martian day, or sol, of the mission (September 22, 2008). The RAC is about 8 centimeters (3 inches) tall. The SSI took images of the RAC to test both the light-emitting diodes (LEDs) and cover function. Individual images were taken in three SSI filters that correspond to the red, green, and blue LEDs one at a time. This yields proper coloring when imaging Phoenix's surrounding Martian environment. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  6. 'McMurdo' Panorama from Spirit's 'Winter Haven' (Stereo)

    Science.gov (United States)

    2006-01-01

    This 360-degree view, called the 'McMurdo' panorama, comes from the panoramic camera (Pancam) on NASA's Mars Exploration Rover Spirit. From April through October 2006, Spirit has stayed on a small hill known as 'Low Ridge.' There, the rover's solar panels are tilted toward the sun to maintain enough solar power for Spirit to keep making scientific observations throughout the winter on southern Mars. This view of the surroundings from Spirit's 'Winter Haven' is presented as a stereo anaglyph to show the scene three-dimensionally when viewed through red-blue glasses (with the red lens on the left). Oct. 26, 2006, marks Spirit's 1,000th sol of what was planned as a 90-sol mission. (A sol is a Martian day, which lasts 24 hours, 39 minutes, 35 seconds). The rover has lived through the most challenging part of its second Martian winter. Its solar power levels are rising again. Spring in the southern hemisphere of Mars will begin in early 2007. Before that, the rover team hopes to start driving Spirit again toward scientifically interesting places in the 'Inner Basin' and 'Columbia Hills' inside Gusev crater. The McMurdo panorama is providing team members with key pieces of scientific and topographic information for choosing where to continue Spirit's exploration adventure. The Pancam began shooting component images of this panorama during Spirit's sol 814 (April 18, 2006) and completed the part shown here on sol 932 (Aug. 17, 2006). The panorama was acquired using all 13 of the Pancam's color filters, using lossless compression for the red and blue stereo filters, and only modest levels of compression on the remaining filters. The overall panorama consists of 1,449 Pancam images and represents a raw data volume of nearly 500 megabytes. It is thus the largest, highest-fidelity view of Mars acquired from either rover. Additional photo coverage of the parts of the rover deck not shown here was completed on sol 980 (Oct. 5 , 2006). 
The team is completing the processing and

  7. Calibration of high resolution digital camera based on different photogrammetric methods

    Science.gov (United States)

    Hamid, N. F. A.; Ahmad, A.

    2014-02-01

    This paper presents a method of calibrating a high-resolution digital camera based on two different configurations, stereo and convergent. Both configurations are used in laboratory and field calibration. Laboratory calibration is based on a 3D test field in which a calibration plate of dimension 0.4 m × 0.4 m, with a grid of targets at different heights, is used. Field calibration uses the same 3D test-field concept, comprising 81 target points located on flat ground over a 9 m × 9 m area. In this study, a non-metric high-resolution digital camera, a Canon PowerShot SX230 HS, was calibrated in the laboratory and in the field using different configurations for data acquisition. The aim of the calibration is to investigate whether the camera's internal parameters, such as the focal length, principal point and others, remain the same between configurations or change. In the laboratory, a scale bar is placed in the test field for scaling the images, and approximate coordinates are used for the calibration process. A similar method is utilized in the field calibration. For both test fields, the digital images were acquired within a short period using stereo and convergent configurations. For field calibration, aerial digital images were acquired using an unmanned aerial vehicle (UAV) system. All the images were processed using photogrammetric calibration software. Different calibration results were obtained for the laboratory and field calibrations, and their accuracy is evaluated based on standard deviation. In general, for photogrammetric and other applications the digital camera must be calibrated to obtain accurate measurements or results. The best method of calibration depends on the type of application. 
Finally, for most applications the digital camera is calibrated on site, hence, field calibration is the best method of calibration and could be employed for obtaining accurate

  8. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba began manufacturing black-and-white radiation-resistant camera tubes employing non-browning faceplate glass for ITV cameras used in nuclear power plants a long time ago. Now, in response to increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented herein are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  9. Numerical Control Machine Tool Fault Diagnosis Using Hybrid Stationary Subspace Analysis and Least Squares Support Vector Machine with a Single Sensor

    Directory of Open Access Journals (Sweden)

    Chen Gao

    2017-03-01

    Full Text Available Tool fault diagnosis in numerical control (NC) machines plays a significant role in ensuring manufacturing quality. However, current methods of tool fault diagnosis lack accuracy. Therefore, in the present paper, a fault diagnosis method was proposed based on stationary subspace analysis (SSA) and least squares support vector machine (LS-SVM) using only a single sensor. First, after the dimensionality of the vibration signal observed by a single sensor was expanded by the phase-space reconstruction technique, SSA was used to extract stationary and non-stationary sources from the multi-dimensional signals, without the need for independence and without prior information on the source signals. Subsequently, 10 dimensionless parameters in the time-frequency domain were calculated for the non-stationary sources to generate samples to train the LS-SVM. Finally, the measured vibration signals from tools of an unknown state and their non-stationary sources were separated by SSA to serve as test samples for the trained SVM. The experimental validation demonstrated that the proposed method has better diagnosis accuracy than three previous methods based on LS-SVM alone, principal component analysis with LS-SVM, or SSA with linear discriminant analysis.
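
    The phase-space reconstruction step named above can be sketched as a plain time-delay embedding of a 1-D signal; the embedding dimension, delay, and toy data below are illustrative assumptions, not values from the paper:

    ```python
    def delay_embed(signal, dim, tau):
        """Time-delay embedding: map a 1-D signal into dim-dimensional
        state vectors sampled every tau steps, expanding a single-sensor
        signal into a multi-dimensional trajectory matrix."""
        n = len(signal) - (dim - 1) * tau
        if n <= 0:
            raise ValueError("signal too short for this embedding")
        # Row i is [x[i], x[i + tau], ..., x[i + (dim - 1) * tau]]
        return [[signal[i + j * tau] for j in range(dim)] for i in range(n)]

    x = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]  # toy periodic "vibration"
    emb = delay_embed(x, dim=3, tau=2)
    print(len(emb), emb[0])  # 4 [0.0, 0.0, 0.0]
    ```

    The embedded rows would then be the multi-dimensional input handed to SSA for source separation.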

  10. The PLATO camera

    Science.gov (United States)

    Laubier, D.; Bodin, P.; Pasquier, H.; Fredon, S.; Levacher, P.; Vola, P.; Buey, T.; Bernardi, P.

    2017-11-01

    PLATO (PLAnetary Transits and Oscillation of stars) is a candidate for the M3 Medium-size mission of the ESA Cosmic Vision programme (2015-2025 period). It is aimed at Earth-size and Earth-mass planet detection in the habitable zone of bright stars and their characterisation using the transit method and the asteroseismology of their host star. That means observing more than 100 000 stars brighter than magnitude 11, and more than 1 000 000 brighter than magnitude 13, with a long continuous observing time for 20 % of them (2 to 3 years). This yields a need for unusually long-term signal stability. For the brighter stars, the noise requirement is less than 34 ppm·hr^(-1/2), from a frequency of 40 mHz down to 20 μHz, including all sources of noise, for instance the motion of the star images on the detectors and frequency beatings. Those extremely tight requirements result in a payload consisting of 32 synchronised, high-aperture, wide-field-of-view cameras thermally regulated down to -80°C, whose data are combined to increase the signal-to-noise performance. They are split into 4 different subsets pointing in 4 directions to widen the total field of view; stars in the centre of that field of view are observed by all 32 cameras. Two extra cameras are used with color filters and provide pointing measurements to the spacecraft Attitude and Orbit Control System (AOCS) loop. The satellite is orbiting the Sun at the L2 Lagrange point. This paper presents the optical, electronic and electrical, thermal and mechanical designs devised to achieve those requirements, and the results from breadboards developed for the optics, the focal plane, the power supply and the video electronics.

  11. Improved method for stereo vision-based human detection for a mobile robot following a target person

    Directory of Open Access Journals (Sweden)

    Ali, Badar

    2015-05-01

    Full Text Available Interaction between humans and robots is a fundamental need for assistive and service robots. Their ability to detect and track people is a basic requirement for interaction with human beings. This article presents a new approach to human detection and targeted person tracking by a mobile robot. Our work is based on earlier methods that used stereo vision-based tracking linked directly with Hu moment-based detection. The earlier technique was based on the assumption that only one person, the target person, is present in the environment, and it was not able to handle more than this one person. In our novel method, we solved this problem by using the Haar-based human detection method and included a target person selection step before initialising tracking. Furthermore, rather than linking the Kalman filter directly with human detection, we implemented the tracking method before the Kalman filter-based estimation. We used the Pioneer 3AT robot, equipped with a stereo camera and sonars, as the test platform.

  12. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector planes positioned side-by-side around a patient area to detect radiation. Each plane includes a plurality of photomultiplier tubes, and at least two rows of scintillation crystals on each photomultiplier tube extend across to adjacent photomultiplier tubes for detecting radiation from the patient area. Each row of crystals on each photomultiplier tube is offset from the other rows of crystals, and the area of each crystal on each tube in each row is different than the area of the crystals on the tube in other rows, for detecting which crystal is actuated and allowing the detector to detect more inter-plane slices. The crystals are offset by an amount equal to the length of the crystal divided by the number of rows. The rows of crystals on opposite sides of the patient may be rotated 90 degrees relative to each other.

  13. Junocam: Juno's Outreach Camera

    Science.gov (United States)

    Hansen, C. J.; Caplinger, M. A.; Ingersoll, A.; Ravine, M. A.; Jensen, E.; Bolton, S.; Orton, G.

    2017-11-01

    Junocam is a wide-angle camera designed to capture the unique polar perspective of Jupiter offered by Juno's polar orbit. Junocam's four-color images include the best spatial resolution ever acquired of Jupiter's cloudtops. Junocam will look for convective clouds and lightning in thunderstorms and derive the heights of the clouds. Junocam will support Juno's radiometer experiment by identifying any unusual atmospheric conditions such as hotspots. Junocam is on the spacecraft explicitly to reach out to the public and share the excitement of space exploration. The public is an essential part of our virtual team: amateur astronomers will supply ground-based images for use in planning, the public will weigh in on which images to acquire, and the amateur image processing community will help process the data.

  14. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera and prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possible exposed condition of the isotope source and prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  15. Bathymetric Structure from Motion Photogrammetry: Extracting stream bathymetry from multi-view stereo photogrammetry

    Science.gov (United States)

    Dietrich, J. T.

    2016-12-01

    Stream bathymetry is a critical variable in a number of river science applications. In larger rivers, bathymetry can be measured with instruments such as sonar (single or multi-beam), bathymetric airborne LiDAR, or acoustic Doppler current profilers. However, in smaller streams with depths less than 2 meters, bathymetry is one of the more difficult variables to map at high resolution. Optical remote sensing techniques offer several potential solutions for collecting high-resolution bathymetry. In this research, I focus on direct photogrammetric measurements of bathymetry using multi-view stereo photogrammetry, specifically Structure from Motion (SfM). The main barrier to accurate bathymetric mapping with any photogrammetric technique is correcting for the refraction of light as it passes between the two different media (air and water), which causes water depths to appear shallower than they are. I propose and test an iterative approach that calculates a series of refraction correction equations for every point/camera combination in a SfM point cloud. This new method is meant to address shortcomings of other correction techniques and works within the current preferred method for SfM data collection, oblique and highly convergent photographs. The multi-camera refraction correction presented here produces bathymetric datasets with accuracies of 0.02% of the flying height and precisions of 0.1% of the flying height. This methodology, like many fluvial remote sensing methods, will only work under ideal conditions (e.g. clear water), but it provides an additional tool for collecting high-resolution bathymetric datasets for a variety of river, coastal, and estuary systems.
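
    The air-water refraction effect described above can be illustrated with a textbook small-angle correction. This is a sketch only: the paper's method iterates over every point/camera pair, which is not reproduced here, and the refractive index is an assumed value for clear water:

    ```python
    import math

    N_WATER = 1.337  # refractive index of clear water (assumed)

    def correct_depth(apparent_depth, incidence_deg=0.0, n=N_WATER):
        """Refraction makes water depths look shallower than they are.
        At nadir the true depth is roughly apparent_depth * n; for an
        oblique ray, Snell's law gives the underwater angle and the
        correction scales by the ratio of the tangents of the two angles."""
        theta_air = math.radians(incidence_deg)
        theta_w = math.asin(math.sin(theta_air) / n)  # Snell's law
        if incidence_deg == 0.0:
            return apparent_depth * n                 # nadir: h = h' * n
        return apparent_depth * math.tan(theta_air) / math.tan(theta_w)

    print(round(correct_depth(1.0), 3))        # nadir: 1.337
    print(round(correct_depth(1.0, 30.0), 3))  # oblique view: slightly larger factor
    ```

    For small angles the tangent ratio collapses to the refractive index, which is why the nadir case is a simple multiplication.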

  16. Dynamic Trajectory Extraction from Stereo Vision Using Fuzzy Clustering

    Science.gov (United States)

    Onishi, Masaki; Yoda, Ikushi

    In recent years, many human-tracking methods have been proposed in order to analyze human dynamic trajectories. These are general technologies applicable to various fields, such as customer purchase analysis in a shopping environment and safety control at a (railroad) crossing. In this paper, we present a new approach for tracking human positions by stereo image. We use the framework of two-step clustering, with the k-means method and fuzzy clustering, to detect human regions. In the initial clustering, the k-means method quickly forms middle clusters from objective features extracted by stereo vision. In the final clustering, the fuzzy c-means method clusters the middle clusters into human regions based on their attributes. By expressing ambiguity through fuzzy clustering, our proposed method clusters correctly even when many people are close to each other. The validity of our technique was evaluated in an experiment extracting the trajectories of doctors and nurses in the emergency room of a hospital.
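
    The two-step clustering framework named above can be sketched as hard k-means followed by fuzzy c-means membership computation. This is a toy illustration on made-up 2-D points, not the paper's stereo features; the fuzziness exponent m = 2 is an assumed default:

    ```python
    import random

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def mean(pts):
        return [sum(coord) / len(pts) for coord in zip(*pts)]

    def kmeans(points, k, iters=20, seed=0):
        """Step 1: fast hard clustering into 'middle clusters'."""
        centers = random.Random(seed).sample(points, k)
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for p in points:
                groups[min(range(k), key=lambda j: dist2(p, centers[j]))].append(p)
            centers = [mean(g) if g else centers[i] for i, g in enumerate(groups)]
        return centers

    def fuzzy_memberships(points, centers, m=2.0):
        """Step 2: fuzzy c-means membership degrees, expressing how
        ambiguously each point belongs to each region (rows sum to 1)."""
        u = []
        for p in points:
            d = [max(dist2(p, c), 1e-12) ** 0.5 for c in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        return u

    pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9), (2.5, 2.5)]
    centers = kmeans(pts, 2)
    u = fuzzy_memberships(pts, centers)
    for p, row in zip(pts, u):
        print(p, [round(v, 2) for v in row])  # the point midway between the
                                              # groups gets split membership
    ```

    The split membership of the middle point is the ambiguity that fuzzy clustering preserves when people stand close together.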

  17. Field study of sound exposure by personal stereo

    DEFF Research Database (Denmark)

    Ordoñez, Rodrigo Pizarro; Reuter, Karen; Hammershøi, Dorte

    2006-01-01

    A number of large-scale studies suggest that the exposure levels used with personal stereo systems should raise concern. High levels can be produced by most commercially available mp3 players, and they are generally used in high background noise levels (i.e., while in a bus or train). A field study on young people's habitual sound exposure to personal stereos has been carried out using a measurement method according to the principles of ISO 11904-2:2004. Additionally, the state of their hearing has also been assessed. This presentation deals with the methodological aspects relating to the quantification of habitual use, estimation of listening levels and exposure levels, and assessment of their state of hearing, by either threshold determination or OAE measurement, with a special view to the general validity of the results (uncertainty factors and their magnitude).

  18. Comparative morphometry of facial surface models obtained from a stereo vision system in a healthy population

    Science.gov (United States)

    López, Leticia; Gastélum, Alfonso; Chan, Yuk Hin; Delmas, Patrice; Escorcia, Lilia; Márquez, Jorge

    2014-11-01

    Our goal is to obtain three-dimensional measurements of craniofacial morphology in a healthy population, using standard landmarks established by a physical-anthropology specialist and picked from computer reconstructions of the face of each subject. To do this, we designed a multi-stereo vision system that will be used to create a database of human face surfaces from a healthy population, for eventual applications in medicine, forensic sciences and anthropology. The acquisition process consists of obtaining the depth map information from three points of view; each depth map is obtained from a calibrated pair of cameras. The depth maps are used to build a complete, frontal, triangular-surface representation of the subject's face. The triangular surface is used to locate the landmarks, and the measurements are analyzed with a MATLAB script. The classification of the subjects was done with the aid of a specialist anthropologist who defines specific subject indices, according to the lengths, areas, ratios, etc., of the different structures and the relationships among facial features. We studied a healthy population, and the indices from this population will be used to obtain representative averages that later help with the study and classification of possible pathologies.

  19. Project Report: Reducing Color Rivalry in Imagery for Conjugated Multiple Bandpass Filter Based Stereo Endoscopy

    Science.gov (United States)

    Ream, Allen

    2011-01-01

    A pair of conjugated multiple bandpass filters (CMBF) can be used to create spatially separated pupils in a traditional lens and imaging sensor system, allowing for the passive capture of stereo video. This method is especially useful for surgical endoscopy, where smaller cameras are needed to provide ample room for manipulating tools while also granting improved visualizations of scene depth. The significant issue in this process is that, due to the complementary nature of the filters, the colors seen through each filter do not match each other, and also differ from colors as seen under a white illumination source. A color correction model was implemented that included optimized filter selection, such that the degree of necessary post-processing correction was minimized, and a chromatic adaptation transformation that attempted to correct the tristimulus values of the imaged colors based on the principle of color constancy. Due to fabrication constraints, only dual bandpass filters were feasible. The theoretical average color error after correction between these filters was still above the fusion limit, meaning that rivalry conditions are possible during viewing. This error can be minimized further by designing the filters for a subset of colors corresponding to specific working environments.
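
    The chromatic adaptation idea can be sketched with a plain von Kries-style diagonal scaling. This is illustrative only: practical transforms such as Bradford first rotate into a sharpened cone space, and none of the values below come from the project (the white points are the standard CIE illuminant A and D65 values):

    ```python
    def von_kries_adapt(xyz, src_white, dst_white):
        """Scale each channel by the ratio of destination to source white.
        Applied directly in XYZ this is the so-called 'wrong von Kries'
        transform; real CATs work in a cone-response space instead."""
        return [c * dw / sw for c, sw, dw in zip(xyz, src_white, dst_white)]

    # A color seen under illuminant A, re-rendered for D65 (XYZ white points):
    A_WHITE = [1.0985, 1.0000, 0.3558]
    D65_WHITE = [0.9504, 1.0000, 1.0888]
    print([round(v, 3) for v in von_kries_adapt([0.5, 0.4, 0.2], A_WHITE, D65_WHITE)])
    ```

    A useful sanity check of any such transform is that the source white point must map exactly onto the destination white point.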

  20. Real-time registration of video with ultrasound using stereo disparity

    Science.gov (United States)

    Wang, Jihang; Horvath, Samantha; Stetten, George; Siegel, Mel; Galeotti, John

    2012-02-01

    Medical ultrasound typically deals with the interior of the patient, with the exterior left to the original medical imaging modality, direct human vision. For the human operator scanning the patient, the view of the external anatomy is essential for correctly locating the ultrasound probe on the body and making sense of the resulting ultrasound images in their proper anatomical context. The operator, after all, is not expected to perform the scan with his eyes shut. Over the past decade, our laboratory has developed a method of fusing these two information streams in the mind of the operator, the Sonic Flashlight, which uses a half silvered mirror and miniature display mounted on an ultrasound probe to produce a virtual image within the patient at its proper location. We are now interested in developing a similar data fusion approach within the ultrasound machine itself, by, in effect, giving vision to the transducer. Our embodiment of this concept consists of an ultrasound probe with two small video cameras mounted on it, with software capable of locating the surface of an ultrasound phantom using stereo disparity between the two video images. We report its first successful operation, demonstrating a 3D rendering of the phantom's surface with the ultrasound data superimposed at its correct relative location. Eventually, automated analysis of these registered data sets may permit the scanner and its associated computational apparatus to interpret the ultrasound data within its anatomical context, much as the human operator does today.
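
    Locating a surface from stereo disparity, as described above, rests on the standard rectified-pinhole triangulation formula; the focal length, baseline, and disparity below are made-up numbers, not the probe-mounted rig's calibration:

    ```python
    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        """Rectified stereo triangulation: Z = f * B / d, with the focal
        length f in pixels, baseline B in meters, and disparity d in pixels."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    # e.g. 700 px focal length, 3 cm baseline, 42 px disparity -> 0.5 m depth
    print(depth_from_disparity(42.0, 700.0, 0.03))  # 0.5
    ```

    Note the inverse relation: halving the disparity doubles the computed depth, which is why depth precision degrades for distant surfaces.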

  1. Combined Infrared Stereo and Laser Ranging Cloud Measurements from Shuttle Mission STS-85

    Science.gov (United States)

    Lancaster, Redgie S.; Spinhirne, James D.; Starr, David O'C. (Technical Monitor)

    2001-01-01

    Multi-angle remote sensing provides a wealth of information for earth and climate monitoring. As technology advances, so do the options for developing instrumentation versatile enough to meet the demands associated with these types of measurements. In the current work, the multi-angle measurement capability of the Infrared Spectral Imaging Radiometer is demonstrated. This instrument flew as part of mission STS-85 of the space shuttle Discovery in 1997 and was the first earth-observing radiometer to incorporate an uncooled microbolometer array detector as its image sensor. Specifically, a method for computing cloud-top height from the multi-spectral stereo measurements acquired during this flight has been developed, and the results demonstrate that a vertical precision of 10.6 km was achieved. Further, the accuracy of these measurements is confirmed by comparison with coincident direct laser ranging measurements from the Shuttle Laser Altimeter. Mission STS-85 was the first space flight to combine laser ranging and thermal IR camera systems for cloud remote sensing.

  2. Fundamental Matrix of a Stereo Pair, with A Contrario Elimination of Outliers

    Directory of Open Access Journals (Sweden)

    Lionel Moisan

    2016-05-01

    Full Text Available In a stereo image pair, the fundamental matrix encodes the rigidity constraint of the scene. It combines the internal parameters of both cameras (which can be the same) and their relative position and orientation. It associates to image points in one view the so-called epipolar line in the other view, which is the locus of projection of the same 3D point, whose particular position on the straight line is determined by its depth. Reducing the correspondence search to a 1D line instead of the 2D image is a large benefit, enabling the computation of the dense 3D scene. The estimation of the matrix depends on at least seven pairs of corresponding points in the images. The algorithm for discarding outliers presented here is a variant of the classical RANSAC (RANdom SAmple Consensus) based on the a contrario methodology first proposed by Moisan and Stival in 2004 under the name ORSA. The distinguishing feature of this algorithm compared to other RANSAC variants is that the measure of validity of a set of point pairs is not its sheer number, but a combination of this number and the geometric precision of the points.
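
    The point-to-epipolar-line mapping described above is just a matrix-vector product in homogeneous coordinates. The toy fundamental matrix below corresponds to a rectified pair with pure horizontal translation, an assumption for illustration only:

    ```python
    def epipolar_line(F, x):
        """Given a 3x3 fundamental matrix F and a point x = (u, v) in image 1,
        return the epipolar line l' = F @ [u, v, 1] in image 2 as (a, b, c),
        so that a*u' + b*v' + c = 0 for any correspondence (u', v')."""
        xh = (x[0], x[1], 1.0)
        return tuple(sum(F[i][j] * xh[j] for j in range(3)) for i in range(3))

    # F for a rectified pair translated along the x-axis: epipolar lines are
    # horizontal, i.e. correspondences share the same row (v' = v).
    F = [[0.0, 0.0, 0.0],
         [0.0, 0.0, -1.0],
         [0.0, 1.0, 0.0]]
    a, b, c = epipolar_line(F, (10.0, 4.0))
    print(a, b, c)  # 0.0 -1.0 4.0, i.e. the line v' = 4
    ```

    This is the 2D-to-1D search-space reduction the abstract refers to: candidate matches need only be tested along the returned line.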

  3. The STEREO Mission: A New Approach to Space Weather Research

    Science.gov (United States)

    Kaiser, michael L.

    2006-01-01

    With the launch of the twin STEREO spacecraft in July 2006, a new capability will exist for both real-time space weather predictions and for advances in space weather research. Whereas previous spacecraft monitors of the sun such as ACE and SOHO have been essentially on the sun-Earth line, the STEREO spacecraft will be in 1 AU orbits around the sun on either side of Earth and will be viewing the solar activity from distinctly different vantage points. As seen from the sun, the two spacecraft will separate at a rate of 45 degrees per year, with Earth bisecting the angle. The instrument complement on the two spacecraft will consist of a package of optical instruments capable of imaging the sun in the visible and ultraviolet from essentially the surface to 1 AU and beyond, a radio burst receiver capable of tracking solar eruptive events from an altitude of 2-3 Rs to 1 AU, and a comprehensive set of fields and particles instruments capable of measuring in situ solar events such as interplanetary magnetic clouds. In addition to normal daily recorded data transmissions, each spacecraft is equipped with a real-time beacon that will provide 1 to 5 minute snapshots or averages of the data from the various instruments. This beacon data will be received by NOAA and NASA tracking stations and then relayed to the STEREO Science Center located at Goddard Space Flight Center in Maryland where the data will be processed and made available within a goal of 5 minutes of receipt on the ground. With STEREO's instrumentation and unique view geometry, we believe considerable improvement can be made in space weather prediction capability as well as improved understanding of the three-dimensional structure of solar transient events.

  4. Lossless Compression of Stereo Disparity Maps for 3D

    DEFF Research Database (Denmark)

    Zamarin, Marco; Forchhammer, Søren

    2012-01-01

    The coding algorithm is based on bit-plane coding, disparity prediction via disparity warping, and context-based arithmetic coding exploiting predicted disparity data. Experimental results show that the proposed compression scheme achieves average compression factors of about 48:1 for high-resolution disparity maps for stereo pairs and outperforms different standard solutions for lossless still-image compression. Moreover, it provides a progressive representation of disparity data as well as a parallelizable structure.
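
    The bit-plane representation at the heart of such a coder can be sketched in a few lines; this is only the decomposition step, not the prediction or arithmetic-coding stages, and the tiny disparity map is invented:

    ```python
    def bit_planes(disparity, bits=8):
        """Split an integer disparity map into bit-planes, most significant
        first. Coding plane by plane is what enables a progressive
        (coarse-to-fine) representation of the disparity data."""
        return [[[(d >> b) & 1 for d in row] for row in disparity]
                for b in range(bits - 1, -1, -1)]

    dmap = [[5, 3], [12, 0]]            # tiny 4-bit disparity map
    planes = bit_planes(dmap, bits=4)
    print(planes[0])  # MSB plane: [[0, 0], [1, 0]] (only 12 has bit 3 set)
    ```

    Summing the planes back with their bit weights reconstructs the map exactly, which is the lossless property the abstract claims.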

  5. Cellular neural networks for the stereo matching problem

    International Nuclear Information System (INIS)

    Taraglio, S.; Zanela, A.

    1997-03-01

    The applicability of the Cellular Neural Network (CNN) paradigm to the problem of recovering information on the three-dimensional structure of the environment is investigated. The approach proposed is the stereo matching of video images. The starting point of this work is the Zhou-Chellappa neural network implementation for the same problem. The CNN-based system we present here yields the same results as the previous approach, but without the many existing drawbacks.

  6. Discriminability limits in spatio-temporal stereo block matching.

    Science.gov (United States)

    Jain, Ankit K; Nguyen, Truong Q

    2014-05-01

    Disparity estimation is a fundamental task in stereo imaging and is a well-studied problem. Recently, methods have been adapted to the video domain where motion is used as a matching criterion to help disambiguate spatially similar candidates. In this paper, we analyze the validity of the underlying assumptions of spatio-temporal disparity estimation, and determine the extent to which motion aids the matching process. By analyzing the error signal for spatio-temporal block matching under the sum of squared differences criterion and treating motion as a stochastic process, we determine the probability of a false match as a function of image features, motion distribution, image noise, and number of frames in the spatio-temporal patch. This performance quantification provides insight into when spatio-temporal matching is most beneficial in terms of the scene and motion, and can be used as a guide to select parameters for stereo matching algorithms. We validate our results through simulation and experiments on stereo video.
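
    The spatio-temporal SSD cost analyzed above can be written down directly. The sketch below matches a two-frame block against a wider strip of the other image; all data are toy values, and real implementations add the noise and motion modeling the paper studies:

    ```python
    def ssd(block_a, block_b):
        """Sum of squared differences over a spatio-temporal block
        (frames x rows x cols): the matching cost under analysis."""
        return sum((a - b) ** 2
                   for fa, fb in zip(block_a, block_b)   # frames
                   for ra, rb in zip(fa, fb)             # rows
                   for a, b in zip(ra, rb))              # pixels

    def best_disparity(left_block, right_strip, max_d):
        """Exhaustive search: slide a window across the right-image strip
        and keep the shift with minimum SSD across all frames at once."""
        w = len(left_block[0][0])
        costs = [ssd(left_block,
                     [[row[d:d + w] for row in frame] for frame in right_strip])
                 for d in range(max_d + 1)]
        return min(range(len(costs)), key=costs.__getitem__)

    left = [[[5, 6]], [[7, 8]]]                # 2 frames, each a 1x2 block
    strip = [[[0, 5, 6, 0]], [[0, 7, 8, 0]]]   # same 2 frames, wider strip
    print(best_disparity(left, strip, 2))      # 1: the block matches at shift 1
    ```

    Stacking frames into the cost is what lets motion disambiguate candidates that look identical in any single frame.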

  7. Using Fuzzy Logic to Enhance Stereo Matching in Multiresolution Images

    Directory of Open Access Journals (Sweden)

    Marcos D. Medeiros

    2010-01-01

    Full Text Available Stereo matching is an open problem in Computer Vision, for which local features are extracted to identify corresponding points in pairs of images. The results are heavily dependent on the initial steps. We apply image decomposition in multiresolution levels to reduce the search space, computational time, and errors. We propose a solution to the problem of how deep (coarse) the stereo measures should start, trading off error minimization against time consumption, by starting the stereo calculation at varying resolution levels for each pixel, according to fuzzy decisions. Our heuristic enhances the overall execution time since it only employs deeper resolution levels when strictly necessary. It also reduces errors because it measures similarity between windows with enough detail. We also compare our algorithm with a very fast multi-resolution approach and with one based on fuzzy logic. Our algorithm performs faster and/or better than those approaches, making it a good candidate for robotic vision applications. We also discuss the system architecture that efficiently implements our solution.
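
    The multiresolution decomposition underlying this coarse-to-fine search can be sketched as a simple averaging pyramid (illustrative only; the paper's per-pixel fuzzy level selection is not reproduced):

    ```python
    def downsample(img):
        """Halve a 2-D image by 2x2 averaging: one pyramid level."""
        return [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1] +
                  img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4.0
                 for c in range(len(img[0]) // 2)]
                for r in range(len(img) // 2)]

    def pyramid(img, levels):
        """Multiresolution pyramid: coarse levels shrink the stereo search
        space, since a disparity d found at level l seeds a small search
        around 2*d at the finer level l-1."""
        pyr = [img]
        for _ in range(levels - 1):
            pyr.append(downsample(pyr[-1]))
        return pyr

    pyr = pyramid([[1, 2, 3, 4]] * 4, 3)
    print([len(p[0]) for p in pyr])  # [4, 2, 1] column widths per level
    ```

    Each coarser level quarters the pixel count, which is where the reduction in search space and computation comes from.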

  8. Sensitivity Monitoring of the SECCHI COR1 Telescopes on STEREO

    Science.gov (United States)

    Thompson, William T.

    2018-03-01

    Measurements of bright stars passing through the fields of view of the inner coronagraphs (COR1) on board the Solar Terrestrial Relations Observatory (STEREO) are used to monitor changes in the radiometric calibration over the course of the mission. Annual decline rates are found to be 0.648 ± 0.066%/year for COR1-A on STEREO Ahead and 0.258 ± 0.060%/year for COR1-B on STEREO Behind. These rates are consistent with decline rates found for other space-based coronagraphs in similar radiation environments. The theorized cause for the decline in sensitivity is darkening of the lenses and other optical elements due to exposure to high-energy solar particles and photons, although other causes are also possible. The total decline in the COR1-B sensitivity when contact with Behind was lost on 1 October 2014 was 1.7%, while COR1-A was down by 4.4%. As of 1 November 2017, the COR1-A decline is estimated to be 6.4%. The SECCHI calibration routines will be updated to take these COR1 decline rates into account.
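
    The relation between an annual decline rate and cumulative sensitivity loss can be checked with a few lines; the 10-year span is an assumed example, not the exact fit interval used in the article, and at these small rates the linear and compound models nearly coincide:

    ```python
    def linear_decline(rate_pct, years):
        """Cumulative loss assuming a linear trend, as a percentage."""
        return rate_pct * years

    def compound_decline(rate_pct, years):
        """Cumulative loss assuming year-on-year compounding."""
        return 100.0 * (1.0 - (1.0 - rate_pct / 100.0) ** years)

    # COR1-A's 0.648 %/year rate over an assumed 10-year span:
    print(round(linear_decline(0.648, 10), 2))    # 6.48
    print(round(compound_decline(0.648, 10), 2))  # 6.29
    ```

    The two models differ by only about 0.2 percentage points over a decade, so a linear correction in the calibration routines is a reasonable simplification.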

  9. Parallel Computer System for 3D Visualization Stereo on GPU

    Science.gov (United States)

    Al-Oraiqat, Anas M.; Zori, Sergii A.

    2018-03-01

    This paper proposes the organization of a parallel computer system based on the Graphics Processing Unit (GPU) for 3D stereo image synthesis. The development is based on the modified ray tracing method developed by the authors for fast search of tracing-ray intersections with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. The generalized procedure of 3D stereo image synthesis on the Graphics Processing Unit/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed solutions in the GPU implementation is compared with single-threaded and multithreaded implementations on the CPU. The achieved average acceleration of the GPU implementation over the multithreaded and single-threaded CPU implementations is about 7.5 and 1.6 times, respectively. Studying the influence of the size and configuration of the computational Compute Unified Device Architecture (CUDA) network on the computational speed shows the importance of their correct selection. The obtained experimental estimations can be significantly improved by new GPUs with a larger number of processing cores and multiprocessors, as well as an optimized configuration of the computing CUDA network.

  10. Topomapping of Mars with HRSC images, ISIS, and a commercial stereo workstation

    Science.gov (United States)

    Kirk, R. L.; Howington-Kraus, E.; Galuszka, D.; Redding, B.; Hare, T. M.

    HRSC on Mars Express [1] is the first camera designed specifically for stereo imaging to be used in mapping a planet other than the Earth. Nine detectors view the planet through a single lens to obtain four-band color coverage and stereo images at 3 to 5 distinct angles in a single pass over the target. The short interval between acquisition of the images ensures that changes that could interfere with stereo matching are minimized. The resolution of the nadir channel is 12.5 m at periapsis, poorer at higher points in the elliptical orbit. The stereo channels are typically operated at 2x coarser resolution and the color channels at 4x or 8x. Since the commencement of operations in January 2004, approximately 58% of Mars has been imaged at nadir resolutions better than 50 m/pixel. This coverage is expected to increase significantly during the recently approved extended mission of Mars Express, giving the HRSC dataset enormous potential for regional and even global mapping. Systematic processing of the HRSC images is carried out at the German Aerospace Center (DLR) in Berlin. Preliminary digital topographic models (DTMs) at 200 m/post resolution and orthorectified image products are produced in near-realtime for all orbits, by using the VICAR software system [2]. The tradeoff of universal coverage but limited DTM resolution makes these products optimal for many but not all research studies. Experiments on adaptive processing with the same software, for a limited number of orbits, have allowed DTMs of higher resolution (down to 50 m/post) to be produced [3]. In addition, numerous Co-Investigators on the HRSC team (including ourselves) are actively researching techniques to improve on the standard products, by such methods as bundle adjustment, alternate approaches to stereo DTM generation, and refinement of DTMs by photoclinometry (shape-from-shading) [4]. The HRSC team is conducting a systematic comparison of these alternative processing approaches by arranging for

  11. Stereo-Optic High Definition Imaging: A New Technology to Understand Bird and Bat Avoidance of Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Evan; Goodale, Wing; Burns, Steve; Dorr, Chris; Duron, Melissa; Gilbert, Andrew; Moratz, Reinhard; Robinson, Mark

    2017-07-21

    There is a critical need to develop monitoring tools to track aerofauna (birds and bats) in three dimensions around wind turbines. New monitoring systems will reduce permitting uncertainty by increasing the understanding of how birds and bats are interacting with wind turbines, which will improve the accuracy of impact predictions. Biodiversity Research Institute (BRI), The University of Maine Orono School of Computing and Information Science (UMaine SCIS), HiDef Aerial Surveying Limited (HiDef), and SunEdison, Inc. (formerly First Wind) responded to this need by using stereo-optic cameras with near-infrared (nIR) technology to investigate new methods for documenting aerofauna behavior around wind turbines. The stereo-optic camera system used two synchronized high-definition video cameras with fisheye lenses and processing software that detected moving objects, which could be identified in post-processing. The stereo-optic imaging system offered the ability to extract 3-D position information from pairs of images captured from different viewpoints. Fisheye lenses allowed for a greater field of view, but required more complex image rectification to contend with fisheye distortion. The ability to obtain 3-D positions provided crucial data on the trajectory (speed and direction) of a target, which, when the technology is fully developed, will provide data on how animals are responding to and interacting with wind turbines. This project was focused on testing the performance of the camera system, improving video review processing time, advancing the 3-D tracking technology, and moving the system from Technology Readiness Level 4 to 5. To achieve these objectives, we determined the size and distance at which aerofauna (particularly eagles) could be detected and identified, created efficient data management systems, improved the video post-processing viewer, and attempted refinement of 3-D modeling with respect to fisheye lenses. The 29-megapixel camera system

  12. Comparison of Stereo-PIV and Plenoptic-PIV Measurements on the Wake of a Cylinder in NASA Ground Test Facilities.

    Science.gov (United States)

    Fahringer, Timothy W.; Thurow, Brian S.; Humphreys, William M., Jr.; Bartram, Scott M.

    2017-01-01

    A series of comparison experiments have been performed using a single-camera plenoptic PIV measurement system to ascertain the system's performance capabilities in terms of suitability for use in NASA ground test facilities. A proof-of-concept demonstration was performed in the Langley Advanced Measurements and Data Systems Branch 13-inch (33-cm) Subsonic Tunnel to examine the wake of a series of cylinders at a Reynolds number of 2500. Accompanying the plenoptic-PIV measurements were an ensemble of complementary stereo-PIV measurements. The stereo-PIV measurements were used as a truth measurement to assess the ability of the plenoptic-PIV system to capture relevant 3D/3C flow field features in the cylinder wake. Six individual tests were conducted as part of the test campaign using three different cylinder diameters mounted in two orientations in the tunnel test section. This work presents a comparison of measurements with the cylinders mounted horizontally (generating a 2D flow field in the x-y plane). Results show that in general the plenoptic-PIV measurements match those produced by the stereo-PIV system. However, discrepancies were observed in extracted profiles of the fluctuating velocity components. It is speculated that spatial smoothing of the vector fields in the stereo-PIV system could account for the observed differences. Nevertheless, the plenoptic-PIV system performed extremely well at capturing the flow field features of interest and can be considered a viable alternative to traditional PIV systems in smaller NASA ground test facilities with limited optical access.

  13. The Eye of the Camera

    NARCIS (Netherlands)

    van Rompay, Thomas Johannes Lucas; Vonk, Dorette J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  14. Interactive stereo electron microscopy enhanced with virtual reality

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E.Wes; Bastacky, S.Jacob; Schwartz, Kenneth S.

    2001-12-17

    An analytical system is presented that is used to take measurements of objects perceived in stereo image pairs obtained from a scanning electron microscope (SEM). Our system operates by presenting a single stereo view that contains stereo image data obtained from the SEM, along with geometric representations of two types of virtual measurement instruments, a "protractor" and a "caliper". The measurements obtained from this system are an integral part of a medical study evaluating surfactant, a liquid coating the inner surface of the lung which makes possible the process of breathing. Measurements of the curvature and contact angle of submicron-diameter droplets of a fluorocarbon deposited on the surface of airways are performed in order to determine the surface tension of the air/liquid interface. This approach has been extended to a microscopic level from the techniques of traditional surface science by measuring submicrometer rather than millimeter diameter droplets, as well as the lengths and curvature of cilia responsible for movement of the surfactant, the airway's protective liquid blanket. An earlier implementation of this approach for taking angle measurements from objects perceived in stereo image pairs using a virtual protractor is extended in this paper to include distance measurements and to use a unified view model. The system is built around a unified view model that is derived from microscope-specific parameters, such as focal length, visible area, and magnification. The unified view model ensures that the underlying view models and resultant binocular parallax cues are consistent between synthetic and acquired imagery. When the view models are consistent, it is possible to take measurements of features that are not constrained to lie within the projection plane. The system is first calibrated using non-clinical data of known size and resolution. Using the SEM, stereo image pairs of grids and spheres of

  15. Mars surface context cameras past, present, and future

    Science.gov (United States)

    Gunn, M. D.; Cousins, C. R.

    2016-04-01

    Mars has been the focus of robotic space exploration since the 1960s, in which time there have been over 40 missions, some successful, some not. Camera systems have been a core component of all instrument payloads sent to the Martian surface, harnessing some combination of monochrome, color, multispectral, and stereo imagery. Together, these data sets provide the geological context to a mission, which over the decades has included the characterization and spatial mapping of geological units and associated stratigraphy, charting active surface processes such as dust devils and water ice sublimation, and imaging the robotic manipulation of samples via scoops (Viking), drills (Mars Science Laboratory (MSL) Curiosity), and grinders (Mars Exploration Rovers). Through the decades, science context imaging has remained an integral part of increasingly advanced analytical payloads, with continual advances in spatial and spectral resolution, radiometric and geometric calibration, and image analysis techniques. Mars context camera design has encompassed major technological shifts, from single photomultiplier tube detectors to megapixel charge-coupled devices, and from multichannel to Bayer filter color imaging. Here we review the technological capability and evolution of science context imaging instrumentation resulting from successful surface missions to Mars, and those currently in development for planned future missions.

  16. Optimization of Stereo Matching in 3D Reconstruction Based on Binocular Vision

    Science.gov (United States)

    Gai, Qiyang

    2018-01-01

    Stereo matching is one of the key steps of 3D reconstruction based on binocular vision. In order to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision, this paper combines the epipolar constraint with an ant colony algorithm. The epipolar line constraint is used to reduce the search range, and an ant colony algorithm then optimizes the stereo matching feature search function within that range. Through the establishment of an analysis model of the ant colony algorithm's stereo matching optimization process, a globally optimized solution of stereo matching in 3D reconstruction based on binocular vision is realized. The simulation results show that, by combining the advantages of the epipolar constraint and the ant colony algorithm, the stereo matching range of 3D reconstruction based on binocular vision is simplified, and the convergence speed and accuracy of the stereo matching process are improved.
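    The abstract describes the method only at a high level; as an illustrative sketch of the epipolar constraint it builds on (not the paper's ant colony optimizer), the following block-matching fragment searches a rectified stereo pair, where the constraint collapses the 2-D search to a single image row:

    ```python
    import numpy as np

    def match_along_epipolar(left, right, row, col, patch=3, max_disp=16):
        """Find the disparity of pixel (row, col) in a rectified stereo pair.

        Rectification makes the epipolar line of (row, col) the same row in
        the right image, so the 2-D correspondence search collapses to a
        1-D scan along that row.
        """
        p = patch // 2
        ref = left[row - p:row + p + 1, col - p:col + p + 1].astype(float)
        best_d, best_cost = 0, np.inf
        for d in range(max_disp):
            c = col - d                       # candidate column on the epipolar line
            if c - p < 0:
                break
            cand = right[row - p:row + p + 1, c - p:c + p + 1].astype(float)
            cost = np.sum((ref - cand) ** 2)  # SSD matching cost
            if cost < best_cost:
                best_cost, best_d = cost, d
        return best_d

    # Synthetic check: the right image is the left image shifted by 4 pixels.
    rng = np.random.default_rng(0)
    left = rng.random((32, 64))
    right = np.roll(left, -4, axis=1)
    d = match_along_epipolar(left, right, row=16, col=40)
    ```

    An optimizer such as the paper's ant colony algorithm would replace the exhaustive scan over `d` with a guided search of the same cost function.
    
    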

  17. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1982-01-01

    The invention provides a composite solid state detector for use in deriving a display, by spatial coordinate information, of the distribution of radiation emanating from a source within a region of interest, comprising several solid state detector components, each having a given surface arranged for exposure to impinging radiation and exhibiting discrete interactions therewith at given spatially definable locations. The surface of each component and the surface disposed opposite and substantially parallel thereto are associated with impedance means configured to provide for each opposed surface outputs for signals relating the given location of the interactions with one spatial coordinate parameter of one select directional sense. The detector components are arranged to provide groupings of adjacently disposed surfaces mutually linearly oriented to exhibit a common directional sense of the spatial coordinate parameter. Means interconnect at least two of the outputs associated with each of the surfaces within a given grouping for collecting the signals deriving therefrom. The invention also provides a camera system for imaging the distribution of a source of gamma radiation situated within a region of interest

  18. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    Science.gov (United States)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
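    The "optimal ray projection method" is not spelled out in the abstract; one common way to compute a 3D point from multiple camera rays is a least-squares ray intersection, sketched below as a generic illustration (not necessarily the LMDB Builder's exact formulation):

    ```python
    import numpy as np

    def triangulate(origins, directions):
        """Least-squares 3-D point closest to a bundle of camera rays.

        Each ray is o_i + t * d_i with unit direction d_i. Minimizing the
        summed squared distances to all rays yields the linear system
        A x = b with A = sum(I - d d^T) and b = sum((I - d d^T) o).
        """
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)  # projector onto plane normal to d
            A += P
            b += P @ o
        return np.linalg.solve(A, b)

    # Two cameras at different positions, both viewing the point (1, 2, 5).
    target = np.array([1.0, 2.0, 5.0])
    origins = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]
    dirs = [target - o for o in origins]
    x = triangulate(origins, dirs)
    ```

    With noisy feature matches the rays no longer intersect exactly, and the same system returns the point minimizing the summed squared ray distances.
    
    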

  19. Dense GPU-enhanced surface reconstruction from stereo endoscopic images for intraoperative registration.

    Science.gov (United States)

    Rohl, Sebastian; Bodenstedt, Sebastian; Suwelack, Stefan; Dillmann, Rudiger; Speidel, Stefanie; Kenngott, Hannes; Muller-Stich, Beat P

    2012-03-01

    In laparoscopic surgery, soft tissue deformations substantially change the surgical site, thus impeding the use of preoperative planning during intraoperative navigation. Extracting depth information from endoscopic images and building a surface model of the surgical field-of-view is one way to represent this constantly deforming environment. The information can then be used for intraoperative registration. Stereo reconstruction is a typical problem within computer vision. However, most of the available methods do not fulfill the specific requirements of a minimally invasive setting, such as the need for real-time performance, the problem of view-dependent specular reflections, and large curved areas with partly homogeneous or periodic textures and occlusions. In this paper, the authors present an approach toward intraoperative surface reconstruction based on stereo endoscopic images. The authors describe their answer to this problem through correspondence analysis, disparity correction and refinement, 3D reconstruction, point cloud smoothing, and meshing. Real-time performance is achieved by implementing the algorithms on the GPU. The authors also present a new hybrid CPU-GPU algorithm that unifies the advantages of the CPU and the GPU versions. In a comprehensive evaluation using in vivo data, in silico data from the literature, and virtual data from a newly developed simulation environment, the CPU, the GPU, and the hybrid CPU-GPU versions of the surface reconstruction are compared to a CPU and a GPU algorithm from the literature. The recommended approach toward intraoperative surface reconstruction can be conducted in real time depending on the image resolution (20 fps for the GPU and 14 fps for the hybrid CPU-GPU version at a resolution of 640 × 480). It is robust to homogeneous regions without texture, large image changes, noise or errors from camera calibration, and it reconstructs the surface down to submillimeter accuracy. In all the experiments within the
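    The 3D reconstruction step in such a pipeline, going from a disparity map to a point cloud, can be illustrated with the standard stereo back-projection relations. This is a generic sketch, not the authors' GPU implementation, and the camera parameters are made-up values:

    ```python
    import numpy as np

    def disparity_to_points(disp, f, baseline, cx, cy):
        """Back-project a disparity map into a 3-D point cloud.

        Standard pinhole stereo relations: Z = f * B / d,
        X = (u - cx) * Z / f, Y = (v - cy) * Z / f.
        Pixels with non-positive disparity are skipped.
        """
        h, w = disp.shape
        v, u = np.mgrid[0:h, 0:w]            # pixel row/column grids
        valid = disp > 0
        Z = f * baseline / disp[valid]
        X = (u[valid] - cx) * Z / f
        Y = (v[valid] - cy) * Z / f
        return np.column_stack([X, Y, Z])

    # A uniform 8-px disparity corresponds to a flat plane at constant depth.
    disp = np.full((4, 4), 8.0)
    pts = disparity_to_points(disp, f=400.0, baseline=0.05, cx=2.0, cy=2.0)
    ```

    The resulting point cloud would then feed the smoothing and meshing stages the abstract describes.
    
    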

  20. CCD TV camera, TM1300

    International Nuclear Information System (INIS)

    Takano, Mitsuo; Endou, Yukio; Nakayama, Hideo

    1982-01-01

    Development has been made of a black-and-white TV camera, TM1300, using an interline-transfer CCD, which outperforms the frame-transfer CCDs marketed since 1980: it has a greater number of horizontal picture elements and far smaller input power (less than 2 W at 9 V), uses hybrid ICs for the CCD driver unit to reduce the size of the camera, and exhibits no picture distortion and no burn-in; in addition, its peripheral equipment, such as the camera housing and the pan-and-tilt head, has been miniaturized as well. Its applications are also expected to widen to industrial TV. (author)

  1. High Quality Camera Surveillance System

    OpenAIRE

    Helaakoski, Ari

    2015-01-01

    Oulu University of Applied Sciences Information Technology Author: Ari Helaakoski Title of the master’s thesis: High Quality Camera Surveillance System Supervisor: Kari Jyrkkä Term and year of completion: Spring 2015 Number of pages: 31 This master’s thesis was commissioned by iProtoXi Oy and carried out for one of iProtoXi’s customers. The aim of the thesis was to build a camera surveillance system that uses a high-quality camera with pan and tilt capability. It should b...

  2. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid state detector formed of high-purity germanium. The central arrangement of the camera operates to carry out a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, desirable control over pulse pile-up phenomena is achieved. Additionally, the use of the time derivative of incoming pulse or signal energy information to initially enable the control system provides a low-level information evaluation serving to enhance the signal processing efficiency of the camera

  3. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    Science.gov (United States)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to implement lunar surface sampling and to return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems which are installed on a camera rotating platform. Optical images of the sampling area can be obtained by PCAM in the form of two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images. The lunar terrain can then be reconstructed based on photogrammetry. Installation parameters of PCAM with respect to the CE-5 lander are critical for the calculation of the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied for this work. Research contents such as the observation program and specific solution methods for the installation parameters are then introduced. Parametric solution accuracy is analyzed according to observations obtained in the PCAM scientific validation experiment, which is used to test the authenticity of the PCAM detection process, ground data processing methods, product quality, and so on. Analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images within 1 pixel. The measurement methods and parameter accuracy studied in this paper therefore meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion
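    The coordinate conversion the keywords refer to amounts to a rigid transform: a point expressed in the camera frame is mapped into the lander frame by the installation rotation and translation. A minimal sketch, with purely hypothetical installation values:

    ```python
    import numpy as np

    def camera_to_lander(p_cam, R, t):
        """Map a point from the camera frame into the lander frame using
        the installation parameters (rotation R, translation t)."""
        return R @ p_cam + t

    # Hypothetical installation: camera yawed 90 degrees about the z-axis
    # and offset 0.5 m along the lander's x-axis.
    R = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    t = np.array([0.5, 0.0, 0.0])
    p = camera_to_lander(np.array([1.0, 0.0, 0.0]), R, t)
    ```

    Errors in R and t propagate directly into the exterior orientation of every image, which is why the paper's sub-pixel accuracy analysis matters.
    
    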

  4. 4-mm-diameter three-dimensional imaging endoscope with steerable camera for minimally invasive surgery (3-D-MARVEL).

    Science.gov (United States)

    Bae, Sam Y; Korniski, Ronald J; Shearn, Michael; Manohara, Harish M; Shahinian, Hrayr

    2017-01-01

    High-resolution three-dimensional (3-D) imaging (stereo imaging) by endoscopes in minimally invasive surgery, especially in space-constrained applications such as brain surgery, is one of the most desired capabilities. Such capability currently exists only at overall diameters larger than 4 mm. We report the development of a stereo imaging endoscope of 4-mm maximum diameter, called the Multiangle, Rear-Viewing Endoscopic Tool (MARVEL), that uses a single-lens system with complementary multibandpass filter (CMBF) technology to achieve 3-D imaging. In addition, the system is endowed with the capability to pan from side to side over an angle of [Formula: see text], which is another unique aspect of MARVEL for such a class of endoscopes. The design and construction of a single-lens, CMBF aperture camera with integrated illumination to generate 3-D images, and the actuation mechanism built into it, are summarized.

  5. Stereo vision-based tracking of soft tissue motion with application to online ablation control in laser microsurgery.

    Science.gov (United States)

    Schoob, Andreas; Kundrat, Dennis; Kahrs, Lüder A; Ortmaier, Tobias

    2017-08-01

    Recent research has revealed that image-based methods can enhance accuracy and safety in laser microsurgery. In this study, non-rigid tracking using surgical stereo imaging and its application to laser ablation is discussed. A recently developed motion estimation framework based on piecewise affine deformation modeling is extended by a mesh refinement step and considering texture information. This compensates for tracking inaccuracies potentially caused by inconsistent feature matches or drift. To facilitate online application of the method, computational load is reduced by concurrent processing and affine-invariant fusion of tracking and refinement results. The residual latency-dependent tracking error is further minimized by Kalman filter-based upsampling, considering a motion model in disparity space. Accuracy is assessed in laparoscopic, beating heart, and laryngeal sequences with challenging conditions, such as partial occlusions and significant deformation. Performance is compared with that of state-of-the-art methods. In addition, the online capability of the method is evaluated by tracking two motion patterns performed by a high-precision parallel-kinematic platform. Related experiments are discussed for tissue substitute and porcine soft tissue in order to compare performances in an ideal scenario and in a setup mimicking clinical conditions. Regarding the soft tissue trial, the tracking error can be significantly reduced from 0.72 mm to below 0.05 mm with mesh refinement. To demonstrate online laser path adaptation during ablation, the non-rigid tracking framework is integrated into a setup consisting of a surgical Er:YAG laser, a three-axis scanning unit, and a low-noise stereo camera. Regardless of the error source, such as laser-to-camera registration, camera calibration, image-based tracking, and scanning latency, the ablation root mean square error is kept below 0.21 mm when the sample moves according to the aforementioned patterns. Final

  6. Review of literature on hearing damage by personal stereo

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez, Rodrigo Pizarro

    2006-01-01

    In the 1980s and 1990s there was a general concern for the high levels that personal stereo systems were capable of producing. At that time no standardized method for the determination of exposure levels existed, which could have contributed to overly conservative conclusions. With the publication of ISO 11904-1:2002 and 11904-2:2004, previous studies can be viewed in a different light, and the results point, in our opinion, at levels and listening habits that are of hazard to the hearing. The present paper will review previous studies that may shed light on the levels and habits of contemporary...

  7. Real-time Loudspeaker Distance Estimation with Stereo Audio

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Gaubitch, Nikolay; Heusdens, Richard

    2015-01-01

    Knowledge of how a number of loudspeakers are positioned relative to a listening position can be used to enhance the listening experience. Usually, these loudspeaker positions are estimated using calibration signals, either audible or psycho-acoustically hidden inside the desired audio signal. In this paper, we propose to use the desired audio signal instead. Specifically, we treat the case of estimating the distance between two loudspeakers playing back a stereo music or speech signal. In this connection, we develop a real-time maximum likelihood estimator and demonstrate that it has a variance...
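    The abstract's maximum likelihood estimator is not detailed here; as an illustration of the underlying idea of turning an acoustic time delay into a distance, a simple integer-lag cross-correlation sketch (not the paper's estimator) might look like this:

    ```python
    import numpy as np

    C = 343.0  # speed of sound in air, m/s

    def delay_to_distance(x, y, fs):
        """Estimate the path-length difference between two received signals.

        Cross-correlate to find the integer-sample lag of y relative to x,
        then convert lag -> seconds -> meters via the speed of sound.
        """
        corr = np.correlate(y, x, mode="full")
        lag = np.argmax(corr) - (len(x) - 1)
        return lag / fs * C

    fs = 48000
    rng = np.random.default_rng(1)
    x = rng.standard_normal(4096)
    delay = 70                                     # samples (~0.5 m at 48 kHz)
    y = np.concatenate([np.zeros(delay), x])[:4096]  # delayed copy of x
    dist = delay_to_distance(x, y, fs)
    ```

    A maximum likelihood formulation such as the paper's refines this idea by modeling the source signal and noise statistics instead of relying on a raw correlation peak.
    
    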

  8. Multiview specular stereo reconstruction of large mirror surfaces

    KAUST Repository

    Balzer, Jonathan

    2011-06-01

    In deflectometry, the shape of mirror objects is recovered from distorted images of a calibrated scene. While remarkably high accuracies are achievable, state-of-the-art methods suffer from two distinct weaknesses: First, for mainly constructive reasons, they can only capture a few square centimeters of surface area at once. Second, reconstructions are ambiguous, i.e., infinitely many surfaces lead to the same visual impression. We resolve both of these problems by introducing the first multiview specular stereo approach, which jointly evaluates a series of overlapping deflectometric images. Two publicly available benchmarks accompany this paper, enabling us to numerically demonstrate the viability and practicability of our approach. © 2011 IEEE.

  9. Optimization on shape curves with application to specular stereo

    KAUST Repository

    Balzer, Jonathan

    2010-01-01

    We state that a one-dimensional manifold of shapes in 3-space can be modeled by a level set function. Finding a minimizer of an independent functional among all points on such a shape curve has interesting applications in computer vision. It is shown how to replace the commonly encountered practice of gradient projection by a projection onto the curve itself. The outcome is an algorithm for constrained optimization, which, as we demonstrate theoretically and numerically, provides some important benefits in stereo reconstruction of specular surfaces. © 2010 Springer-Verlag.

  10. Cavalier perspective plots of two-dimensional matrices. Program Stereo

    International Nuclear Information System (INIS)

    Los Arcos Merino, J.M.

    1978-01-01

    The program Stereo allows representation of a two-dimensional matrix containing numerical data, in the form of a cavalier perspective, isometric or not, with an angle variable between 0 deg and 180 deg. The representation is in histogram form for each matrix row and those curves which fall behind higher curves and therefore would not be seen are suppressed. It has been written in Fortran V for a Calcomp-936 digital plotter operating off-line with a Univac 1106 computer. Drawing method, subroutine structure and running instructions are described in this paper. (author)

  11. Analyzer for gamma camera diagnostics

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It is composed of an electronic system that includes hardware and software capabilities, and operates from the acquisition of the 4 head position signals of a gamma camera detector. The result is the spectrum of the energy delivered by nuclear radiation coming from the camera detector head. This system includes analog processing of position signals from the camera, digitization, and the subsequent processing of the energy signal in a multichannel analyzer, sending data to a computer via a standard USB port and processing the data on a personal computer to obtain the final histogram. The circuits are composed of an analog processing board and a universal kit with a microcontroller and programmable gate array. (Author)

  12. New generation of meteorology cameras

    Science.gov (United States)

    Janout, Petr; Blažek, Martin; Páta, Petr

    2017-12-01

    A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. Development of the new generation of weather monitoring cameras responds to the demand for monitoring of sudden weather changes. The new WILLIAM cameras process acquired image data immediately, issue warnings of sudden torrential rain, and send them to the user's cell phone and email. Actual weather conditions are determined from image data, and the results of image processing are complemented by data from sensors of temperature, humidity, and atmospheric pressure. In this paper, we present the architecture and image data processing algorithms of this monitoring camera, as well as a spatially variant model of the imaging system's aberrations based on Zernike polynomials.

  13. Astronomy and the camera obscura

    Science.gov (United States)

    Feist, M.

    2000-02-01

    The camera obscura (from the Latin meaning darkened chamber) is a simple optical device with a long history. In the form considered here, it can be traced back to 1550. It had its heyday during the Victorian era, when it was to be found at the seaside as a tourist attraction or sideshow. It was also used as an artist's drawing aid and, in 1620, the famous astronomer-mathematician Johannes Kepler used a small tent camera obscura to trace the scenery.

  14. The development of radiation hardened robot for nuclear facility - Stereo cursor generation and a development of object distance information extracting technique

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Sang Ho; Sohng, In Tae [Inje University, Pusan (Korea); Kwon, Ki Ku [Kyungpook National University, Taegu (Korea)

    2000-03-01

    An object distance information extractor using a stereo cursor in a stereo imaging system is developed and implemented. The stereo cursor is overlaid on a stereoscopic video image and is controlled by a three-dimensional joystick. The depth of the stereo cursor is controlled by adjusting its disparity. An object can be selected by placing the stereo cursor at any point in the image. The object distance is inversely proportional to the disparity of the cursor; by measuring the disparity of the stereo cursor, therefore, we can estimate the object distance directly. The object distance is displayed on a 7-segment LED by a lookup table method. 17 refs., 40 figs., 2 tabs. (Author)
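    The stated inverse proportionality between object distance and cursor disparity is the standard stereo relation Z = f * B / d, and the abstract's lookup-table display path can be sketched in a few lines. The focal length and baseline below are hypothetical placeholder values:

    ```python
    # Distance from stereo disparity: Z = f * B / d (inverse proportionality),
    # precomputed into a lookup table as in the abstract's 7-segment display path.
    FOCAL_PX = 800.0   # assumed focal length in pixels (hypothetical)
    BASELINE_M = 0.12  # assumed stereo baseline in meters (hypothetical)

    def distance_from_disparity(d_px):
        if d_px <= 0:
            raise ValueError("disparity must be positive")
        return FOCAL_PX * BASELINE_M / d_px

    # Lookup table: integer cursor disparities 1..255 -> distance in meters.
    LUT = {d: distance_from_disparity(d) for d in range(1, 256)}
    ```

    Precomputing the table trades a division per measurement for a single indexed read, which suits a simple LED display driver.
    
    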

  15. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connell, Allan F.

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  16. Estimation of the energy ratio between primary and ambience components in stereo audio data

    NARCIS (Netherlands)

    Harma, A.S.

    2011-01-01

    A stereo audio signal is often modeled as a mixture of instantaneously mixed primary components and uncorrelated ambience components. This paper focuses on the estimation of the primary-to-ambience energy ratio (PAR). This measure is useful for signal decomposition in stereo and multichannel audio

  17. Benefits, limitations, and guidelines for application of stereo 3-D display technology to the cockpit environment

    Science.gov (United States)

    Williams, Steven P.; Parrish, Russell V.; Busquets, Anthony M.

    1992-01-01

    A survey of research results from a program initiated by NASA Langley Research Center is presented. The program addresses stereo 3-D pictorial displays from a comprehensive standpoint. Human factors issues, display technology aspects, and flight display applications are also considered. Emphasis is placed on the benefits, limitations, and guidelines for application of stereo 3-D display technology to the cockpit environment.

  18. A non-convex variational approach to photometric stereo under inaccurate lighting

    DEFF Research Database (Denmark)

    Quéau, Yvain; Wu, Tao; Lauze, Francois Bernard

    2017-01-01

    This paper tackles the photometric stereo problem in the presence of inaccurate lighting, obtained either by calibration or by an uncalibrated photometric stereo method. Based on a precise modeling of noise and outliers, a robust variational approach is introduced. It explicitly accounts for self...

  19. On the Benefits of Stereo Graphics in Virtual Obstacle Avoidance Tasks

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Stenholt, Rasmus

    2014-01-01

    In virtual reality, stereo graphics is a very common way of increasing the level of perceptual realism in the visual part of the experience. However, stereo graphics comes at cost, both in technical terms and from a user perspective. In this paper, we present the preliminary results...

  20. REGION BASED FOREST CHANGE DETECTION FROM CARTOSAT-1 STEREO IMAGERY

    Directory of Open Access Journals (Sweden)

    J. Tian

    2012-09-01

    Full Text Available Tree height is a fundamental parameter for describing the forest situation and its changes. The latest development of automatic Digital Surface Model (DSM) generation techniques allows new approaches to forest change detection from satellite stereo imagery. This paper shows how DSMs can support change detection in forest areas. A novel region-based forest change detection method is proposed using single-channel CARTOSAT-1 stereo imagery. In the first step, DSMs for two dates are generated by automatic matching. After co-registration and normalising using LiDAR data, mean-shift segmentation is applied to the original pan images, and the images of both dates are classified into forest and non-forest areas by analysing their histograms and height differences. In the second step, a rough forest change detection map is generated by comparing the two forest maps. The GLCM textures of the resulting regions, computed from the nDSM and the CARTOSAT-1 images, are then analysed and compared, and the real changes are extracted by SVM-based classification.
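
The full region-based pipeline is involved, but its core idea of comparing canopy heights from two co-registered nDSMs can be sketched at the pixel level (the thresholds below are hypothetical illustration values, not the paper's):

```python
import numpy as np

def forest_loss_mask(ndsm_t1, ndsm_t2, tree_height_m=5.0, min_drop_m=3.0):
    """Pixel-level sketch of DSM-based forest change detection: a pixel counts
    as forest loss if it reached tree height at date 1 and its canopy height
    dropped by more than min_drop_m at date 2."""
    forest_t1 = ndsm_t1 >= tree_height_m
    dropped = (ndsm_t1 - ndsm_t2) >= min_drop_m
    return forest_t1 & dropped

t1 = np.array([[12.0, 11.0], [0.5, 9.0]])   # canopy heights at date 1 (m)
t2 = np.array([[ 1.0, 11.5], [0.4, 2.0]])   # canopy heights at date 2 (m)
loss = forest_loss_mask(t1, t2)             # flags only the cleared pixels
```

A region-based method, as in the paper, would apply this reasoning per segment and then verify candidates with texture features and a classifier.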

  1. Feature Augmentation for Learning Confidence Measure in Stereo Matching.

    Science.gov (United States)

    Kim, Sunok; Min, Dongbo; Kim, Seungryong; Sohn, Kwanghoon

    2017-09-08

    Confidence estimation is essential for refining stereo matching results through a post-processing step. This problem has recently been studied using a learning-based approach, which demonstrates a substantial improvement over conventional non-learning-based methods. However, the formulation of learning-based methods that individually estimate the confidence of each pixel disregards the spatial coherency that might exist in the confidence map, thus providing limited performance under challenging conditions. Our key observation is that the confidence features and resulting confidence maps are smoothly varying in the spatial domain, and highly correlated within the local regions of an image. We present a new approach that imposes spatial consistency on the confidence estimation. Specifically, a set of robust confidence features is extracted from each superpixel decomposed using the Gaussian mixture model (GMM), and these features are then concatenated with pixel-level confidence features. The features are then enhanced through adaptive filtering in the feature domain. In addition, the resulting confidence map, estimated using the confidence features with a random regression forest, is further improved through a K-nearest neighbor (K-NN) based aggregation scheme at both the pixel and superpixel levels. To validate the proposed confidence estimation scheme, we employ cost modulation or ground control point (GCP) based optimization in stereo matching. Experimental results demonstrate that the proposed method outperforms state-of-the-art approaches on various benchmarks including challenging outdoor scenes.
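
One ingredient of the pipeline, K-NN aggregation of confidence values over similar feature vectors, can be sketched in isolation (a simplified, brute-force stand-in for the paper's scheme; all names are hypothetical):

```python
import numpy as np

def knn_aggregate_confidence(features, confidences, k=3):
    """Smooth per-pixel confidences by replacing each value with the mean
    confidence of its k nearest neighbours in feature space (the neighbour
    set includes the pixel itself, so k=1 returns the input unchanged)."""
    f = np.asarray(features, dtype=float)
    c = np.asarray(confidences, dtype=float)
    # pairwise squared Euclidean distances between feature vectors
    d2 = ((f[:, None, :] - f[None, :, :]) ** 2).sum(axis=-1)
    idx = np.argsort(d2, axis=1)[:, :k]          # k nearest rows per pixel
    return c[idx].mean(axis=1)

rng = np.random.default_rng(1)
feats = rng.standard_normal((50, 4))             # toy confidence features
conf = rng.uniform(0.0, 1.0, size=50)            # raw per-pixel confidence
smoothed = knn_aggregate_confidence(feats, conf, k=5)
```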

  2. Interpreting Dust Impact Signals Detected by the STEREO Spacecraft

    Science.gov (United States)

    O'Shea, E.; Sternovsky, Z.; Malaspina, D. M.

    2017-12-01

    There is no comprehensive understanding yet of how dust impacts on spacecraft (SC) generate the signals detected by antenna instruments. The high sensitivity of the S/WAVES instrument and the large number and high diversity of dust impacts detected make the STEREO mission particularly well suited for a closer investigation. A floating-potential perturbation (FPP) model was recently proposed to explain the characteristic shape of dust impact signals with an overshoot. The FPP model posits that the overshoot is due to the different discharge time constants of the SC and the individual antennas. Kinetic simulations are performed to demonstrate that, contrary to common belief, antennas are inefficient collectors of charged particles from impact plasmas. The collection efficiency is small, only 0.1-1%, varying weakly with the bias potential between the antenna and the SC, and more strongly with impact location. The low recollection efficiencies and an analysis of the shapes and scaling of typical and atypical signals recorded by S/WAVES suggest that, besides the mechanism described by the FPP model, there is another, possibly stronger mechanism responsible for generating the characteristic overshoot of most dust impact signals observed by STEREO.

  3. Teaching microsurgery to undergraduate medical students by means of high-definition stereo video microscopy: the Aachen skills lab experience

    Science.gov (United States)

    Ilgner, Justus; Park, Jonas Jae-Hyun; Westhofen, Martin

    2008-02-01

    Introduction: The master plan for innovative medical education established at RWTH Aachen Medical Faculty helped to set up an inter-disciplinary, interactive teaching environment for undergraduate medical students during their clinical course. This study presents our first experience with teaching microsurgery to medical students by means of high-definition stereo video monitoring. Material and methods: A plastic model created for ear inspection with a handheld otoscope was modified with an exchangeable membrane resembling an eardrum plus a model of the human cochlea. We attached a 1280×1024 HD stereo camera to an operating microscope, whose images were processed online by a PC workstation. The live image was displayed by two LCD projectors at 1280×720 pixels onto a 1.25 m rear-projection screen using polarized filters. Each medical student was asked to perform standard otosurgical procedures (paracentesis and insertion of grommets; insertion of a cochlear implant electrode) while being guided by the HD stereoscopic video image. Results: Students quickly adopted this method of training, as all attendants shared the same high-definition stereoscopic image. The learning process of coordinating hand movement with visual feedback was regarded as challenging as well as instructive by all students. Watching the same image facilitated valuable feedback from the audience for each student performing his tasks. All students noted that this course made them feel more confident in their manual skills and that they would consider a career in a microsurgical specialty. Conclusion: High-definition stereoscopy provides easy access to microsurgical techniques for undergraduate medical students. This access not only bears the potential to compress the learning curve for junior doctors during their clinical training but also helps to attract medical students to a career in a microsurgical specialty.

  4. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of imaging also facades in built up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi platform photogrammetry the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by program system BLUH. With 4.1 million image points in 314 images respectively 3.9 million image points in 248 images a dense matching was provided by Pix4Dmapper. With up to 19 respectively 29 images per object point the images are well connected, nevertheless the high number of images per object point are concentrated to the block centres while the inclined images outside the block centre are satisfying but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters or in other words, additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras with a size exceeding 5μm even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects of the image corners are limited but still available. Remarkable angular affine systematic image errors can be seen especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but it can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors for corresponding

  5. SUB-CAMERA CALIBRATION OF A PENTA-CAMERA

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-03-01

    Full Text Available Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of imaging also facades in built up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi platform photogrammetry the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by program system BLUH. With 4.1 million image points in 314 images respectively 3.9 million image points in 248 images a dense matching was provided by Pix4Dmapper. With up to 19 respectively 29 images per object point the images are well connected, nevertheless the high number of images per object point are concentrated to the block centres while the inclined images outside the block centre are satisfying but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters or in other words, additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras with a size exceeding 5μm even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects of the image corners are limited but still available. Remarkable angular affine systematic image errors can be seen especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but it can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors

  6. Optical low-cost and portable arrangement for full field 3D displacement measurement using a single camera

    Science.gov (United States)

    López-Alba, E.; Felipe-Sesé, L.; Schmeer, S.; Díaz, F. A.

    2016-11-01

    In the current paper, an optical low-cost system for 3D displacement measurement based on a single camera and 3D digital image correlation is presented. The conventional 3D-DIC set-up based on two synchronized cameras is compared with a proposed pseudo-stereo portable system that employs a mirror arrangement integrated into a single device, yielding a novel, handy and flexible instrument for use in many scenarios. The proposed optical system splits the image captured by the camera into two stereo views of the object. In order to validate this new approach and quantify its uncertainty compared to traditional 3D-DIC systems, rigid-body in-plane and out-of-plane displacement experiments have been performed and analyzed. The differences between the two systems have been studied employing an image decomposition technique which performs a full-image comparison. Results over the full field of view are therefore compared with those obtained using a stereoscopic system and 3D-DIC, showing that the proposed device yields accurate results unaffected by any distortion or aberration produced by the mirrors. Finally, the adaptability of the proposed system and its accuracy have been tested by performing quasi-static and dynamic experiments using a silicone specimen under high deformation. Results have been compared and validated with those obtained from a conventional stereoscopic system, showing an excellent level of agreement.

  7. RPC Stereo Processor (rsp) - a Software Package for Digital Surface Model and Orthophoto Generation from Satellite Stereo Imagery

    Science.gov (United States)

    Qin, R.

    2016-06-01

    Large-scale Digital Surface Models (DSMs) are very useful for many geoscience and urban applications. Recently developed dense image matching methods have popularized the use of image-based very high resolution DSMs. Many commercial/public tools that implement matching methods are available for perspective images, but handy tools for satellite stereo images are rare. In this paper, a software package, the RPC (rational polynomial coefficient) stereo processor (RSP), is introduced for this purpose. RSP implements a full pipeline of DSM and orthophoto generation based on RPC-modelled satellite imagery (level 1+), including level 2 rectification, geo-referencing, point cloud generation, pan-sharpening, DSM resampling and ortho-rectification. A modified hierarchical semi-global matching method is used as the current matching strategy. Due to its high memory efficiency and optimized implementation, RSP can be used on a normal PC to produce large-format DSMs and orthophotos. This tool was developed for internal use, and may be acquired by researchers for academic and non-commercial purposes to promote 3D remote sensing applications.
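
RSP's matching strategy is a modified hierarchical semi-global matching. The core textbook SGM cost-aggregation recurrence for a single path along one scanline can be sketched as follows (a generic illustration, not RSP's actual implementation):

```python
import numpy as np

def sgm_aggregate_scanline(cost, p1=1.0, p2=8.0):
    """Semi-global matching cost aggregation along one scanline (one path).
    cost: (width, num_disparities) pixel-wise matching cost.
    Implements L(x,d) = C(x,d) + min(L(x-1,d), L(x-1,d-1)+P1, L(x-1,d+1)+P1,
                                     min_d' L(x-1,d') + P2) - min_d' L(x-1,d')."""
    w, nd = cost.shape
    agg = np.empty_like(cost, dtype=float)
    agg[0] = cost[0]
    for x in range(1, w):
        prev = agg[x - 1]
        best = prev.min()
        from_below = np.concatenate(([np.inf], prev[:-1])) + p1   # from d-1
        from_above = np.concatenate((prev[1:], [np.inf])) + p1    # from d+1
        jump = np.full(nd, best + p2)                             # from any d'
        agg[x] = cost[x] + np.minimum.reduce([prev, from_below, from_above, jump]) - best
    return agg

# Toy scanline: the true disparity is 2 everywhere, but one noisy pixel
# prefers disparity 0 in its raw cost; aggregation smooths the outlier away.
cost = np.ones((6, 5))
cost[:, 2] = 0.0
cost[3] = [0.0, 1.0, 0.4, 1.0, 1.0]
disparity = sgm_aggregate_scanline(cost).argmin(axis=1)
```

Full SGM sums such aggregated costs over several path directions before the winner-takes-all disparity selection.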

  8. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time-domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. The individual cameras of the device stand on a hexapod mount that is fully capable of sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates problems implied by unique elements, but the redundancy of the hexapod allows smooth operation even if one or two of the legs are stuck. In addition, it can calibrate itself using observed stars, independently of both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics are designed at our institute, Konkoly Observatory. Currently, our instrument is in its testing phase with an operating hexapod and a reduced number of cameras.

  9. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  10. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event driven imaging which is capable of focusing image readout to Regions of Interests (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers but in the future more sophisticated methods might also be defined. The camera provides 444 Hz frame rate at full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices for example at ASDEX Upgrade and COMPASS with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.

  11. Near Real-Time Estimation of Super-Resolved Depth and All-In-Focus Images from a Plenoptic Camera Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    J. P. Lüke

    2010-01-01

    Full Text Available Depth range cameras are a promising solution for the 3DTV production chain. The generation of color images with their accompanying depth value simplifies the transmission bandwidth problem in 3DTV and yields a direct input for autostereoscopic displays. Recent developments in plenoptic video-cameras make it possible to introduce 3D cameras that operate similarly to traditional cameras. The use of plenoptic cameras for 3DTV has some benefits with respect to 3D capture systems based on dual stereo cameras since there is no need for geometric and color calibration or frame synchronization. This paper presents a method for simultaneously recovering depth and all-in-focus images from a plenoptic camera in near real time using graphics processing units (GPUs. Previous methods for 3D reconstruction using plenoptic images suffered from the drawback of low spatial resolution. A method that overcomes this deficiency is developed on parallel hardware to obtain near real-time 3D reconstruction with a final spatial resolution of 800×600 pixels. This resolution is suitable as an input to some autostereoscopic displays currently on the market and shows that real-time 3DTV based on plenoptic video-cameras is technologically feasible.

  12. Streak cameras and their applications

    International Nuclear Information System (INIS)

    Bernet, J.M.; Imhoff, C.

    1987-01-01

    Over the last several years, development of various measurement techniques in the nanosecond and picosecond range has led to increased reliance on streak cameras. This paper will present the main electronic and optoelectronic performances of the Thomson-CSF TSN 506 cameras and their associated devices used to build an automatic image acquisition and processing system (NORMA). A brief survey of the diversity and spread of the use of high-speed electronic cinematography will be illustrated by a few typical applications [fr]

  13. A novel, fast and efficient single-sensor automatic sleep-stage classification based on complementary cross-frequency coupling estimates.

    Science.gov (United States)

    Dimitriadis, Stavros I; Salis, Christos; Linden, David

    2018-04-01

    Limitations of the manual scoring of polysomnograms, which include data from electroencephalogram (EEG), electro-oculogram (EOG), electrocardiogram (ECG) and electromyogram (EMG) channels have long been recognized. Manual staging is resource intensive and time consuming, and thus considerable effort must be spent to ensure inter-rater reliability. As a result, there is a great interest in techniques based on signal processing and machine learning for a completely Automatic Sleep Stage Classification (ASSC). In this paper, we present a single-EEG-sensor ASSC technique based on the dynamic reconfiguration of different aspects of cross-frequency coupling (CFC) estimated between predefined frequency pairs over 5 s epoch lengths. The proposed analytic scheme is demonstrated using the PhysioNet Sleep European Data Format (EDF) Database with repeat recordings from 20 healthy young adults. We validate our methodology in a second sleep dataset. We achieved very high classification sensitivity, specificity and accuracy of 96.2 ± 2.2%, 94.2 ± 2.3%, and 94.4 ± 2.2% across 20 folds, respectively, and also a high mean F1 score (92%, range 90-94%) when a multi-class Naive Bayes classifier was applied. High classification performance has been achieved also in the second sleep dataset. Our method outperformed the accuracy of previous studies not only on different datasets but also on the same database. Single-sensor ASSC makes the entire methodology appropriate for longitudinal monitoring using wearable EEG in real-world and laboratory-oriented environments. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
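
Cross-frequency coupling is commonly quantified as phase-amplitude coupling between a slow band's phase and a fast band's amplitude. A minimal mean-vector-length estimate (Canolty-style; an illustration, not the paper's exact CFC estimator, with a crude FFT brick-wall filter standing in for a proper band-pass):

```python
import numpy as np

def fft_bandpass(x, fs, lo, hi):
    """Crude FFT brick-wall band-pass filter (illustration only)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))

def analytic_signal(x):
    """Analytic signal via the FFT (numpy-only stand-in for scipy.signal.hilbert)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def pac_mvl(x, fs, phase_band, amp_band):
    """Mean vector length: |mean(A_fast(t) * exp(i * phi_slow(t)))|."""
    phi = np.angle(analytic_signal(fft_bandpass(x, fs, *phase_band)))
    amp = np.abs(analytic_signal(fft_bandpass(x, fs, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phi)))

# Synthetic EEG-like test: 60 Hz amplitude modulated by 6 Hz phase (coupled)
# versus a plain superposition of the two tones (uncoupled).
fs = 256.0
t = np.arange(0, 20, 1 / fs)
slow_phase = 2 * np.pi * 6.0 * t
coupled = np.sin(slow_phase) + (1 + np.cos(slow_phase)) * np.sin(2 * np.pi * 60.0 * t)
uncoupled = np.sin(slow_phase) + np.sin(2 * np.pi * 60.0 * t)
```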

  14. LOW COST EMBEDDED STEREO SYSTEM FOR UNDERWATER SURVEYS

    Directory of Open Access Journals (Sweden)

    M. M. Nawaf

    2017-11-01

    Full Text Available This paper provides details of both the hardware and software conception and realization of a hand-held stereo embedded system for underwater imaging. The designed system can run most image processing techniques smoothly in real time. The developed functions provide direct visual feedback on the quality of the captured images, which helps in taking appropriate actions regarding movement speed and lighting conditions. The proposed functionalities can easily be customized or upgraded, and new functions can easily be added thanks to the available supported libraries. Furthermore, by connecting the designed system to a more powerful computer, real-time visual odometry can run on the captured images to provide live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy. The system was tested in a real context and demonstrated its robustness and promising further perspectives.

  15. Variational stereo imaging of oceanic waves with statistical constraints.

    Science.gov (United States)

    Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise

    2013-11-01

    An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with other elements presented to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.

  16. Low Cost Embedded Stereo System for Underwater Surveys

    Science.gov (United States)

    Nawaf, M. M.; Boï, J.-M.; Merad, D.; Royer, J.-P.; Drap, P.

    2017-11-01

    This paper provides details of both the hardware and software conception and realization of a hand-held stereo embedded system for underwater imaging. The designed system can run most image processing techniques smoothly in real time. The developed functions provide direct visual feedback on the quality of the captured images, which helps in taking appropriate actions regarding movement speed and lighting conditions. The proposed functionalities can easily be customized or upgraded, and new functions can easily be added thanks to the available supported libraries. Furthermore, by connecting the designed system to a more powerful computer, real-time visual odometry can run on the captured images to provide live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy. The system was tested in a real context and demonstrated its robustness and promising further perspectives.

  17. Daily-repeat stereo monitoring from formation flying

    Science.gov (United States)

    Wu, An-Ming

    2013-01-01

    Three satellites in formation flying have the flexibility to monitor a target on the ground and its vicinity in space, and can even achieve a stereo view of any object. We consider each satellite orbit to differ slightly from a daily-repeat circular Sun-synchronous orbit in inclination, right ascension of the ascending node, argument of perigee, and mean anomaly. According to the linearized orbit equation, a formation forming a triangle tilted with respect to the equatorial plane can be constructed. A Sun-synchronous formation is then obtained through a rotation. We investigate the maintenance cost by evaluating the delta-V required for the triangle formation under perturbations, using PID autonomous control of the nonlinear equation of motion. With reference to the relative position with respect to the formation centroid, the formation configuration can be maintained with little delta-V.

  18. Critical factors in SEM 3D stereo microscopy

    DEFF Research Database (Denmark)

    Marinello, F.; Bariano, P.; Savio, E.

    2008-01-01

    This work addresses dimensional measurements performed with the scanning electron microscope (SEM) using 3D reconstruction of surface topography through stereo-photogrammetry. The paper presents both theoretical and experimental investigations on the effects of instrumental variables and measurement parameters on reconstruction accuracy. Investigations were performed on a novel sample, specifically developed and implemented for the tests. The description is based on the model function introduced by Piazzesi and adapted for eucentrically tilted stereopairs. Two main classes of influencing factors are recognized: the first is related to the measurement operation and the instrument set-up; the second concerns the quality of scanned images and represents the major criticality in the application of SEMs for 3D characterization.

  19. Perceptual coding of stereo endoscopy video for minimally invasive surgery

    Science.gov (United States)

    Bartoli, Guido; Menegaz, Gloria; Yang, Guang Zhong

    2007-03-01

    In this paper, we propose a compression scheme that is tailored to stereo-laparoscope sequences. The inter-frame correlation is modeled by the deformation field obtained by elastic registration between two subsequent frames and exploited for prediction of the left sequence. The right sequence is lossy-encoded by prediction from the corresponding left images. Wavelet-based coding is applied to both the deformation vector fields and the residual images. The resulting system supports spatio-temporal scalability while providing lossless performance. The implementation of the wavelet transform by integer lifting ensures low computational complexity, thus reducing the required run-time memory allocation and enabling online implementation. Extensive psychovisual tests were performed for system validation and characterization with respect to the MPEG4 standard for video coding. Results are very encouraging: the PSVC system provides the functionalities that make it suitable for PACS while providing a good trade-off between usability and performance in lossy mode.

  20. Micropump and venous valve by micro stereo lithography

    Science.gov (United States)

    Varadan, Vijay K.; Varadan, Vasundara V.

    2000-06-01

    Micro Stereo Lithography (MSL) is a poor man's LIGA for fabricating high aspect ratio MEMS devices in UV curable semiconducting polymers using either two computer-controlled low inertia galvanometric mirrors with the aid of focusing lens or an array of optical fibers. For 3D MEMS devices, the polymers need to have conductive and possibly piezoelectric or ferroelectric properties. Such polymers are being developed at Penn State resulting in microdevices for fluid and drug delivery. Applications may include implanted medical delivery system, artificial heart valves, chemical and biological instruments, fluid delivery in engines, pump coolants and refrigerants for local cooling of electronic components. With the invention of organic thin film transistor, now it is possible to fabricate 3D polymeric MEMS devices with built-in-electronics similar to silicon based microelectronics.

  1. Deformation analysis of a sinkhole in Thuringia using multi-temporal multi-view stereo 3D reconstruction data

    Science.gov (United States)

    Petschko, Helene; Goetz, Jason; Schmidt, Sven

    2017-04-01

    Sinkholes are a serious threat to life, personal property, and infrastructure in large parts of Thuringia. Over 9000 sinkholes have been documented by the Geological Survey of Thuringia; they are caused by collapsing hollows that formed due to solution processes within the local bedrock. However, little is known about surface processes and their dynamics at the flanks of a sinkhole once it has formed. These processes are of high interest, as they might lead to dangerous situations at or within the vicinity of the sinkhole. Our objective was to analyze these deformations over time in 3D by applying terrestrial photogrammetry with a simple DSLR camera. Within this study, we analyzed deformations within a sinkhole close to Bad Frankenhausen (Thuringia) using terrestrial photogrammetry and multi-view stereo 3D reconstruction to obtain a 3D point cloud describing the morphology of the sinkhole. This was done over multiple data collection campaigns during a 6-month period. The photos of the sinkhole were taken with a Nikon D3000 SLR camera. For the comparison of the point clouds, the Multiscale Model to Model Comparison (M3C2) plugin of the software CloudCompare was used. It allows one to apply advanced methods of point cloud difference calculation that take the co-registration error between two point clouds into account when assessing the significance of the calculated difference (given in meters). Three Styrofoam cuboids of known dimensions (16 cm wide, 29 cm high, 11.5 cm deep) were placed within the sinkhole to test the accuracy of the point cloud difference calculation. The multi-view stereo 3D reconstruction was performed with Agisoft PhotoScan. Preliminary analysis indicates that about 26% of the sinkhole showed changes exceeding the co-registration error of the point clouds. The areas of change can mainly be detected on the flanks of the sinkhole and on an earth pillar that formed in the center of the sinkhole. These changes describe…
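    The significance-tested cloud differencing can be sketched with a brute-force nearest-neighbour comparison (a crude stand-in for the M3C2 idea, with an assumed 2 cm co-registration error; the real plugin projects differences along local normals):

    ```python
    import numpy as np

    def cloud_change(ref, comp, reg_error=0.02):
        """Nearest-neighbour distance from each reference point to the compared
        cloud; flags changes exceeding the co-registration error (in metres)."""
        # brute-force NN search: fine for small demonstration clouds
        d = np.linalg.norm(ref[:, None, :] - comp[None, :, :], axis=2)
        nn = d.min(axis=1)
        return nn, nn > reg_error

    ref = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
    comp = ref + np.array([0, 0, 0.05])   # a uniform 5 cm uplift of the surface
    dist, significant = cloud_change(ref, comp)
    assert significant.all()              # 5 cm exceeds the 2 cm threshold
    ```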

  2. Camera-based speckle noise reduction for 3-D absolute shape measurements.

    Science.gov (United States)

    Zhang, Hao; Kuschmierz, Robert; Czarske, Jürgen; Fischer, Andreas

    2016-05-30

    Simultaneous position and velocity measurements enable absolute 3-D shape measurements of fast rotating objects, for instance for monitoring the cutting process in a lathe. Laser Doppler distance sensors enable simultaneous position and velocity measurements with a single sensor head by evaluating the scattered light signals. However, the superposition of several speckles with equal Doppler frequency but random phase on the photodetector results in an increased velocity and shape uncertainty. In this paper, we present a novel image evaluation method that overcomes the uncertainty limitations due to the speckle effect. For this purpose, the scattered light is detected with a camera instead of single photodetectors. Thus, the Doppler frequency from each speckle can be evaluated separately, and the velocity uncertainty decreases with the square root of the number of camera lines. A reduction of the velocity uncertainty by one order of magnitude is verified by numerical simulations and experimental results. As a result, the measurement uncertainty of the absolute shape is no longer limited by the speckle effect.
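    The claimed scaling, velocity uncertainty shrinking with the square root of the number of camera lines, can be checked with a toy Monte Carlo simulation (synthetic Gaussian frequency noise, not the actual sensor model):

    ```python
    import random
    import statistics

    random.seed(0)
    TRUE_F = 1000.0  # Doppler frequency in arbitrary units

    def uncertainty(n_lines, trials=2000):
        """Std. dev. of the mean of n_lines independent noisy frequency
        estimates, one per camera line (noise sigma = 5.0)."""
        means = [statistics.fmean(random.gauss(TRUE_F, 5.0) for _ in range(n_lines))
                 for _ in range(trials)]
        return statistics.stdev(means)

    s1, s100 = uncertainty(1), uncertainty(100)
    # averaging 100 lines should shrink the uncertainty by about sqrt(100) = 10
    assert 7 < s1 / s100 < 14
    ```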

  3. Robust surface reconstruction by design-guided SEM photometric stereo

    Science.gov (United States)

    Miyamoto, Atsushi; Matsuse, Hiroki; Koutaki, Gou

    2017-04-01

    We present a novel approach that addresses the blind reconstruction problem in scanning electron microscope (SEM) photometric stereo for complicated semiconductor patterns. In our previous work, we developed a bootstrapping de-shadowing and self-calibration (BDS) method, which automatically calibrates the parameter of the gradient measurement formulas and resolves shadowing errors for estimating an accurate three-dimensional (3D) shape and underlying shadowless images. Experimental results on 3D surface reconstruction demonstrated the significance of the BDS method for simple shapes, such as an isolated line pattern. However, we found that complicated shapes, such as line-and-space (L&S) and multilayered patterns, produce deformed and inaccurate measurement results. This problem is due to brightness fluctuations in the SEM images, which are mainly caused by energy fluctuations of the primary electron beam, variations in the electronic expanse inside a specimen, and electrical charging of specimens. Although these are essential difficulties in SEM photometric stereo, it is difficult to accurately model all the complicated physical phenomena of electronic behavior. We therefore improved the robustness of the surface reconstruction to deal with these practical difficulties for complicated shapes. Here, design data are useful clues as to the pattern layout and layer information of integrated semiconductors. We used the design data as a guide for the measured shape and incorporated into the objective function of the BDS method a geometrical constraint term that evaluates the difference between the measured and designed shapes. Because the true shape does not necessarily correspond to the designed one, we use an iterative scheme to develop proper guide patterns, yielding a less distorted and more accurate 3D shape after convergence. Extensive experiments on real image data demonstrate the robustness and effectiveness…
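    The design-guided constraint can be illustrated with a toy per-pixel objective (a hypothetical simplification; the abstract does not give the BDS objective in full): minimizing ||z − z_meas||² + λ||z − z_design||² pulls the estimate toward the design layout.

    ```python
    import numpy as np

    def guided_height(z_meas, z_design, lam=0.5):
        """Per-pixel closed form of min ||z - z_meas||^2 + lam*||z - z_design||^2,
        a toy version of adding a design-data constraint term to the objective."""
        return (z_meas + lam * z_design) / (1.0 + lam)

    z_meas = np.array([0.0, 0.9, 2.5, 1.1])    # distorted measurement (a.u.)
    z_design = np.array([0.0, 1.0, 2.0, 1.0])  # layout taken from design data
    z = guided_height(z_meas, z_design)
    # the regularized estimate is never farther from the design than the raw one
    assert np.all(np.abs(z - z_design) <= np.abs(z_meas - z_design) + 1e-12)
    ```

    In the paper's iterative scheme the guide itself is refined, since the true shape need not match the design exactly; here the guide is held fixed for clarity.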

  4. Stereo-hologram in discrete depth of field (Conference Presentation)

    Science.gov (United States)

    Lee, Kwanghoon; Park, Min-Chul

    2017-05-01

    In holographic space, a continuous object space can be divided into several discrete spaces, each satisfying the same depth of field (DoF). In a wearable holographic device, in particular, this concept can be applied to the macroscopic field, in contrast to microscopy: the macroscopic case does not need high depth resolution, because the depth-discriminating power of the eye in the human visual system is lower than the optical power available in the microscopic field. Therefore, a continuous but discretized depth of field (DDoF) can represent the whole object space by a number of planes, each sampling a region according to its DoF. Each DoF plane has to account for occlusion among object areas in its region, so as to reproduce the occlusion phenomena induced along the visual axis within the eye's field of view. This yields a naturally perceived scene in the recognition process, even though the combined, discontinuous DDoF regions stand in for the continuous object space. The DDoF approach thus offers advantages such as reduced computation time for generating the hologram and for its reconstruction. This work mainly addresses the properties of several factors required in a stereo-hologram head-mounted display (HMD): the stereoscopic DoF as a function of convergence, the least number of DDoF planes needed in normal viewing circumstances (within 10,000 mm), and the time saved by the proposed method over the existing holographic pipeline. Consequently, this approach could be applied directly to the stereo-hologram HMD field to realize real-time holographic imaging.

  5. Application of Stereo PIV on a Supersonic Parachute Model

    Science.gov (United States)

    Wernet, Mark P.; Locke, Randy J.; Wroblewski, Adam; Sengupta, Anita

    2009-01-01

    The Mars Science Laboratory (MSL) is the next step in NASA's Mars Exploration Program, currently scheduled for 2011. The spacecraft's descent into the Martian atmosphere will be slowed from Mach 2 to subsonic speeds via a large parachute system, with final landing under propulsive control. A Disk-Gap-Band (DGB) parachute will be used on MSL, similar to the designs used on previous missions; however, the DGB parachute used by MSL will be larger (21.5 m) than in any previous mission due to the weight of the payload and landing site requirements. The MSL parachute will also deploy at a higher Mach number (M ≈ 2) than previous parachutes, which can lead to instabilities in canopy performance. Both the increased size of the DGB beyond previously demonstrated configurations and deployment at higher Mach numbers add uncertainty to the deployment, structural integrity, and performance of the parachute. In order to verify the performance of the DGB on MSL, experimental testing, including acquisition of stereo Particle Image Velocimetry (PIV) measurements, was required to validate CFD predictions of the parachute performance. A rigid model of the DGB parachute was tested in the 10x10 foot wind tunnel at GRC. Prior to the MSL tests, a PIV system had never been used in the 10x10 wind tunnel. In this paper we discuss some of the technical challenges overcome in implementing a stereo PIV system with a 750x400 mm field of view in the 10x10 wind tunnel facility, and results from the MSL hardshell canopy tests.

  6. A computer implementation of a theory of human stereo vision.

    Science.gov (United States)

    Grimson, W E

    1981-05-12

    Recently, Marr & Poggio (1979) presented a theory of human stereo vision. An implementation of that theory is presented, consisting of five steps. (i) The left and right images are each filtered with masks of four sizes that increase with eccentricity; the shape of these masks is given by ∇²G, the Laplacian of a Gaussian function. (ii) Zero crossings in the filtered images are found along horizontal scan lines. (iii) For each mask size, matching takes place between zero crossings of the same sign and roughly the same orientation in the two images, for a range of disparities up to about the width of the mask's central region. Within this disparity range, it can be shown that false targets pose only a simple problem. (iv) The output of the wide masks can control vergence movements, thus causing small masks to come into correspondence. In this way, the matching process gradually moves from dealing with large disparities at low resolution to dealing with small disparities at high resolution. (v) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2 1/2-dimensional sketch. To support the adequacy of the Marr-Poggio model of human stereo vision, the implementation was tested on a wide range of stereograms from the human stereopsis literature. The performance of the implementation is illustrated and compared with human perception. The statistical assumptions made by Marr & Poggio are also supported by comparison with statistics found in practice. Finally, the process of implementing the theory has led to the clarification and refinement of a number of details within the theory; these are discussed in detail.
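    The filtering and zero-crossing front end of steps (i)-(ii) can be sketched in pure NumPy (a toy version: the ∇²G kernel is sampled directly at a single scale, and the eccentricity-dependent mask sizes are omitted):

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def log_kernel(sigma, size):
        """Sampled Laplacian-of-Gaussian mask, forced to zero mean so that
        flat image regions give (near) zero response."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        r2 = xx ** 2 + yy ** 2
        k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
        return k - k.mean()

    def zero_crossings_row(row):
        """Indices where the filtered signal changes sign along a scan line."""
        return np.flatnonzero(np.sign(row[:-1]) * np.sign(row[1:]) < 0)

    img = np.zeros((21, 21))
    img[:, 10:] = 1.0                       # vertical step edge at column 10
    k = log_kernel(sigma=1.5, size=9)
    win = sliding_window_view(img, k.shape)  # 'valid' 2D correlation
    resp = np.einsum('ijkl,kl->ij', win, k)  # kernel is symmetric, so = convolution
    cols = zero_crossings_row(resp[resp.shape[0] // 2])
    assert len(cols) >= 1  # the step edge produces a zero crossing
    ```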

  7. The Camera Comes to Court.

    Science.gov (United States)

    Floren, Leola

    After the Lindbergh kidnapping trial in 1935, the American Bar Association sought to eliminate electronic equipment from courtroom proceedings. Eventually, all but two states adopted regulations applying that ban to some extent, and a 1965 Supreme Court decision encouraged the banning of television cameras at trials as well. Currently, some states…

  8. High-speed holographic camera

    International Nuclear Information System (INIS)

    Novaro, Marc

    The high-speed holographic camera is a diagnostic instrument using holography as an information storage medium. It allows ten holograms of an object to be taken, with exposure times of 1.5 ns, separated in time by 1 or 2 ns. In order to obtain these results easily, no moving parts are used in the set-up.

  9. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    Just like art historians have focused on e.g. composition or lighting, this dissertation takes a single stylistic parameter as its object of study: camera movement. Within film studies this localized avenue of middle-level research has become increasingly viable under the aegis of a perspective k...

  10. The LSST camera system overview

    Science.gov (United States)

    Gilmore, Kirk; Kahn, Steven; Nordby, Martin; Burke, David; O'Connor, Paul; Oliver, John; Radeka, Veljko; Schalk, Terry; Schindler, Rafe

    2006-06-01

    The LSST camera is a wide-field optical (0.35-1 um) imager designed to provide a 3.5-degree FOV with better than 0.2 arcsecond sampling. The detector format will be a circular mosaic providing approximately 3.2 gigapixels per image. The camera includes a filter mechanism and shuttering capability. It is positioned in the middle of the telescope, where cross-sectional area is constrained by optical vignetting and heat dissipation must be controlled to limit thermal gradients in the optical beam. The fast f/1.2 beam will require tight tolerances on the focal plane mechanical assembly. The focal plane array operates at a temperature of approximately -100°C to achieve the desired detector performance. The focal plane array is contained within an evacuated cryostat, which incorporates detector front-end electronics and thermal control. The cryostat lens serves as an entrance window and vacuum seal for the cryostat. Similarly, the camera body lens serves as an entrance window and gas seal for the camera housing, which is filled with a suitable gas to provide the operating environment for the shutter and filter change mechanisms. The filter carousel can accommodate 5 filters, each 75 cm in diameter, for rapid exchange without external intervention.
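    The quoted pixel count can be sanity-checked from the stated FOV and sampling (assuming, purely for this back-of-the-envelope check, a circular mosaic inscribed in the square field):

    ```python
    import math

    fov_deg, sampling_arcsec = 3.5, 0.2
    pixels_across = fov_deg * 3600 / sampling_arcsec      # 63,000 pixels across the FOV
    circular_mosaic = math.pi / 4 * pixels_across ** 2    # area factor for a circle
    assert 3.0e9 < circular_mosaic < 3.3e9                # ~3.2 gigapixels, as stated
    ```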

  11. Toy Cameras and Color Photographs.

    Science.gov (United States)

    Speight, Jerry

    1979-01-01

    The technique of using toy cameras for both black-and-white and color photography in the art class is described. The author suggests that expensive equipment can limit the growth of a beginning photographer by emphasizing technique and equipment instead of in-depth experience with composition fundamentals and ideas. (KC)

  12. Gamma camera with reflectivity mask

    International Nuclear Information System (INIS)

    Stout, K.J.

    1980-01-01

    In accordance with the present invention there is provided a radiographic camera comprising: a scintillator; a plurality of photodetectors positioned to face said scintillator; a plurality of masked regions formed upon a face of said scintillator opposite said photodetectors and positioned coaxially with respective ones of said photodetectors for decreasing the amount of internal reflection of optical photons generated within said scintillator. (auth)

  13. Robotic Arm Camera on Mars with Lights On

    Science.gov (United States)

    2008-01-01

    This image is a composite view of NASA's Phoenix Mars Lander's Robotic Arm Camera (RAC) with its lights on, as seen by the lander's Surface Stereo Imager (SSI). This image combines images taken on the afternoon of Phoenix's 116th Martian day, or sol (September 22, 2008). The RAC is about 8 centimeters (3 inches) tall. The SSI took images of the RAC to test both the light-emitting diodes (LEDs) and cover function. Individual images were taken in three SSI filters that correspond to the red, green, and blue LEDs one at a time. When combined, it appears that all three sets of LEDs are on at the same time. This composite image is not true color. The streaks of color extending from the LEDs are an artifact from saturated exposure. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  14. A multi-view stereo based 3D hand-held scanning system using visual-inertial navigation and structured light

    Directory of Open Access Journals (Sweden)

    Ayaz Shirazi Muhammad

    2015-01-01

    This paper describes the implementation of a 3D handheld scanning system based on visual-inertial pose estimation and the structured light technique. The 3D scanning system is composed of a stereo camera, an inertial navigation system (INS), and an illumination projector, and collects high-resolution data for close-range applications. The proposed algorithm for visual pose estimation is based either on feature matching or on an accurate target object. The integration of the INS enables the scanning system to provide fast and reliable pose estimation supporting the visual pose estimates. A block matching algorithm was used to render two-view 3D reconstruction. For the multi-view 3D approach, rough registration and final alignment of the point clouds using the iterative closest point algorithm further improve the scanning accuracy. The proposed system is potentially advantageous for the generation of 3D models in biomedical applications.
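    A minimal version of the block matching step (a generic sum-of-absolute-differences search along a scan line, not the authors' exact implementation) might look like:

    ```python
    import numpy as np

    def sad_disparity(left, right, x, y, block=3, max_d=8):
        """Disparity at (x, y) by minimising the sum of absolute differences
        between a block in the left image and shifted blocks in the right."""
        h = block // 2
        ref = left[y - h:y + h + 1, x - h:x + h + 1]
        costs = []
        for d in range(max_d + 1):
            cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1]
            costs.append(np.abs(ref - cand).sum())
        return int(np.argmin(costs))

    rng = np.random.default_rng(1)
    right = rng.random((20, 40))
    left = np.roll(right, 5, axis=1)   # left image shifted by a disparity of 5
    assert sad_disparity(left, right, x=20, y=10) == 5
    ```

    Real systems add sub-pixel interpolation and consistency checks; this sketch only shows the cost-minimisation core.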

  15. Stereo photograph of atomic arrangement by circularly-polarized-light two-dimensional photoelectron spectroscopy

    CERN Document Server

    Daimon, H

    2003-01-01

    A stereo photograph of an atomic arrangement was obtained for the first time. The stereo photograph was displayed directly on the screen of a display-type spherical-mirror analyzer, without any computer-aided conversion process. This stereo photography was realized by taking advantage of the circular dichroism in the photoelectron angular distribution caused by the reversal of the orbital angular momentum of photoelectrons. The azimuthal shifts of forward-focusing peaks in a photoelectron angular distribution pattern taken with left- and right-helicity light in a special arrangement are the same as the parallaxes in a stereo view of atoms. Hence a stereoscopic recognition of the three-dimensional atomic arrangement is possible when the left eye and the right eye simultaneously view the two images obtained with left- and right-helicity light, respectively.

  16. A study on the effect of different image centres on stereo triangulation accuracy

    CSIR Research Space (South Africa)

    De Villiers, J

    2015-11-01

    Full Text Available This paper evaluates the effect of mixing the distortion centre, principal point and arithmetic image centre on the distortion correction, focal length determination and resulting real-world stereo vision triangulation. A robotic arm is used...

  17. NAMMA TWO-DIMENSIONAL STEREO PROBE AND CLOUD PARTICLE IMAGER V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The NAMMA Two-Dimensional Stereo Probe and Cloud Particle Imager dataset consists of data from two probes used to measure the size, shape, and concentration of cloud...

  18. NAMMA TWO-DIMENSIONAL STEREO PROBE AND CLOUD PARTICLE IMAGER V1

    Data.gov (United States)

    National Aeronautics and Space Administration — This Cloud Microphysics dataset consists of data from two probes used to measure the size, shape, and concentration of cloud particles; the two-dimensional stereo...

  19. Using temporal seeding to constrain the disparity search range in stereo matching

    CSIR Research Space (South Africa)

    Ndhlovu, T

    2011-11-01

    Full Text Available In a stereo image sequence, finding feature correspondences is normally done for every frame without taking temporal information into account. Reusing previous computations can add valuable information. A temporal seeding technique is developed...

  20. MISR Level 2 FIRSTLOOK TOA/Cloud Stereo parameters V001

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the Level 2 FIRSTLOOK TOA/Cloud Stereo Product. It contains the Stereoscopically Derived winds, heights and cloud mask along with associated data, produced...

  1. MISR L2 TOA/Cloud Stereo Product subset for the ICARTT region V002

    Data.gov (United States)

    National Aeronautics and Space Administration — MISR Level 2 TOA/Cloud Stereo Product containing the Stereoscopically Derived Cloud Mask (SDCM), cloud winds, Reflecting Level Reference Altitude (RLRA), with...

  2. PHOENIX MARS SURFACE STEREO IMAGER 5 INCID OVER FLX SCI V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Surface Stereo Imager (SSI) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This SSI Imaging Science RDR...

  3. PHOENIX MARS SURFACE STEREO IMAGER 5 NORMAL OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Surface Stereo Imager (SSI) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This SSI Imaging Operations RDR...

  4. PHOENIX MARS SURFACE STEREO IMAGER 3 RADIOMETRIC SCI V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Surface Stereo Imager (SSI) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This SSI Imaging Science RDR...

  5. PHOENIX MARS SURFACE STEREO IMAGER 4 LINEARIZED OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Surface Stereo Imager (SSI) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This SSI Imaging Operations RDR...

  6. BOREAS RSS-08 SSA IFC-3 Digitized Stereo Imagery at the OBS, OA, and OJP Sites

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: The RSS08 team acquired stereo photography from the double-scaffold towers at the Southern Study Area (SSA), Old Black Spruce (OBS), Old Aspen (OA), and...

  7. MISR L2 TOA/Cloud Stereo Product subset for the SAMUM region V002

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the Level 2 TOA/Cloud Stereo Product. It contains the Stereoscopically Derived Cloud Mask (SDCM), cloud winds, Reflecting Level Reference Altitude (RLRA),...

  8. PHOENIX MARS SURFACE STEREO IMAGER 5 DISPARITY OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Surface Stereo Imager (SSI) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This SSI Imaging Operations RDR...

  9. PHOENIX MARS SURFACE STEREO IMAGER 5 XYZ OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Surface Stereo Imager (SSI) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This SSI Imaging Operations RDR...

  10. PHOENIX MARS SURFACE STEREO IMAGER 5 ROUGHNESS OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Surface Stereo Imager (SSI) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This SSI Imaging Operations RDR...

  11. PHOENIX MARS SURFACE STEREO IMAGER 2 EDR VERSION 1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Surface Stereo Imager (SSI) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This SSI Imaging Operations EDR...

  12. PHOENIX MARS SURFACE STEREO IMAGER 5 REACHABILITY OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Surface Stereo Imager (SSI) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This SSI Imaging Operations RDR...

  13. MISR L2 TOA/Cloud Stereo Product subset for the RICO region V002

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the Level 2 TOA/Cloud Stereo Product subset for the RICO region. It contains the Stereoscopically Derived Cloud Mask (SDCM), cloud winds, Reflecting Level...

  14. BOREAS RSS-08 SSA IFC-3 Digitized Stereo Imagery at the OBS, OA, and OJP Sites

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS08 team acquired stereo photography from the double-scaffold towers at the Southern Study Area (SSA), Old Black Spruce (OBS), Old Aspen (OA), and Old Jack...

  15. MISR L2 FIRSTLOOK TOA/Cloud Stereo Product subset for the ARCTAS region V001

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the Level 2 FIRSTLOOK TOA/Cloud Stereo Product subset for the ARCTAS region. It contains the Stereoscopically Derived winds, heights and cloud mask along...

  16. Neural disparity computation from IKONOS stereo imagery in the presence of occlusions

    Science.gov (United States)

    Binaghi, E.; Gallo, I.; Baraldi, A.; Gerhardinger, A.

    2006-09-01

    In computer vision, stereoscopic image analysis is a well-known technique capable of extracting the third (vertical) dimension. Building on this, the Remote Sensing (RS) community has devoted increasing effort to the exploitation of IKONOS one-meter resolution stereo imagery for high-accuracy 3D surface modelling and elevation data extraction. In previous works our team investigated the potential of neural adaptive learning to solve the correspondence problem in the presence of occlusions. In this paper we present an experimental evaluation of an improved version of the neural-based stereo matching method applied to IKONOS one-meter resolution stereo images affected by occlusion problems. Disparity maps generated with the proposed approach are compared with those obtained by an alternative stereo matching algorithm implemented in a (non-)commercial image processing software toolbox. To compare the competing disparity maps, quality metrics recommended by the evaluation methodology of Scharstein and Szeliski (2002, IJCV, 47, 7-42) are adopted.
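    The Scharstein-Szeliski methodology includes quality metrics such as the fraction of "bad" pixels whose disparity error exceeds a threshold; a minimal sketch on illustrative data:

    ```python
    import numpy as np

    def bad_pixel_rate(disp, gt, thresh=1.0):
        """Fraction of pixels whose disparity error exceeds thresh, one of the
        standard metrics for comparing competing disparity maps."""
        return float((np.abs(disp - gt) > thresh).mean())

    gt = np.full((4, 4), 10.0)        # toy ground-truth disparity map
    disp = gt.copy()
    disp[0, :2] = 13.0                # two wrong pixels out of 16
    assert bad_pixel_rate(disp, gt) == 2 / 16
    ```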

  17. The wavelet transform and the suppression theory of binocular vision for stereo image compression

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, W.D. Jr [Argonne National Lab., IL (United States); Kenyon, R.V. [Illinois Univ., Chicago, IL (United States)

    1996-08-01

    In this paper, a method for compression of stereo images is presented. The proposed scheme is a frequency-domain approach based on the suppression theory of binocular vision. By using the information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, in which the subbands convey the necessary frequency-domain information.
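    One level of such a multiresolution split can be sketched with a 2D Haar analysis (a minimal stand-in; the paper's actual filter bank is not specified in this abstract):

    ```python
    import numpy as np

    def haar_subbands(img):
        """One level of a 2D Haar analysis: returns the LL, LH, HL, HH
        subbands of an image with even dimensions."""
        a = (img[0::2] + img[1::2]) / 2   # rows: average (low-pass)
        d = (img[0::2] - img[1::2]) / 2   # rows: difference (high-pass)
        ll = (a[:, 0::2] + a[:, 1::2]) / 2
        lh = (a[:, 0::2] - a[:, 1::2]) / 2
        hl = (d[:, 0::2] + d[:, 1::2]) / 2
        hh = (d[:, 0::2] - d[:, 1::2]) / 2
        return ll, lh, hl, hh

    img = np.tile(np.arange(8.0), (8, 1))       # a purely horizontal ramp
    ll, lh, hl, hh = haar_subbands(img)
    # no vertical structure, so the vertical-detail subbands vanish
    assert np.allclose(hl, 0) and np.allclose(hh, 0)
    ```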

  18. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  19. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer functions of optical elements, as described by Scott (1959), to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
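    For a rough numerical feel, one commonly quoted diffraction-based optimum, Rayleigh's d ≈ 1.9·sqrt(f·λ) (a simpler rule of thumb than the paper's transfer-function technique), can be computed directly:

    ```python
    import math

    def optimal_pinhole_diameter(focal_length_m, wavelength_m=550e-9, k=1.9):
        """Classic diffraction-limited pinhole optimum d = k*sqrt(f*lambda);
        k ~ 1.9 is Rayleigh's often-quoted constant (conventions vary)."""
        return k * math.sqrt(focal_length_m * wavelength_m)

    d = optimal_pinhole_diameter(0.10)   # 100 mm pinhole-to-film distance
    assert 0.0003 < d < 0.0006           # roughly half a millimetre
    ```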

  20. Estimation of Aboveground Biomass Using Manual Stereo Viewing of Digital Aerial Photographs in Tropical Seasonal Forest

    Directory of Open Access Journals (Sweden)

    Katsuto Shimizu

    2014-11-01

    The objectives of this study are to: (1) evaluate the accuracy of tree height measurements made by manual stereo viewing on a computer display using digital aerial photographs, compared with airborne LiDAR height measurements; and (2) develop an empirical model to estimate stand-level aboveground biomass from variables derived from manual stereo viewing on the computer display, in a Cambodian tropical seasonal forest. We evaluate the observation error of tree height measured by manual stereo viewing against field measurements. RMSEs of tree height measurement with manual stereo viewing and LiDAR were 1.96 m and 1.72 m, respectively. Stand-level aboveground biomass is then regressed against tree height indices derived from the manual stereo viewing. We determined the best model for estimating aboveground biomass in terms of Akaike's information criterion; it used the mean height of the tallest five trees in each plot (R2 = 0.78; RMSE = 58.18 Mg/ha). In conclusion, manual stereo viewing on a computer display can measure tree height accurately and is useful for estimating aboveground stand biomass.
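    The regression step can be illustrated with a toy ordinary-least-squares fit (the plot values below are invented for illustration; only the model form, biomass against the mean height of the five tallest trees, follows the abstract):

    ```python
    import numpy as np

    # Hypothetical plots: mean height of five tallest trees (m) vs AGB (Mg/ha)
    h5 = np.array([18.0, 22.0, 25.0, 30.0, 34.0])
    agb = np.array([120.0, 180.0, 230.0, 320.0, 390.0])

    # Ordinary least squares: AGB = a*h5 + b
    A = np.vstack([h5, np.ones_like(h5)]).T
    (a, b), *_ = np.linalg.lstsq(A, agb, rcond=None)

    pred = a * h5 + b
    r2 = 1 - ((agb - pred) ** 2).sum() / ((agb - agb.mean()) ** 2).sum()
    assert r2 > 0.95  # a strong linear fit on this toy data
    ```

    In the study, competing height indices would each be fitted this way and compared via AIC, which penalises model complexity as well as fit.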

  1. The Use of Camera Traps in Wildlife

    OpenAIRE

    Yasin Uçarlı; Bülent Sağlam

    2013-01-01

    Camera traps are increasingly used in abundance and density estimates of wildlife species. Camera traps are a very good alternative to direct observation, particularly in steep terrain, in areas covered by dense vegetation, or for nocturnal species. The main reason for using camera traps is that they eliminate economic, personnel, and time costs by operating continuously and simultaneously at different points. Camera traps, which are motion- and heat-sensitive, can take a photo or video according to the mod...

  2. An Open Standard for Camera Trap Data

    NARCIS (Netherlands)

    Forrester, Tavis; O'Brien, Tim; Fegraus, Eric; Jansen, P.A.; Palmer, Jonathan; Kays, Roland; Ahumada, Jorge; Stern, Beth; McShea, William

    2016-01-01

    Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an

  3. A camera specification for tendering purposes

    International Nuclear Information System (INIS)

    Lunt, M.J.; Davies, M.D.; Kenyon, N.G.

    1985-01-01

    A standardized document is described which is suitable for sending to companies which are being invited to tender for the supply of a gamma camera. The document refers to various features of the camera, the performance specification of the camera, maintenance details, price quotations for various options and delivery, installation and warranty details. (U.K.)

  4. Imaging Asteroid 4 Vesta Using the Framing Camera

    Science.gov (United States)

    Keller, H. Uwe; Nathues, Andreas; Coradini, Angioletta; Jaumann, Ralf; Jorda, Laurent; Li, Jian-Yang; Mittlefehldt, David W.; Mottola, Stefano; Raymond, C. A.; Schroeder, Stefan E.

    2011-01-01

    The Framing Camera (FC) onboard the Dawn spacecraft serves a dual purpose. In addition to its central role as a prime science instrument, it is also used for the complex navigation of the ion-drive spacecraft. The CCD detector, with 1024 by 1024 pixels, provides the stability needed for a multiyear mission and meets high requirements of photometric accuracy over the wavelength band from 400 to 1000 nm, covered by 7 band-pass filters. Vesta will be observed from 3 orbit stages with image scales of 227, 63, and 17 m/px, respectively. The mapping of Vesta's surface at medium resolution will only be completed during the exit phase, when the north pole will be illuminated. A detailed pointing strategy will cover the surface at least twice at similar phase angles to provide stereo views for reconstruction of the topography. During approach, the phase function of Vesta was determined over a range of angles not accessible from Earth. This is the first step in deriving the photometric function of the surface. Combining the topography based on stereo tie points with the photometry in an iterative procedure will disclose details of the surface morphology at considerably smaller scales than the pixel scale. The 7 color filters are well positioned to provide information on the spectral slope in the visible, the depth of the strong pyroxene absorption band, and their variability over the surface. Cross-calibration with the VIR spectrometer, which extends into the near IR, will provide detailed maps of Vesta's surface mineralogy and physical properties. Georeferencing all these observations will result in a coherent and unique data set. During Dawn's approach and capture, FC has already demonstrated its performance. The strong variation observed by the Hubble Space Telescope can now be correlated with surface units and features. We will report on results obtained from images taken during survey mode covering the whole illuminated surface. Vesta is a planet-like differentiated body, but its surface…

  5. Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).

    Science.gov (United States)

    Jaramillo, Carlos; Valenti, Roberto G; Guo, Ling; Xiao, Jizhong

    2016-02-06

    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision systems under different circumstances.
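The abstract computes 3D information by triangulating back-projected rays from the two mirror views. As an illustration of that step (a standard midpoint triangulation, not the authors' exact sensor model), the closest point between two rays can be found by solving a small linear system:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two back-projected rays.

    Each ray is p(t) = o + t * d.  Minimizing |p1(t1) - p2(t2)|^2 over
    (t1, t2) yields a 2x2 linear system from the normal equations."""
    o1, o2 = np.asarray(o1, float), np.asarray(o2, float)
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    b = o2 - o1
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    t1, t2 = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    # The two closest points; their midpoint is the triangulated 3D point.
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

For noise-free rays that actually intersect, the midpoint coincides with the intersection; with noisy correspondences it is the least-squares compromise between the two rays.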

  6. Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs)

    Directory of Open Access Journals (Sweden)

    Carlos Jaramillo

    2016-02-01

    Full Text Available We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor’s projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision systems under different circumstances.

  7. Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom

    Science.gov (United States)

    Maurer, Calvin R., Jr.; Sauer, Frank; Hu, Bo; Bascle, Benedicte; Geiger, Bernhard; Wenzel, Fabian; Recchi, Filippo; Rohlfing, Torsten; Brown, Christopher R.; Bakos, Robert J.; Maciunas, Robert J.; Bani-Hashemi, Ali R.

    2001-05-01

    We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears an HMD that presents the augmented stereo view. The HMD is custom-fitted with two miniature color video cameras that capture a stereo view of the real-world scene. At this point we are concentrating specifically on cranial neurosurgery, so the images are of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter.
The initial

  8. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  9. A Statistical Study of Interplanetary Type II Bursts: STEREO Observations

    Science.gov (United States)

    Krupar, V.; Eastwood, J. P.; Magdalenic, J.; Gopalswamy, N.; Kruparova, O.; Szabo, A.

    2017-12-01

    Coronal mass ejections (CMEs) are the primary cause of the most severe and disruptive space weather events, such as solar energetic particle (SEP) events and geomagnetic storms at Earth. Interplanetary type II bursts are generated via the plasma emission mechanism by energetic electrons accelerated at CME-driven shock waves and hence identify CMEs with potential space weather impact. As CMEs propagate outward from the Sun, radio emissions are generated at progressively lower frequencies, corresponding to a decreasing ambient solar wind plasma density. We have performed a statistical study of 153 interplanetary type II bursts observed by the two STEREO spacecraft between March 2008 and August 2014. These events have been correlated with manually identified CMEs contained in the Heliospheric Cataloguing, Analysis and Techniques Service (HELCATS) catalogue. Our results confirm that faster CMEs are more likely to produce interplanetary type II radio bursts. We have compared observed frequency drifts with white-light observations to estimate angular deviations of type II burst propagation directions from radial. We have found that interplanetary type II bursts preferably arise from CME flanks. Finally, we discuss the visibility of radio emissions in relation to the CME propagation direction.
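The downward frequency drift described above follows from plasma emission occurring near the local electron plasma frequency, which falls with the ambient density as the shock moves outward. A minimal sketch of that relationship (the 1/r² density profile below is a hypothetical illustration, not taken from the HELCATS study):

```python
import math

def plasma_frequency_khz(n_e_cm3):
    """Electron plasma frequency, f_p ~ 8.98 * sqrt(n_e) kHz for n_e in cm^-3.
    Type II emission occurs near f_p (fundamental) or 2*f_p (harmonic)."""
    return 8.98 * math.sqrt(n_e_cm3)

# A density falling off roughly as r^-2 (illustrative, normalized to a
# typical ~7 cm^-3 near 1 AU) produces the characteristic downward drift
# as the CME-driven shock propagates outward.
for r_au in (0.1, 0.5, 1.0):
    n_e = 7.2 / r_au ** 2
    print(r_au, plasma_frequency_khz(n_e))
```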

  10. Depth perception of stereo overlays in image-guided surgery

    Science.gov (United States)

    Johnson, Laura; Edwards, Philip; Griffin, Lewis; Hawkes, David

    2004-05-01

    See-through augmented reality (AR) systems for image-guided surgery merge volume rendered MRI/CT data directly with the surgeon's view of the patient during surgery. Research has so far focused on optimizing the technique of aligning and registering the computer-generated anatomical images with the patient's anatomy during surgery. We have previously developed a registration and calibration method that allows alignment of the virtual and real anatomy to ~1 mm accuracy. Recently we have been investigating the accuracy with which observers can interpret the combined visual information presented with an optical see-through AR system. We found that depth perception of a virtual image presented in stereo below a physical surface was misperceived compared to viewing the target in the absence of a surface. Observers overestimated depth for a target 0-2 cm below the surface and underestimated the depth for all other presentation depths. The perceptual error could be reduced, but not eliminated, when a virtual rendering of the physical surface was displayed simultaneously with the virtual image. The findings suggest that misperception is due either to accommodation conflict between the physical surface and the projected AR image, or to the lack of correct occlusion between the virtual and real surfaces.

  11. Bubble behavior characteristics based on virtual binocular stereo vision

    Science.gov (United States)

    Xue, Ting; Xu, Ling-shuang; Zhang, Shang-zhen

    2018-01-01

    The three-dimensional (3D) behavior characteristics of bubble rising in gas-liquid two-phase flow are of great importance to study bubbly flow mechanism and guide engineering practice. Based on the dual-perspective imaging of virtual binocular stereo vision, the 3D behavior characteristics of bubbles in gas-liquid two-phase flow are studied in detail, which effectively increases the projection information of bubbles to acquire more accurate behavior features. In this paper, the variations of bubble equivalent diameter, volume, velocity and trajectory in the rising process are estimated, and the factors affecting bubble behavior characteristics are analyzed. It is shown that the method is real-time and valid, the equivalent diameter of the rising bubble in the stagnant water is periodically changed, and the crests and troughs in the equivalent diameter curve appear alternately. The bubble behavior characteristics as well as the spiral amplitude are affected by the orifice diameter and the gas volume flow.
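The equivalent diameter tracked in this study is the diameter of the sphere whose volume matches the reconstructed bubble volume; the velocity follows from successive 3D centroid positions. A minimal sketch of both quantities (function names are illustrative, not the authors' API):

```python
import math

def equivalent_diameter(volume):
    """Diameter of the sphere with the same volume as the measured bubble:
    d_eq = (6 V / pi) ** (1/3)."""
    return (6.0 * volume / math.pi) ** (1.0 / 3.0)

def rise_velocity(p0, p1, dt):
    """Mean 3D velocity between two reconstructed centroid positions
    separated by the frame interval dt."""
    return [(b - a) / dt for a, b in zip(p0, p1)]
```

The periodic variation of the equivalent-diameter curve reported in the abstract would show up as alternating crests and troughs in `equivalent_diameter` evaluated frame by frame.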

  12. A stereo vision-based obstacle detection system in vehicles

    Science.gov (United States)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane. Then, the position parameters of the obstacles and leading vehicles can be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
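The epipolar constraint mentioned above states that for a correct match, the homogeneous image points satisfy x₂ᵀ F x₁ = 0, where F is the fundamental matrix of the stereo rig. A minimal sketch of using it to screen candidate matches (not the paper's full matching pipeline; `F_rect` is the idealized F of a rectified rig, where matches must lie on the same image row):

```python
import numpy as np

def epipolar_filter(pts1, pts2, F, tol=1.0):
    """Keep candidate matches whose homogeneous points satisfy the
    epipolar constraint |x2^T F x1| < tol."""
    keep = []
    for x1, x2 in zip(pts1, pts2):
        h1 = np.append(np.asarray(x1, float), 1.0)  # to homogeneous coords
        h2 = np.append(np.asarray(x2, float), 1.0)
        if abs(h2 @ F @ h1) < tol:
            keep.append((tuple(x1), tuple(x2)))
    return keep

# Fundamental matrix of an ideally rectified stereo pair: the residual
# h2^T F h1 reduces to the row difference v1 - v2.
F_rect = np.array([[0.0, 0.0,  0.0],
                   [0.0, 0.0, -1.0],
                   [0.0, 1.0,  0.0]])
```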

  13. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    Energy Technology Data Exchange (ETDEWEB)

    M. D. McKay; M. O. Anderson; R. A. Kinoshita; W. D. Willis

    1999-02-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off-the-shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  14. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Kinoshita, Robert Arthur; Anderson, Matthew Oley; Mckay, Mark D; Willis, Walter David

    1999-04-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off-the-shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  15. An Omnidirectional Stereo Vision-Based Smart Wheelchair

    Directory of Open Access Journals (Sweden)

    Yutaka Satoh

    2007-06-01

    Full Text Available To support safe self-movement of the disabled and the aged, we developed an electric wheelchair that detects both potential hazards in the moving environment and the postures and gestures of its user, by equipping the wheelchair with the stereo omnidirectional system (SOS), which is capable of acquiring omnidirectional color image sequences and range data simultaneously in real time. The first half of this paper introduces the SOS and the basic technology behind it. To use the multi-camera SOS on an electric wheelchair, we developed a high-speed, high-quality image synthesizing method; a method of recovering SOS attitude changes using attitude sensors is also introduced. This method allows the SOS to be used without being affected by its mounting attitude. The second half of this paper introduces the prototype electric wheelchair actually manufactured and experiments conducted using the prototype. The usability of the electric wheelchair is also discussed.

  16. Accurate and occlusion-robust multi-view stereo

    Science.gov (United States)

    Zhu, Zhaokun; Stamatopoulos, Christos; Fraser, Clive S.

    2015-11-01

    This paper proposes an accurate multi-view stereo method for image-based 3D reconstruction that features robustness in the presence of occlusions. The new method offers improvements in dealing with two fundamental image matching problems. The first concerns the selection of the support window model, while the second centers upon accurate visibility estimation for each pixel. The support window model is based on an approximate 3D support plane described by a depth and two per-pixel depth offsets. For the visibility estimation, the multi-view constraint is initially relaxed by generating separate support plane maps for each support image using a modified PatchMatch algorithm. Then the most likely visible support image, which represents the minimum visibility of each pixel, is extracted via a discrete Markov Random Field model and it is further augmented by parameter clustering. Once the visibility is estimated, multi-view optimization taking into account all redundant observations is conducted to achieve optimal accuracy in the 3D surface generation for both depth and surface normal estimates. Finally, multi-view consistency is utilized to eliminate any remaining observational outliers. The proposed method is experimentally evaluated using well-known Middlebury datasets, and results obtained demonstrate that it is amongst the most accurate of the methods thus far reported via the Middlebury MVS website. Moreover, the new method exhibits a high completeness rate.
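Support-window matching of the kind described above is typically scored with a photo-consistency measure between the warped windows; zero-mean normalized cross-correlation (NCC) is a common choice. A minimal sketch of such a score (an illustration of the general idea, not the authors' exact matching cost):

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-9):
    """Zero-mean normalized cross-correlation between two support windows.
    Values near 1.0 indicate photo-consistency; values near 0 or below
    suggest occlusion or a wrong depth/normal hypothesis."""
    a = np.asarray(patch_a, float).ravel()
    b = np.asarray(patch_b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```

The zero-mean normalization makes the score invariant to affine brightness changes between views, which is one reason NCC is favored over plain SSD in multi-view stereo.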

  17. Stereo-Based Visual Odometry for Autonomous Robot Navigation

    Directory of Open Access Journals (Sweden)

    Ioannis Kostavelis

    2016-02-01

    Full Text Available Mobile robots should possess accurate self-localization capabilities in order to be successfully deployed in their environment. A solution to this challenge may be derived from visual odometry (VO, which is responsible for estimating the robot's pose by analysing a sequence of images. The present paper proposes an accurate, computationally-efficient VO algorithm relying solely on stereo vision images as inputs. The contribution of this work is twofold. Firstly, it suggests a non-iterative outlier detection technique capable of efficiently discarding the outliers of matched features. Secondly, it introduces a hierarchical motion estimation approach that produces refinements to the global position and orientation for each successive step. Moreover, for each subordinate module of the proposed VO algorithm, custom non-iterative solutions have been adopted. The accuracy of the proposed system has been evaluated and compared with competent VO methods along DGPS-assessed benchmark routes. Experimental results of relevance to rough terrain routes, including both simulated and real outdoors data, exhibit remarkable accuracy, with positioning errors lower than 2%.
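The paper's specific non-iterative outlier technique is not reproduced in the abstract; one common non-iterative screen of the same flavor exploits the fact that a rigid motion preserves pairwise distances between matched 3D features. A hedged sketch of that distance-consistency check (illustrative thresholds, not the authors' algorithm):

```python
import numpy as np

def rigidity_inliers(prev_pts, curr_pts, tol=0.05, min_votes=None):
    """Non-iterative outlier screen for matched 3D features: each match
    votes for every other match whose inter-point distance is (nearly)
    unchanged between the two frames; low-vote matches are discarded."""
    P = np.asarray(prev_pts, float)
    Q = np.asarray(curr_pts, float)
    n = len(P)
    votes = np.zeros(n, int)
    for i in range(n):
        for j in range(i + 1, n):
            d_prev = np.linalg.norm(P[i] - P[j])
            d_curr = np.linalg.norm(Q[i] - Q[j])
            if abs(d_prev - d_curr) < tol:
                votes[i] += 1
                votes[j] += 1
    if min_votes is None:
        min_votes = (n - 1) // 2  # heuristic: consistent with most others
    return [k for k in range(n) if votes[k] >= min_votes]
```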

  18. Bifurcations in two-image photometric stereo for orthogonal illuminations

    Science.gov (United States)

    Kozera, R.; Prokopenya, A.; Noakes, L.; Śluzek, A.

    2017-07-01

    This paper discusses the ambiguous shape recovery in two-image photometric stereo for a Lambertian surface. The current uniqueness analysis refers to linearly independent light-source directions p = (0, 0, -1) and q arbitrary. For this case, the necessary and sufficient condition determining ambiguous reconstruction is governed by a second-order linear partial differential equation with constant coefficients. In contrast, a general position of both non-colinear illumination directions p and q leads to a highly non-linear PDE which raises a number of technical difficulties. As recently shown, the latter can also be handled for another family of orthogonal illuminations parallel to the OXZ-plane. For the special case of p = (0, 0, -1), a potential ambiguity stems also from the possible bifurcations of sub-local solutions glued together along a curve defined by an algebraic equation in terms of the data. This paper discusses the occurrence of similar bifurcations for such configurations of orthogonal light-source directions. The discussion to follow is supplemented with examples based on a continuous reflectance map model and generated synthetic images.
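For reference, the image-irradiance equations underlying two-image Lambertian photometric stereo can be written as follows (a standard textbook formulation with unit albedo for the surface z = u(x, y); not reproduced from the paper itself):

```latex
% Normal (up to scale): n \propto (u_x, u_y, -1).
% For p = (0,0,-1) and a general light direction q = (q_1, q_2, q_3):
E_p(x,y) \;=\; \frac{1}{\sqrt{1 + u_x^2 + u_y^2}},
\qquad
E_q(x,y) \;=\; \frac{q_1 u_x + q_2 u_y - q_3}
                    {\sqrt{1 + u_x^2 + u_y^2}\,\sqrt{q_1^2 + q_2^2 + q_3^2}}.
```

Eliminating the common normalizing factor between the two equations is what yields the (linear or non-linear, depending on the configuration of p and q) PDE in u whose solutions characterize the ambiguity discussed in the abstract.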

  19. Multi-camera digital image correlation method with distributed fields of view

    Science.gov (United States)

    Malowany, Krzysztof; Malesa, Marcin; Kowaluk, Tomasz; Kujawinska, Malgorzata

    2017-11-01

    A multi-camera digital image correlation (DIC) method and system for measurements of large engineering objects with distributed, non-overlapping areas of interest are described. The data obtained with individual 3D DIC systems are stitched by an algorithm which utilizes the positions of fiducial markers determined simultaneously by Stereo-DIC units and a laser tracker. The proposed calibration method enables reliable determination of transformations between local (3D DIC) and global coordinate systems. The applicability of the method was proven during in-situ measurements of a hall made of arch-shaped (18 m span) self-supporting metal plates. The proposed method is highly recommended for 3D measurements, from multiple directions, of the shape and displacements of large and complex engineering objects, and it provides data of suitable accuracy for further advanced structural integrity analysis of such objects.
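Determining the transformation between a local 3D DIC frame and the global (laser-tracker) frame from matched fiducial markers is a rigid-registration problem. A minimal sketch of the standard SVD-based (Kabsch/Procrustes) solution, assuming at least three non-collinear markers (this is a common building block, not necessarily the paper's exact stitching algorithm):

```python
import numpy as np

def fit_rigid_transform(local_pts, global_pts):
    """Least-squares rigid transform (R, t) with global ~ R @ local + t,
    estimated from matched fiducial marker positions."""
    P = np.asarray(local_pts, float)
    Q = np.asarray(global_pts, float)
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)              # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = qc - R @ pc
    return R, t
```

Once (R, t) is known for each Stereo-DIC unit, all local point clouds can be mapped into the single global coordinate system before further analysis.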

  20. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both game...... on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can...... be efficiently profiled in dissimilar clusters according to camera control as part of their game-play behaviour....

  1. Adapting virtual camera behaviour through player modelling

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Research in virtual camera control has focused primarily on finding methods to allow designers to place cameras effectively and efficiently in dynamic and unpredictable environments, and to generate complex and dynamic plans for cinematography in virtual environments. In this article, we propose...... a novel approach to virtual camera control, which builds upon camera control and player modelling to provide the user with an adaptive point-of-view. To achieve this goal, we propose a methodology to model the player’s preferences on virtual camera movements and we employ the resulting models to tailor...

  2. Initial laboratory evaluation of color video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P L

    1991-01-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than identify an intruder. Monochrome cameras are adequate for that application and were selected over color cameras because of their greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Color information is useful for identification purposes, and color camera technology is rapidly changing. Thus, Sandia National Laboratories established an ongoing program to evaluate color solid-state cameras. Phase one resulted in the publishing of a report titled "Initial Laboratory Evaluation of Color Video Cameras" (SAND--91-2579). It gave a brief discussion of imager chips and color cameras and monitors, described the camera selection, detailed traditional test parameters and procedures, and gave the results of the evaluation of twelve cameras. In phase two, six additional cameras were tested by the traditional methods and all eighteen cameras were tested by newly developed methods. This report details both the traditional and newly developed test parameters and procedures, and gives the results of both evaluations.

  3. Comparison of different "along the track" high resolution satellite stereo-pair for DSM extraction

    Science.gov (United States)

    Nikolakopoulos, Konstantinos G.

    2013-10-01

    The possibility of creating DEMs from stereo pairs is based on the Pythagorean theorem and on the principles of photogrammetry that have been applied to aerial photograph stereo pairs for the last seventy years. The application of these principles to digital satellite stereo data was inherent in the first satellite missions. During the last decades, satellite stereo pairs were acquired across the track on different days (SPOT, ERS etc.). More recently, same-date along-the-track stereo data acquisition seems to prevail (Terra ASTER, SPOT5 HRS, Cartosat, ALOS PRISM), as it reduces the radiometric image variations (refractive effects, sun illumination, temporal changes) and thus increases the correlation success rate in any image matching. Two of the newest satellite sensors with stereo collection capability are Cartosat and ALOS PRISM. Both of them acquire stereo pairs along the track with a 2.5 m spatial resolution covering areas of 30 x 30 km. In this study we compare two different satellite stereo pairs collected along the track for DSM creation. The first one is created from a Cartosat stereo pair and the second one from an ALOS PRISM triplet. The area of study is situated in Chalkidiki Peninsula, Greece. Both DEMs were created using the same ground control points collected with a differential GPS. After a first check for random or systematic errors, a statistical analysis was done. Points of certified elevation have been used to estimate the accuracy of these two DSMs. The elevation difference between the two DEMs was calculated. 2D RMSE, correlation and the percentile value were also computed and the results are presented.
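The accuracy assessment above rests on comparing DSM elevations against surveyed check points. A minimal sketch of the vertical RMSE computation (the statistic named in the abstract; variable names are illustrative):

```python
import math

def dsm_rmse(dsm_elevations, checkpoint_elevations):
    """Vertical RMSE of DSM elevations against certified check-point
    elevations (e.g. from differential GPS), in the same length unit."""
    diffs = [z_dsm - z_ref for z_dsm, z_ref in
             zip(dsm_elevations, checkpoint_elevations)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

The same per-point differences feed the correlation and percentile statistics the study also reports.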

  4. Stereo visualization in the ground segment tasks of the science space missions

    Science.gov (United States)

    Korneva, Natalia; Nazarov, Vladimir; Mogilevsky, Mikhail; Nazirov, Ravil

    The ground segment is one of the key components of any science space mission. Its functionality substantially defines the scientific effectiveness of the experiment as a whole. It should be noted that its outstanding feature (in contrast to the other information systems of scientific space projects) is interaction between the researcher and the project information system in order to interpret data obtained during experiments. Therefore, the ability to visualize the data being processed is an essential prerequisite for ground segment software, and the usage of modern technological solutions and approaches in this area will allow increasing science return in general and will provide a framework for the creation of new experiments. Mostly, 2D and 3D graphics are used for visualization of the data being processed, which reflects the capabilities of traditional visualization tools. Besides that, stereo visualization methods are actively used in solving some tasks. However, their usage is usually limited to tasks such as visualization of virtual and augmented reality, remote sensing data processing and the like. The low prevalence of stereo visualization methods in solving science ground segment tasks is primarily explained by the extremely high cost of the necessary hardware. Recently, however, low-cost hardware solutions for stereo visualization based on the page-flip method of view separation have appeared. Given this, it seems promising to use stereo visualization as an instrument for investigation of a wide range of problems, mainly for stereo visualization of complex physical processes as well as mathematical abstractions and models. The article is concerned with an attempt to use this approach.
It describes the details and problems of using stereo visualization (page-flip method based on NVIDIA 3D Vision Kit, graphic processor GeForce) for display of some datasets of magnetospheric satellite onboard measurements and also in development of the software for manual stereo matching.

  5. Sleep in the human hippocampus: a stereo-EEG study.

    Directory of Open Access Journals (Sweden)

    Fabio Moroni

    Full Text Available BACKGROUND: There is compelling evidence indicating that sleep plays a crucial role in the consolidation of new declarative, hippocampus-dependent memories. Given the increasing interest in the spatiotemporal relationships between cortical and hippocampal activity during sleep, this study aimed to shed more light on the basic features of human sleep in the hippocampus. METHODOLOGY/PRINCIPAL FINDINGS: We recorded intracerebral stereo-EEG directly from the hippocampus and neocortical sites in five epileptic patients undergoing presurgical evaluations. The time course of classical EEG frequency bands during the first three NREM-REM sleep cycles of the night was evaluated. We found that delta power shows, also in the hippocampus, the progressive decrease across sleep cycles, indicating that a form of homeostatic regulation of delta activity is present also in this subcortical structure. Hippocampal sleep was also characterized by: (i) a lower relative power in the slow oscillation range during NREM sleep compared to the scalp EEG; (ii) a flattening of the time course of the very low frequencies (up to 1 Hz) across sleep cycles, with relatively high levels of power even during REM sleep; (iii) a decrease of power in the beta band during REM sleep, at odds with the typical increase of power in the cortical recordings. CONCLUSIONS/SIGNIFICANCE: Our data imply that cortical slow oscillation is attenuated in the hippocampal structures during NREM sleep. The most peculiar feature of hippocampal sleep is the increased synchronization of the EEG rhythms during REM periods. This state of resonance may have a supportive role for the processing/consolidation of memory.

  6. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. The task of tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for human tracking over camera networks are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress made toward human tracking techniques over camera networks.

  7. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems in industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras or fluctuations in lighting becomes a problem, as each camera usually has a different response. An empirical approach is proposed where color tile data is acquired using the camera of interest, and a mapping is developed to some predetermined reference image using neural networks. A similar analytical approach, based on a rough analysis of the imaging systems, is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera is mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data is adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted. The image processing algorithms can remain the same, as the input data has been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.

  8. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique: the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to explore. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offer a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem of object replication reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical derivation of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.

  9. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  10. Optimising camera traps for monitoring small mammals.

    Science.gov (United States)

    Glen, Alistair S; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  11. Stereo Scene Flow for 3D Motion Analysis

    CERN Document Server

    Wedel, Andreas

    2011-01-01

    This book presents methods for estimating optical flow and scene flow motion with high accuracy, focusing on the practical application of these methods in camera-based driver assistance systems. Clearly and logically structured, the book builds from basic themes to more advanced concepts, culminating in the development of a novel, accurate and robust optic flow method. Features: reviews the major advances in motion estimation and motion analysis, and the latest progress of dense optical flow algorithms; investigates the use of residual images for optical flow; examines methods for deriving mot

  12. STEREO/SEPT observations of upstream particle events: almost monoenergetic ion beams

    Directory of Open Access Journals (Sweden)

    A. Klassen

    2009-05-01

    Full Text Available We present observations of Almost Monoenergetic Ion (AMI) events in the energy range of 100–1200 keV detected with the Solar Electron and Proton Telescope (SEPT) onboard both STEREO spacecraft. The energy spectra of AMI events contain 1, 2, or 3 narrow peaks with a relative width at half maximum of 0.1–0.7, and their energy maxima vary for different events from 120 to 1200 keV. These events were detected close to the bow-shock (STEREO-A&B) and to the magnetopause (STEREO-B), as well as unexpectedly far upstream of the bow-shock and far away from the magnetotail, at distances up to 1100 RE (STEREO-B) and 1900 RE (STEREO-A). We discuss the origin of AMI events, their connection to the Earth's bow-shock and magnetosphere, and the conditions of the interplanetary medium and magnetosphere under which these AMI bursts occur. Evidence that the detected spectral peaks were caused by quasi-monoenergetic beams of protons, helium, and heavier ions is given. Furthermore, we present the spatial distribution of all AMI events from December 2006 until August 2007.

  13. A method of camera calibration in the measurement process with reference mark for approaching observation space target

    Science.gov (United States)

    Zhang, Hua; Zeng, Luan

    2017-11-01

    Binocular stereoscopic vision can be used for space-based close-range observation of space targets. To solve the problem that a traditional binocular vision system cannot work normally after being disturbed, an online calibration method for a binocular stereo measuring camera with a self-reference is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object into the edge of the main optical path, imaged together with the target on the same focal plane, which is equivalent to placing a standard reference inside the binocular imaging optical system. When the position of the system or the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the imaging plane, while the position of the standard reference object itself does not change. The camera's external parameters can then be re-calibrated from the visual relationship to the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in elevation. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.

  14. Tridimensional Reconstruction Applied to Cultural Heritage with the Use of Camera-Equipped UAV and Terrestrial Laser Scanner

    Directory of Open Access Journals (Sweden)

    Zhihua Xu

    2014-10-01

    Full Text Available No single sensor can acquire complete information on a cultural object, even by applying one or several multi-surveys. For instance, a terrestrial laser scanner (TLS) usually obtains information on building facades, whereas aerial photogrammetry is capable of providing the perspective for building roofs. In this study, a camera-equipped unmanned aerial vehicle (UAV) system and a TLS were used in an integrated design to capture 3D point clouds and thus facilitate the acquisition of complete information on an object of interest for cultural heritage. A camera network is proposed to modify the image-based 3D reconstruction or structure from motion (SfM) method by taking full advantage of the flight control data acquired by the UAV platform. The camera network improves SfM performance in terms of image matching efficiency and the reduction of mismatches. This camera-network-modified SfM is employed to process the overlapping UAV image sets and to recover the scene geometry. The SfM output covers most information on building roofs, but has sparse resolution. A dense multi-view 3D reconstruction algorithm is then applied to improve in-depth detail. The two groups of point clouds, from image reconstruction and TLS scanning, are registered from coarse to fine with the use of an iterative method. This methodology has been tested on a historical monument in Fujian Province, China. Results show a final point cloud with complete coverage and in-depth details. Moreover, findings demonstrate that these two platforms, which integrate the scanning principle and image reconstruction methods, can supplement each other in terms of coverage, sensing resolution, and model accuracy to create high-quality 3D recordings and presentations.

  15. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to ov...

  16. Approximations to camera sensor noise

    Science.gov (United States)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. The question remains, however, of how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these types of models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to real camera noise than SD-AWGN. We suggest a further modification to the Poisson model that may improve the noise model.
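
    The difference between the two noise models discussed above can be sketched numerically. This is an illustrative simulation under an assumed photon level, not the paper's characterization: both models match in mean and variance, but the Poisson model retains the positive skewness of photon counting, which the Gaussian surrogate cannot reproduce.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mean_photons = 50.0            # assumed mean photon count per pixel
    n = 200_000                    # number of simulated pixels

    # Shot noise: Poisson, whose variance equals its mean by construction.
    shot = rng.poisson(mean_photons, n).astype(float)

    # SD-AWGN surrogate: Gaussian whose variance tracks the signal level.
    surrogate = mean_photons + rng.normal(0.0, np.sqrt(mean_photons), n)

    def skewness(x):
        """Sample skewness: third standardized central moment."""
        return np.mean((x - x.mean()) ** 3) / x.std() ** 3

    for name, x in [("Poisson", shot), ("SD-AWGN", surrogate)]:
        print(f"{name}: mean={x.mean():.2f}  var={x.var():.2f}  skew={skewness(x):.3f}")
    ```

    At low photon counts the Poisson skew (about 1/sqrt(λ)) grows, which is one regime where the two models visibly diverge.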

  17. The Application of Stereo Anti-eccentrically Teaching Methods in Traditional Chinese Qigong Course

    Directory of Open Access Journals (Sweden)

    Ai-Dong JI

    2014-04-01

    Full Text Available Objective: To explore the use of the stereo anti-eccentrically Chinese qigong teaching mode. Method: The concrete steps of the qigong teaching method are introduced, covering the serious consequences of qigong deviation, the present teaching situation, and the goal and significance of the stereo anti-eccentrically qigong teaching method. Result: In 2013, among 6 classes and 498 students, only 23 (4.61%) students experienced discomfort. Within a week, discomfort improved in 16 students (69.56%) and disappeared in 5 (21.73%); two weeks later, discomfort had disappeared in 100% of the students, with no emergence of severe deviation, so the goal of safe teaching was reached. Conclusion: The stereo anti-eccentrically qigong teaching method can detect the first hint of deviation and correct it in time, improving the safety of teaching, so it is worth popularizing.

  18. An assembly system based on industrial robot with binocular stereo vision

    Science.gov (United States)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to quickly locate the image regions which contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.

  19. A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space

    Directory of Open Access Journals (Sweden)

    Jinjun Li

    2011-01-01

    Full Text Available A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate the disparity map for stereo images. Local multi-model monogenic image features include the local orientation and instantaneous phase of the gray monogenic signal, the local color phase of the color monogenic signal, and local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which is the extension of the analytic signal to gray level images using the Dirac operator and Laplace equation, consists of the local amplitude, local orientation, and instantaneous phase of the 2D image signal. The color monogenic signal is the extension of the monogenic signal to color images based on Clifford algebras. The local color phase can be estimated by computing the geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experimental results on synthetic and natural stereo images show the performance of the proposed approach.
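
    A minimal sketch of the gray monogenic signal described above, using an FFT-based Riesz transform. This is the standard construction (not the authors' implementation); the test image, a sinusoidal grating, is assumed for illustration.

    ```python
    import numpy as np

    def monogenic(img):
        """Gray monogenic signal of a 2D image: local amplitude,
        local orientation, and instantaneous phase via the Riesz transform."""
        rows, cols = img.shape
        u = np.fft.fftfreq(cols)[None, :]          # horizontal frequencies
        v = np.fft.fftfreq(rows)[:, None]          # vertical frequencies
        mag = np.sqrt(u ** 2 + v ** 2)
        mag[0, 0] = 1.0                            # avoid division by zero at DC
        F = np.fft.fft2(img)
        r1 = np.real(np.fft.ifft2(F * (-1j * u / mag)))   # Riesz x-component
        r2 = np.real(np.fft.ifft2(F * (-1j * v / mag)))   # Riesz y-component
        amplitude = np.sqrt(img ** 2 + r1 ** 2 + r2 ** 2)
        orientation = np.arctan2(r2, r1)                       # local orientation
        phase = np.arctan2(np.sqrt(r1 ** 2 + r2 ** 2), img)   # instantaneous phase
        return amplitude, orientation, phase

    # Horizontal sinusoidal grating: the Riesz pair (cos, sin) makes the
    # local amplitude essentially constant (= 1) over the whole image.
    y, x = np.mgrid[0:64, 0:64]
    grating = np.cos(2 * np.pi * 8 * x / 64)
    amp, ori, phi = monogenic(grating)
    print(amp.min(), amp.max())
    ```

    In the paper's multiscale color framework, this computation would be preceded by band-pass filtering and extended to RGB via Clifford algebra; the snippet shows only the core gray-level signal.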

  20. Research situation and development trend of the binocular stereo vision system

    Science.gov (United States)

    Wang, Tonghao; Liu, Bingqi; Wang, Ying; Chen, Yichao

    2017-05-01

    Since the start of the 21st century, with the development of computer and signal processing technology, a new comprehensive subject called computer vision has emerged. Computer vision covers a wide range of knowledge, including physics, mathematics, biology, computer technology and other subjects. It has become more and more powerful: it can not only realize the "seeing" function of the human eye, but also accomplish what the human eye cannot. In recent years, binocular stereo vision, a main branch of computer vision, has become a focus of research in the field. In this paper, the binocular stereo vision system and its present state of development and application at home and abroad are summarized. Current problems of binocular stereo vision systems are discussed and the authors' own opinions are given. Furthermore, a prospective view of the future application and development of this technology is presented.
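
    The core geometric relation underlying binocular stereo depth recovery can be sketched with the standard rectified-pair formula Z = f·B/d; the rig parameters below (focal length in pixels, baseline) are assumptions for illustration.

    ```python
    def depth_from_disparity(f_px, baseline_m, disparity_px):
        """Pinhole stereo for a rectified camera pair: Z = f * B / d,
        where f is the focal length in pixels, B the baseline in meters,
        and d the horizontal disparity in pixels."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return f_px * baseline_m / disparity_px

    # Assumed rig: 700 px focal length, 12 cm baseline, 35 px disparity.
    print(depth_from_disparity(700, 0.12, 35))  # ≈ 2.4 m
    ```

    The inverse relation between disparity and depth is why stereo accuracy degrades quadratically with distance: a one-pixel disparity error costs far more depth accuracy at long range than at short range.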

  1. Approaching solar maximum 24 with STEREO--Multipoint observations of solar energetic particle events

    Energy Technology Data Exchange (ETDEWEB)

    Dresing, N.; Heber, B.; Klassen, A., E-mail: dresing@physik.uni-kiel.de [IEAP, University of Kiel, Kiel (Germany); Cohen, C.M.S.; Leske, R.A.; Mewaldt, R.A. [California Institute of Technology, Pasadena, CA (United States); Gomez-Herrero, R. [Space Research Group, University of Alcalá, Alcalá (Spain); Mason, G.M. [Applied Physics Laboratory, Johns Hopkins University, Laurel, MD (United States); Von Rosenvinge, T.T. [NASA Goddard Space Flight Center, Greenbelt, MD (United States)

    2014-07-01

    Since the beginning of the Solar Terrestrial Relations Observatory (STEREO) mission at the end of 2006, the two spacecraft have now separated by more than 130° from the Earth. A 360-degree view of the Sun has been possible since February 2011, providing multipoint in situ and remote sensing observations of unprecedented quality. Combining STEREO observations with near-Earth measurements allows the study of solar energetic particle (SEP) events over a wide longitudinal range with minimal radial gradient effects. This contribution provides an overview of recent results obtained by the STEREO/IMPACT team in combination with observations by the ACE and SOHO spacecraft. We focus especially on multi-spacecraft investigations of SEP events. The large longitudinal spread of electron and 3He-rich events as well as unusual anisotropies will be presented and discussed. (author)

  2. Streak camera recording of interferometer fringes

    International Nuclear Information System (INIS)

    Parker, N.L.; Chau, H.H.

    1977-01-01

    The use of an electronic high-speed camera in the streaking mode to record interference fringe motion from a velocity interferometer is discussed. Advantages of this method over the photomultiplier tube-oscilloscope approach are delineated. Performance testing and data for the electronic streak camera are discussed. The velocity profile of a mylar flyer accelerated by an electrically exploded bridge, and the jump-off velocity of metal targets struck by these mylar flyers are measured in the camera tests. Advantages of the streak camera include portability, low cost, ease of operation and maintenance, simplified interferometer optics, and rapid data analysis

  3. Decision about buying a gamma camera

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera

  4. Benchmarking of depth of field for large out-of-plane deformations with single camera digital image correlation

    Science.gov (United States)

    Van Mieghem, Bart; Ivens, Jan; Van Bael, Albert

    2017-04-01

    A problem that arises when performing stereo digital image correlation in applications with large out-of-plane displacements is that the images may become unfocused. This unfocusing can result in correlation instabilities or inaccuracies. When performing DIC measurements and expecting large out-of-plane displacements, researchers either rely on their experience or use the equations from photography to estimate the parameters affecting the depth of field (DOF) of the camera. A limitation of the latter approach is that the definition of sharpness is a human-defined parameter and does not reflect the performance of the digital image correlation system. To obtain a more representative DOF value for DIC applications, a standardised testing method is presented here, making use of real camera and lens combinations as well as actual image correlation results. The method is based on experimental single-camera DIC measurements of a backwards-moving target. Correlation results from focused and unfocused images are compared, and a threshold value defines whether or not the correlation results are acceptable even if the images are (slightly) unfocused. By following the proposed approach, the complete DOF of a specific camera/lens combination as a function of the aperture setting and the distance from the camera to the target can be defined. The comparison between the theoretical and the experimental DOF results shows that the achievable DOF for DIC applications is larger than what theoretical calculations predict. Practically, this means that the cameras can be positioned closer to the target than expected from the theoretical approach, leading to a gain in resolution and measurement accuracy.
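
    As a point of comparison, the photographic depth-of-field equations that the abstract says researchers fall back on can be sketched as follows. The thin-lens formulas are standard; all numeric values (50 mm lens at f/8, 0.02 mm circle of confusion, target at 1 m) are illustrative assumptions, not taken from the paper.

    ```python
    def depth_of_field(f, N, c, s):
        """Thin-lens DOF limits (all lengths in mm): focal length f,
        f-number N, circle-of-confusion diameter c, subject distance s."""
        H = f ** 2 / (N * c) + f                       # hyperfocal distance
        near = s * (H - f) / (H + s - 2 * f)           # near limit of sharpness
        far = s * (H - f) / (H - s) if s < H else float("inf")  # far limit
        return near, far

    # Assumed setup: 50 mm lens at f/8, c = 0.02 mm, target 1 m away.
    near, far = depth_of_field(50.0, 8.0, 0.02, 1000.0)
    print(f"DOF: {far - near:.1f} mm")  # ≈ 122 mm
    ```

    The paper's point is that this sharpness-based DOF is conservative for DIC: correlation can remain acceptable beyond these theoretical limits.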

  5. Uncertainty evaluation for three-dimensional scanning electron microscope reconstructions based on the stereo-pair technique

    DEFF Research Database (Denmark)

    Carli, Lorenzo; Genta, G; Cantatore, Angela

    2011-01-01

    3D-SEM is a method, based on the stereophotogrammetry technique, which obtains three-dimensional topographic reconstructions starting typically from two SEM images, called the stereo-pair. In this work, a theoretical uncertainty evaluation of the stereo-pair technique, according to GUM (Guide to ...

  6. Sensor Fusion - Sonar and Stereo Vision, Using Occupancy Grids and SIFT

    DEFF Research Database (Denmark)

    Plascencia, Alfredo; Bendtsen, Jan Dimon

    2006-01-01

    The main contribution of this paper is to present a sensor fusion approach to scene environment mapping as part of a SDF (Sensor Data Fusion) architecture. This approach involves combined sonar and stereo vision readings. Sonar readings are interpreted using probability density functions to the o...

  7. Microgeometry capture and RGB albedo estimation by photometric stereo without demosaicing

    Science.gov (United States)

    Quéau, Yvain; Pizenberg, Mathieu; Durou, Jean-Denis; Cremers, Daniel

    2017-03-01

    We present a photometric stereo-based system for retrieving the RGB albedo and the fine-scale details of an opaque surface. In order to limit specularities, the system uses a controllable diffuse illumination, which is calibrated using a dedicated procedure. In addition, we handle RAW, non-demosaiced RGB images directly, which both avoids uncontrolled operations on the sensor data and simplifies the estimation of the albedo in each color channel and of the normals. We finally show on real-world examples the potential of photometric stereo for the 3D-reconstruction of very thin structures on a wide variety of surfaces.
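
    The per-pixel estimation step that photometric stereo performs can be sketched under a Lambertian model. The three light directions and the albedo below are assumptions for illustration; the authors' calibrated diffuse-illumination setup and RAW per-channel handling are not modeled here.

    ```python
    import numpy as np

    # Assumed calibrated light directions (one row per image, ~unit vectors).
    L = np.array([[0.0, 0.0, 1.0],
                  [0.7, 0.0, 0.714],
                  [0.0, 0.7, 0.714]])

    def normal_albedo(intensities):
        """Lambertian photometric stereo for one pixel:
        I_k = albedo * dot(L_k, n), solved for n and albedo from 3 images."""
        g = np.linalg.solve(L, np.asarray(intensities, dtype=float))
        albedo = np.linalg.norm(g)
        return g / albedo, albedo

    # Synthesize intensities from a known normal to check the round trip.
    n_true = np.array([0.0, 0.0, 1.0])
    I = 0.8 * L @ n_true          # albedo 0.8; done per color channel in practice
    n_est, rho = normal_albedo(I)
    print(n_est, rho)
    ```

    With RAW Bayer data, this solve would run independently on each color channel's samples, which is the simplification the abstract alludes to.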

  8. Cloud Height Estimation with a Single Digital Camera and Artificial Neural Networks

    Science.gov (United States)

    Carretas, Filipe; Janeiro, Fernando M.

    2014-05-01

    Clouds influence the local weather and the global climate, and are an important parameter in weather prediction models. Clouds are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as in most small aerodromes where it is not economically viable to install instruments for assisted flying. It is therefore important to develop low-cost and robust systems that can be easily deployed in the field, enabling large-scale acquisition of cloud parameters. Recently, the authors developed a low-cost system for the measurement of cloud base height using stereo vision and digital photography. However, due to the stereo nature of the system, some challenges were presented. In particular, the relative camera orientation requires calibration, and the two cameras need to be synchronized so that the photos from both cameras are acquired simultaneously. In this work we present a new system that estimates the cloud height between 1000 and 5000 meters. This prototype is composed of one digital camera controlled by a Raspberry Pi and is installed at Centro de Geofísica de Évora (CGE) in Évora, Portugal. The camera is periodically triggered to acquire images of the overhead sky, and the photos are downloaded to the Raspberry Pi, which forwards them to a central computer that processes the images and estimates the cloud height in real time. Estimating the cloud height from just one image requires a computer model that is able to learn from previous experiences and perform pattern recognition. The model proposed in this work is an Artificial Neural Network (ANN) that was previously trained with cloud features at different heights. The chosen Artificial Neural Network is a three-layer network, with six parameters in the input layer, 12 neurons in the hidden intermediate layer, and an output layer with only one output. The six input parameters are the average intensity values and the intensity standard deviation of each RGB channel. The output
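
    The 6-12-1 network described can be sketched as a plain forward pass. The weights below are random placeholders, since the trained parameters are not given in the abstract, so the output is only illustrative of the architecture, not a real height estimate.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Placeholder weights; the real network was trained on labelled cloud images.
    W1 = rng.normal(0.0, 0.5, (12, 6)); b1 = np.zeros(12)   # 6 inputs -> 12 hidden
    W2 = rng.normal(0.0, 0.5, (1, 12)); b2 = np.zeros(1)    # 12 hidden -> 1 output

    def predict_height(features):
        """Forward pass of the 6-12-1 network: image features -> cloud height."""
        h = np.tanh(W1 @ features + b1)   # hidden layer, 12 neurons
        return (W2 @ h + b2)[0]           # single linear output

    # Six inputs: per-channel mean intensity and standard deviation (R, G, B),
    # as described in the abstract (values here are made up).
    features = np.array([0.55, 0.58, 0.62, 0.10, 0.09, 0.08])
    print(predict_height(features))
    ```

    In the real system, the output would be scaled to the 1000-5000 m range the prototype targets.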

  9. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  10. A Small Disc Area Is a Risk Factor for Visual Field Loss Progression in Primary Open-Angle Glaucoma: The Glaucoma Stereo Analysis Study

    Directory of Open Access Journals (Sweden)

    Yasushi Kitaoka

    2018-01-01

    Full Text Available Purpose. The Glaucoma Stereo Analysis Study, a cross-sectional multicenter collaborative study, used a stereo fundus camera (nonmyd WX) to assess various morphological parameters of the optic nerve head (ONH) in glaucoma patients. We compared the associations of each parameter between the visual field loss progression group and no-progression group. Methods. The study included 187 eyes of 187 patients with primary open-angle glaucoma or normal-tension glaucoma. We divided the mean deviation (MD) slope values of all patients into the progression group (<−0.3 dB/year) and no-progression group (≥−0.3 dB/year). ONH morphological parameters were calculated with prototype analysis software. The correlations between glaucomatous visual field progression and patient characteristics or each ONH parameter were analyzed with Spearman's rank correlation coefficient. Results. The MD slope averages in the progression group and no-progression group were −0.58 ± 0.28 dB/year and 0.05 ± 0.26 dB/year, respectively. Among disc parameters, vertical disc width (diameter), disc area, cup area, and cup volume in the progression group were significantly less than those in the no-progression group. Logistic regression analysis revealed a significant association between visual field progression and disc area (odds ratio 0.49/mm² disc area). Conclusion. A smaller disc area may be associated with more rapid glaucomatous visual field progression.

  11. Ultra fast x-ray streak camera

    International Nuclear Information System (INIS)

    Coleman, L.W.; McConaghy, C.F.

    1975-01-01

    A unique ultrafast x-ray sensitive streak camera, with a time resolution of 50 psec, has been built and operated. A 100 Å thick gold photocathode on a beryllium vacuum window is used in a modified commercial image converter tube. The x-ray streak camera has been used in experiments to observe time-resolved emission from laser-produced plasmas. (author)

  12. An Open Standard for Camera Trap Data

    Directory of Open Access Journals (Sweden)

    Tavis Forrester

    2016-12-01

    Full Text Available Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an open data standard for storing and sharing camera trap data, developed by experts from a variety of organizations. The standard captures the information necessary to share data between projects and offers a foundation for collecting the more detailed data needed for advanced analysis. The data standard captures information about study design, the type of camera used, and the location and species names for all detections in a standardized way. This information is critical for accurately assessing results from individual camera trapping projects and for combining data from multiple studies for meta-analysis. This data standard is an important step in aligning camera trapping surveys with best practices in data-intensive science. Ecology is moving rapidly into the realm of big data, and central data repositories for camera trap data are emerging as a critical tool. This data standard will help researchers standardize data terms, align past data to new repositories, and provide a framework for utilizing data across repositories and research projects to advance animal ecology and conservation.

  13. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  14. CCD Color Camera Characterization for Image Measurements

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2007-01-01

In this article, we analyze a range of different types of cameras for their use in measurements. We verify a general model of a charge-coupled device camera using experiments. This model includes gain and offset, additive and multiplicative noise, and gamma correction. It is shown that for
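The model verified in this record can be sketched for a single pixel: multiplicative noise scales the incoming signal, the gain stage and offset follow, additive noise is mixed in, and a power-law gamma correction is applied last. All parameter values below are illustrative, not taken from the article:

```python
import random

def ccd_response(irradiance, gain=2.0, offset=10.0,
                 mult_noise_sd=0.01, add_noise_sd=1.0, gamma=0.45,
                 rng=random.Random(0)):
    """Simulate one pixel of a CCD with gain, offset, additive and
    multiplicative noise, and gamma correction (illustrative values)."""
    signal = irradiance * (1.0 + rng.gauss(0.0, mult_noise_sd))  # multiplicative noise
    raw = gain * signal + offset + rng.gauss(0.0, add_noise_sd)  # gain, offset, additive noise
    return max(raw, 0.0) ** gamma                                # gamma correction

print(ccd_response(100.0))
```

Inverting the gamma and subtracting the offset before measuring is what such a verified model enables in practice.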

  15. Digital airborne camera introduction and technology

    CERN Document Server

    Sandau, Rainer

    2014-01-01

The last decade has seen great innovations in airborne cameras. This book is the first ever written on the topic and describes all components of a digital airborne camera, ranging from the object to be imaged to the mass memory device.

  16. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  17. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

A color line scan camera family which is available with either 6000, 8000 or 10000 pixels/color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via RS232 serial port for computer-controlled or stand-alone operation is described in this paper. This line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Mpixels/s. Conversion from 12 to 8 bits, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode and a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.

  18. Optimization of Camera Parameters in Volume Intersection

    Science.gov (United States)

    Sakamoto, Sayaka; Shoji, Kenji; Toyama, Fubito; Miyamichi, Juichi

Volume intersection is one of the simplest techniques for reconstructing 3-D shape from 2-D silhouettes. 3-D shapes can be reconstructed from multiple view images by back-projecting them from the corresponding viewpoints and intersecting the resulting solid cones. The camera position and orientation (extrinsic camera parameters) of each viewpoint with respect to the object are needed to accomplish reconstruction. However, even a small error in the camera parameters makes the reconstructed 3-D shape smaller than the one obtained with the exact parameters. The problem of optimizing camera parameters deals with determining exact ones based on multiple silhouette images and approximate initial ones. This paper examines attempts to optimize camera parameters by reconstructing a 3-D shape via volume intersection and then maximizing the volume of that shape. We have tested the proposed method using a VRML model. In the experiments we apply the downhill simplex method for the optimization. The results show that the maximized volume of the reconstructed 3-D shape is a usable criterion for optimizing camera parameters in camera arrangements like the one in this experiment.
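The idea that the carved volume peaks at the correct camera parameters can be illustrated in a toy 2-D setting. The geometry below (a square object carved by three silhouette bands, one from a diagonal view whose offset is the unknown parameter) is an invented example, and a coarse grid search stands in for the paper's downhill simplex method:

```python
import math

def carved_area(t, n=200):
    """Grid estimate of the area surviving the intersection of three
    back-projected silhouette bands: two axis-aligned views and one
    diagonal view whose assumed camera offset is t.  A wrong t
    misaligns the diagonal band, so the carved area shrinks."""
    h = 4.0 / n
    area = 0.0
    for i in range(n):
        for j in range(n):
            x = -2.0 + (i + 0.5) * h
            y = -2.0 + (j + 0.5) * h
            if abs(x) <= 1 and abs(y) <= 1 and abs((x + y) / math.sqrt(2) - t) <= 1:
                area += h * h
    return area

# Coarse search over the unknown offset (stand-in for downhill simplex).
best_t = max((t / 4 for t in range(-2, 3)), key=carved_area)
print(best_t)  # 0.0 -- the true offset maximizes the carved area
```

Maximizing volume works as a criterion precisely because misaligned cones always intersect in a smaller region than correctly aligned ones.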

  19. Laser scanning camera inspects hazardous area

    International Nuclear Information System (INIS)

    Fryatt, A.; Miprode, C.

    1985-01-01

Main operational characteristics of a new laser scanning camera are presented. The camera is intended primarily for low-level, high-resolution viewing inside nuclear reactors. It uses a He-Ne laser beam raster; by detecting the reflected light by means of a photomultiplier, the subject under observation can be reconstructed in an electronic video store and reviewed on a conventional monitor screen.

  20. Rosetta Star Tracker and Navigation Camera

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

Proposal in response to the Invitation to Tender (ITT) issued by Matra Marconi Space (MSS) for the procurement of the ROSETTA Star Tracker and Navigation Camera.

  1. Centering mount for a gamma camera

    International Nuclear Information System (INIS)

    Mirkhodzhaev, A.Kh.; Kuznetsov, N.K.; Ostryj, Yu.E.

    1988-01-01

A device for centering a γ-camera detector for radionuclide diagnosis is described. It permits the use of available medical couches instead of a table with a transparent top. The device can be used for centering the detector (when it is fixed at the lower end of a γ-camera) on the required area of the patient's body.

  2. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.
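The integrity, authenticity, and timestamping guarantees described in this record can be sketched as signed per-frame tokens. A plain in-memory HMAC key stands in for the hardware-protected key that the actual prototype keeps inside the Trusted Platform Module:

```python
import hashlib, hmac, json, time

# Stand-in for a key sealed inside the camera's TPM (assumption: in the
# real prototype this key never leaves the trusted hardware).
CAMERA_KEY = b"hardware-protected-key-stand-in"
boot_counter = 7  # incremented on each platform reboot, so reboots are detectable

def sign_frame(frame_bytes: bytes, timestamp: float) -> dict:
    """Bind a frame digest to a timestamp and the reboot counter."""
    payload = {
        "digest": hashlib.sha256(frame_bytes).hexdigest(),
        "timestamp": timestamp,
        "boot_counter": boot_counter,
    }
    msg = json.dumps(payload, sort_keys=True).encode()
    payload["mac"] = hmac.new(CAMERA_KEY, msg, hashlib.sha256).hexdigest()
    return payload

def verify_frame(frame_bytes: bytes, payload: dict) -> bool:
    """Check the MAC, then check the frame still matches its digest."""
    claim = {k: payload[k] for k in ("digest", "timestamp", "boot_counter")}
    msg = json.dumps(claim, sort_keys=True).encode()
    ok_mac = hmac.compare_digest(
        payload["mac"], hmac.new(CAMERA_KEY, msg, hashlib.sha256).hexdigest())
    return ok_mac and claim["digest"] == hashlib.sha256(frame_bytes).hexdigest()

frame = b"\x00\x01fake-jpeg-bytes"
token = sign_frame(frame, time.time())
print(verify_frame(frame, token), verify_frame(frame + b"x", token))  # True False
```

A verifier holding the shared key can thus detect tampered frames, replayed timestamps, and unreported reboots, which is the essence of the reporting guarantees the paper provides in hardware.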

  3. TOPOGRAPHIC LOCAL ROUGHNESS EXTRACTION AND CALIBRATION OVER MARTIAN SURFACE BY VERY HIGH RESOLUTION STEREO ANALYSIS AND MULTI SENSOR DATA FUSION

    Directory of Open Access Journals (Sweden)

    J. R. Kim

    2012-08-01

Full Text Available Planetary topography has been a main focus of in-orbit remote sensing. In spite of recent developments in active and passive sensing technologies for reconstructing three-dimensional planetary topography, the resolution limit of range measurement is theoretically and practically obvious. Therefore, the extraction of the inner topographic height variation within a measurement spot is a very challenging and beneficial topic for many application fields, such as the identification of landforms, aeolian process analysis, and the risk assessment of planetary landers. In this study we tried to extract the topographic height variation over the Martian surface, the so-called local roughness, with different approaches. One method employs the laser-beam broadening effect, and the other multi-angle optical imaging. In both cases, precise pre-processing employing high-accuracy DTMs (Digital Terrain Models) was introduced to minimize possible errors. Since a processing routine has been developed to extract very high resolution DTMs, with up to 0.5–4 m grid spacing from HiRISE (High Resolution Imaging Science Experiment) and 10–20 m from CTX (Context Camera) stereo pairs, it is now possible to calibrate the local roughness against the height variation calculated from very high resolution topographic products. Three test areas were chosen and processed to extract local roughness with the co-registered multi-sensor data sets. Even though the extracted local roughness products still show a strong correlation with topographic slopes, we demonstrated the potential of the height-variation extraction and calibration methods.

  4. Lessons Learned from Crime Caught on Camera

    Science.gov (United States)

    Bernasco, Wim

    2018-01-01

    Objectives: The widespread use of camera surveillance in public places offers criminologists the opportunity to systematically and unobtrusively observe crime, their main subject matter. The purpose of this essay is to inform the reader of current developments in research on crimes caught on camera. Methods: We address the importance of direct observation of behavior and review criminological studies that used observational methods, with and without cameras, including the ones published in this issue. We also discuss the uses of camera recordings in other social sciences and in biology. Results: We formulate six key insights that emerge from the literature and make recommendations for future research. Conclusions: Camera recordings of real-life crime are likely to become part of the criminological tool kit that will help us better understand the situational and interactional elements of crime. Like any source, it has limitations that are best addressed by triangulation with other sources. PMID:29472728

  5. Lessons Learned from Crime Caught on Camera

    DEFF Research Database (Denmark)

    Lindegaard, Marie Rosenkrantz; Bernasco, Wim

    2018-01-01

    Objectives: The widespread use of camera surveillance in public places offers criminologists the opportunity to systematically and unobtrusively observe crime, their main subject matter. The purpose of this essay is to inform the reader of current developments in research on crimes caught on camera....... Methods: We address the importance of direct observation of behavior and review criminological studies that used observational methods, with and without cameras, including the ones published in this issue. We also discuss the uses of camera recordings in other social sciences and in biology. Results: We...... formulate six key insights that emerge from the literature and make recommendations for future research. Conclusions: Camera recordings of real-life crime are likely to become part of the criminological tool kit that will help us better understand the situational and interactional elements of crime. Like...

  6. Architecture of PAU survey camera readout electronics

    Science.gov (United States)

    Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo

    2012-07-01

PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2K×4K HPK CCDs: sixteen for science and two for guiding. The camera will be installed at the prime focus of the WHT (William Herschel Telescope). In this contribution, the architecture of the readout electronics system is presented. Back-End and Front-End electronics are described. The Back-End consists of clock, bias, and video processing boards mounted on Monsoon crates. The Front-End is based on patch-panel boards, which are plugged in outside the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus kapton cables complete the path to each CCD. The overall signal distribution and grounding scheme is shown in this paper.

  7. Superconducting millimetre-wave cameras

    Science.gov (United States)

    Monfardini, Alessandro

    2017-05-01

I present a review of the developments in kinetic inductance detectors (KID) for mm-wave and THz imaging-polarimetry in the framework of the Grenoble collaboration. The main application that we have targeted so far is large field-of-view astronomy. I focus in particular on our own experiment: NIKA2 (Néel IRAM KID Arrays). NIKA2 is today the largest millimetre camera available to the astronomical community for general-purpose observations. It consists of a dual-band, dual-polarisation, multi-thousand-pixel system installed at the IRAM 30-m telescope at Pico Veleta (Spain). I start with a general introduction covering the underlying physics and the KID working principle. Then I describe briefly the instrument and the detectors, to conclude with examples of pictures taken on the sky by NIKA2 and its predecessor, NIKA. Thanks to these results, together with the relative simplicity and low cost of KID fabrication, industrial applications requiring passive millimetre-THz imaging have now become possible.

  8. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ...

  9. Sound exposure by personal stereo, field study of young people in Denmark

    DEFF Research Database (Denmark)

    Ordoñez, Rodrigo Pizarro; Reuter, Karen; Hammershøi, Dorte

    2006-01-01

    A number of large scale studies suggest that the exposure level used with personal stereo systems should raise concern. It has been demonstrated that 1) high levels can be produced, 2) high levels are used, especially in situations with high background noise, 3) exposure levels are comparables...

  10. Robotic Arm Control Algorithm Based on Stereo Vision Using RoboRealm Vision

    Directory of Open Access Journals (Sweden)

    SZABO, R.

    2015-05-01

    Full Text Available The goal of this paper is to present a stereo computer vision algorithm intended to control a robotic arm. Specific points on the robot joints are marked and recognized in the software. Using a dedicated set of mathematic equations, the movement of the robot is continuously computed and monitored with webcams. Positioning error is finally analyzed.

  11. First bulk and surface results for the ATLAS ITk stereo annulus sensors

    CERN Document Server

    Abidi, Syed Haider; The ATLAS collaboration; Bohm, Jan; Botte, James Michael; Ciungu, Bianca; Dette, Karola; Dolezal, Zdenek; Escobar, Carlos; Fadeyev, Vitaliy; Fernandez-Tejero, Xavi; Garcia-Argos, Carlos; Gillberg, Dag; Hara, Kazuhiko; Hunter, Robert Francis Holub

    2018-01-01

A novel microstrip sensor geometry, the “stereo annulus”, has been developed for use in the end-cap of the ATLAS experiment’s strip tracker upgrade at the High-Luminosity Large Hadron Collider (HL-LHC). The radiation-hard, single-sided, ac-coupled, n+-in-p microstrip sensors are designed by the ITk Strip Sensor Collaboration and produced by Hamamatsu Photonics. The stereo annulus design has the potential to revolutionize the layout of end-cap microstrip trackers, promising better tracking performance and more complete coverage than contemporary configurations. These advantages are achieved by the union of equal-length, radially oriented strips with a small stereo angle implemented directly into the sensor surface. The first-ever results for the stereo annulus geometry have been collected across several sites worldwide and are presented here. A number of full-size, unirradiated sensors were evaluated for their mechanical, bulk, and surface properties. The new device, the ATLAS12EC, is compared ag...

  12. The Impact of Stereo Display on Student Understanding of Phases of the Moon

    Science.gov (United States)

    Cid, Ximena C.; Lopez, Ramon E.

    2010-01-01

    Understanding lunar phases requires three-dimensional information about the relative positions of the Moon, Earth, and Sun, thus using a stereo display in instruction might improve student comprehension of lunar phases or other topics in basic astronomy. We conducted a laboratory (15 sections) on phases of the Moon as part of the introductory…

  13. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC

    Directory of Open Access Journals (Sweden)

    Zhangwei Chen

    2013-03-01

Full Text Available This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM, and Flash, through IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users’ configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes; the maximum image size is 512 K pixels. This machine is designed for real-time stereo vision applications and offers good performance and high efficiency. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels.

  14. SAD-based stereo vision machine on a System-on-Programmable-Chip (SoPC).

    Science.gov (United States)

    Zhang, Xiang; Chen, Zhangwei

    2013-03-04

This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM, and Flash, through IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes; the maximum image size is 512 K pixels. This machine is designed for real-time stereo vision applications and offers good performance and high efficiency. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels.
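The SAD block-matching step at the heart of both records above can be sketched in software (the papers implement it in FPGA logic); the window size, disparity range, and synthetic image pair below are illustrative:

```python
import random

def sad_disparity(left, right, window=2, max_disp=8):
    """Dense disparity by SAD block matching: for each left-image pixel,
    choose the disparity d minimizing the sum of absolute differences
    over a (2*window+1)^2 patch against the right image."""
    h, w = len(left), len(left[0])
    disp = [[0] * w for _ in range(h)]
    for y in range(window, h - window):
        for x in range(window + max_disp, w - window):
            best, best_d = float("inf"), 0
            for d in range(max_disp + 1):
                sad = sum(abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
                          for dy in range(-window, window + 1)
                          for dx in range(-window, window + 1))
                if sad < best:
                    best, best_d = sad, d
            disp[y][x] = best_d
    return disp

# Synthetic rectified pair: the right image is the left shifted by 3
# pixels, so the recovered disparity should be 3 wherever defined.
rng = random.Random(1)
left = [[rng.randrange(256) for _ in range(24)] for _ in range(10)]
right = [row[3:] + row[:3] for row in left]
d = sad_disparity(left, right)
print(d[5][10])  # 3
```

The FPGA version reaches its 23 fps throughput by evaluating all candidate disparities for a pixel in parallel rather than in this sequential inner loop.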

  15. Test beam results of a stereo preshower integrated in the liquid argon accordion calorimeter

    CERN Document Server

    Davis, R; Greenious, G; Kitching, P; Olsen, B; Pinfold, James L; Rodning, N L; Boos, E; Zhautykov, B O; Aubert, Bernard; Bazan, A; Beaugiraud, B; Boniface, J; Colas, Jacques; Eynard, G; Jézéquel, S; Le Flour, T; Linossier, O; Nicoleau, S; Sauvage, G; Thion, J; Van den Plas, D; Wingerter-Seez, I; Zitoun, R; Zolnierowski, Y; Chmeissani, M; Fernández, E; Garrido, L; Martínez, M; Padilla, C; Citterio, M; Gordon, H A; Lissauer, D; Ma, H; Makowiecki, D S; Radeka, V; Rahm, David Charles; Rescia, S; Stephani, D; Takai, H; Baisin, L; Berset, J C; Chevalley, J L; Gianotti, F; Gildemeister, O; Marin, C P; Nessi, Marzio; Poggioli, Luc; Richter, W; Vuillemin, V; Baze, J M; Delagnes, E; Gosset, L G; Lavocat, P; Lottin, J P; Mansoulié, B; Meyer, J P; Renardy, J F; Schwindling, J; Simion, S; Taguet, J P; Teiger, J; Walter, C; Collot, J; de Saintignon, P; Hostachy, J Y; Mahout, G; Barreiro, F; Del Peso, J; García, J; Hervás, L; Labarga, L; Romero, P; Scheel, C V; Chekhtman, A; Cousinou, M C; Dargent, P; Dinkespiler, B; Etienne, F; Fassnacht, P; Fouchez, D; Martin, L; Miotto, A; Monnier, E; Nagy, E; Olivetto, C; Tisserant, S; Battistoni, G; Camin, D V; Cavalli, D; Costa, G; Cozzi, L; Fedyakin, N N; Ferrari, A; Mandelli, L; Mazzanti, M; Perini, L; Resconi, S; Sala, P R; Beaudoin, G; Depommier, P; León-Florián, E; Leroy, C; Roy, P; Augé, E; Breton, D; Chase, Robert L; Chollet, J C; de La Taille, C; Fayard, Louis; Fournier, D; González, J; Hrisoho, A T; Jacquier, Y; Merkel, B; Nikolic, I A; Noppe, J M; Parrour, G; Pétroff, P; Puzo, P; Richer, J P; Schaffer, A C; Seguin-Moreau, N; Serin, L; Tisserand, V; Veillet, J J; Vichou, I; Canton, B; David, J; Genat, J F; Imbault, D; Le Dortz, O; Savoy-Navarro, Aurore; Schwemling, P; Eek, L O; Lund-Jensen, B; Söderqvist, J; Astbury, Alan; Keeler, Richard K; Lefebvre, M; Robertson, S; White, J

    1998-01-01

This paper describes the construction of an integrated preshower within the RD3 liquid argon accordion calorimeter. It has a stereo view which enables the measurement of two transverse coordinates. The prototype was tested at CERN with electrons, photons and muons to validate its capability to work at the LHC (energy resolution, impact-point resolution, angular resolution, $\pi^0/\gamma$ rejection).

  16. Sound exposure by personal stereo, field study of young people in Denmark

    DEFF Research Database (Denmark)

    Ordoñez, Rodrigo Pizarro; Reuter, Karen; Hammershøi, Dorte

    2006-01-01

    A number of large scale studies suggest that the exposure level used with personal stereo systems should raise concern. It has been demonstrated that 1) high levels can be produced, 2) high levels are used, especially in situations with high background noise, 3) exposure levels are comparables wi...

  17. First bulk and surface results for the ATLAS ITk Strip stereo annulus sensors

    CERN Document Server

    Hunter, Robert Francis Holub; The ATLAS collaboration; Affolder, Tony; Bohm, Jan; Botte, James Michael; Ciungu, Bianca; Dette, Karola; Dolezal, Zdenek; Escobar, Carlos; Fadeyev, Vitaliy

    2018-01-01

A novel microstrip sensor geometry, the stereo annulus, has been developed for use in the end-cap of the ATLAS experiment's strip tracker upgrade at the HL-LHC. Its first implementation is in the ATLAS12EC sensors, a large-area, radiation-hard, single-sided, ac-coupled,

  18. Siim Nestor soovitab : StereoÖö. Bugz In The Attic albumituur. Vunts / Siim Nestor

    Index Scriptorium Estoniae

    Nestor, Siim, 1974-

    2006-01-01

On the techno music party StereoÖö at club Illusion in Tartu on 16 March and at Von Krahl in Tallinn on 17 March. On the event Mutant Disko at club Privé in Tallinn on 17 March and at club Illusion in Tartu on 18 March. On the disco evening "Vunts" at Kinomaja in Tallinn on 18 March.

  19. S/WAVES: The Radio and Plasma Wave Investigation on the STEREO Mission

    Czech Academy of Sciences Publication Activity Database

    Bougeret, J. L.; Goetz, K.; Kaiser, M. L.; Bale, S. D.; Kellogg, P. J.; Maksimovic, M.; Monge, N.; Monson, S. J.; Astier, P. L.; Davy, S.; Dekkali, M.; Hinze, J. J.; Manning, R. E.; Aguilar-Rodriguez, E.; Bonnin, X.; Briand, C.; Cairns, I. H.; Cattell, C. A.; Cecconi, B.; Eastwood, J.; Ergun, R. E.; Fainberg, J.; Hoang, S.; Huttunen, K. E. J.; Krucker, S.; Lecacheux, A.; MacDowall, R. J.; Macher, W.; Mangeney, A.; Meetre, C. A.; Moussas, X.; Nguyen, Q. N.; Oswald, T. H.; Pulupa, M.; Reiner, M. J.; Robinson, P. A.; Rucker, H.; Salem, C.; Santolík, Ondřej; Silvis, J. M.; Ullrich, R.; Zarka, P.; Zouganelis, I.

    2008-01-01

    Roč. 136, 1-4 (2008), s. 487-528 ISSN 0038-6308 Grant - others:NASA(US) NAS5-03076 Institutional research plan: CEZ:AV0Z30420517 Keywords : S/WAVES * STEREO * plasma waves * radio waves Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 2.372, year: 2008

  20. Stereo Vision and 3D Reconstruction on a Distributed Memory System

    NARCIS (Netherlands)

    Kuijpers, N.H.L.; Paar, G.; Lukkien, J.J.

    1996-01-01

    An important research topic in image processing is stereo vision. The objective is to compute a 3-dimensional representation of some scenery from two 2-dimensional digital images. Constructing a 3-dimensional representation involves finding pairs of pixels from the two images which correspond to the
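Once a pair of corresponding pixels has been found in a rectified stereo pair, the third dimension follows from similar triangles: depth Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. The numbers below are illustrative:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a matched pixel pair in a rectified stereo rig:
    Z = f * B / d (similar triangles)."""
    return focal_px * baseline_m / disparity_px

# An assumed 700-pixel focal length, 12 cm baseline, 20-pixel disparity:
print(depth_from_disparity(20, 700.0, 0.12))  # 4.2 (metres)
```

This inverse relationship is why nearby objects (large disparity) are resolved much more precisely than distant ones.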