WorldWideScience

Sample records for binocular stereo vision

  1. Modeling the convergence accommodation of stereo vision for binocular endoscopy.

    Science.gov (United States)

    Gao, Yuanqian; Li, Jinhua; Li, Jianmin; Wang, Shuxin

    2018-02-01

    The stereo laparoscope is an important tool for achieving depth perception in robot-assisted minimally invasive surgery (MIS). A dynamic convergence accommodation algorithm is proposed to improve the viewing experience and achieve accurate depth perception. Based on the principle of the human vision system, a positional kinematic model of the binocular view system is established. The imaging plane pair is rectified to ensure that the two rectified virtual optical axes intersect at the fixation target to provide immersive depth perception. Stereo disparity was simulated with the roll and pitch movements of the binocular system. The chessboard test and the endoscopic peg transfer task were performed, and the results demonstrated the improved disparity distribution and robustness of the proposed convergence accommodation method with respect to the position of the fixation target. This method offers a new solution for effective depth perception with the stereo laparoscopes used in robot-assisted MIS. Copyright © 2017 John Wiley & Sons, Ltd.
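
    For a symmetric binocular rig with baseline b whose two (virtual) optical axes are toed in so that they intersect at a fixation target straight ahead at distance d, the required convergence half-angle follows from simple triangulation. The Python sketch below shows only that basic relation with placeholder laparoscope numbers; it is not the positional kinematic model developed in the paper.

        import math

        def convergence_half_angle(baseline_m, fixation_dist_m):
            """Toe-in angle (radians) each virtual optical axis must rotate so that
            the two axes intersect at a target straight ahead at the given distance
            (symmetric geometry assumed)."""
            return math.atan2(baseline_m / 2.0, fixation_dist_m)

        # Example: assumed 5 mm stereo-laparoscope baseline, fixation target 60 mm ahead
        theta = convergence_half_angle(0.005, 0.060)
        print(math.degrees(theta))  # about 2.4 degrees per channel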

  2. Railway clearance intrusion detection method with binocular stereo vision

    Science.gov (United States)

    Zhou, Xingfang; Guo, Baoqing; Wei, Wei

    2018-03-01

    During railway construction and operation, objects intruding into the railway clearance seriously threaten the safety of railway operation, so real-time intrusion detection is of great importance. To overcome the depth insensitivity and shadow interference of single-image methods, an intrusion detection method based on binocular stereo vision is proposed, which reconstructs the 3D scene to locate objects and judge clearance intrusion. The binocular cameras are calibrated with Zhang Zhengyou's method. In order to improve the 3D reconstruction speed, a suspicious region is first determined by a background difference method applied to a single camera's image sequence. Image rectification, stereo matching and 3D reconstruction are executed only when a suspicious region exists. A transformation matrix from the Camera Coordinate System (CCS) to the Track Coordinate System (TCS) is computed with the gauge constant and used to transfer the 3D point clouds into the TCS; the point clouds are then used to calculate the object position and intrusion in the TCS. Experiments in a railway scene show that the position precision is better than 10 mm. The method is an effective way to detect clearance intrusion and can satisfy the requirements of railway applications.
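
    The ROI-gated flow described above can be illustrated with standard OpenCV calls: a background difference on one camera's stream flags a suspicious region, the stereo stages run only when such a region exists, and reconstructed points are mapped from camera to track coordinates with a fixed homogeneous transform. The sketch below is a generic illustration under assumed names (background_gray, T_cam_to_track), not the authors' implementation.

        import cv2
        import numpy as np

        def suspicious_region(frame_gray, background_gray, thresh=30, min_area=500):
            """Background difference on a single camera; returns the bounding box
            (x, y, w, h) of the largest changed region, or None if nothing moved."""
            diff = cv2.absdiff(frame_gray, background_gray)
            _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            contours = [c for c in contours if cv2.contourArea(c) >= min_area]
            if not contours:
                return None
            return cv2.boundingRect(max(contours, key=cv2.contourArea))

        def camera_to_track(points_cam, T_cam_to_track):
            """Map Nx3 points from the camera frame to the track frame using a
            4x4 homogeneous transform (assumed known from the gauge constant)."""
            pts_h = np.hstack([points_cam, np.ones((len(points_cam), 1))])
            return (T_cam_to_track @ pts_h.T).T[:, :3]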

  3. Bubble behavior characteristics based on virtual binocular stereo vision

    Science.gov (United States)

    Xue, Ting; Xu, Ling-shuang; Zhang, Shang-zhen

    2018-01-01

    The three-dimensional (3D) behavior characteristics of bubble rising in gas-liquid two-phase flow are of great importance to study bubbly flow mechanism and guide engineering practice. Based on the dual-perspective imaging of virtual binocular stereo vision, the 3D behavior characteristics of bubbles in gas-liquid two-phase flow are studied in detail, which effectively increases the projection information of bubbles to acquire more accurate behavior features. In this paper, the variations of bubble equivalent diameter, volume, velocity and trajectory in the rising process are estimated, and the factors affecting bubble behavior characteristics are analyzed. It is shown that the method is real-time and valid, the equivalent diameter of the rising bubble in the stagnant water is periodically changed, and the crests and troughs in the equivalent diameter curve appear alternately. The bubble behavior characteristics as well as the spiral amplitude are affected by the orifice diameter and the gas volume flow.
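
    Once a bubble has been reconstructed frame by frame, the behaviour descriptors named above reduce to simple formulas: a bubble of volume V has equivalent diameter (6V/pi)^(1/3), and the rise velocity follows from successive 3D centroid positions. The snippet below is a generic illustration of those definitions, not the paper's processing chain.

        import numpy as np

        def equivalent_diameter(volume_mm3):
            """Diameter of the sphere with the same volume as the bubble."""
            return (6.0 * volume_mm3 / np.pi) ** (1.0 / 3.0)

        def rise_velocity(centroids_mm, dt_s):
            """Frame-to-frame 3D velocity (mm/s) from an (N, 3) centroid track
            sampled at a fixed interval dt_s."""
            return np.diff(np.asarray(centroids_mm), axis=0) / dt_s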

  4. An assembly system based on industrial robot with binocular stereo vision

    Science.gov (United States)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to quickly obtain the image regions that contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.
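
    As a hedged illustration of using a genetic algorithm for inverse kinematics, the sketch below evolves the joint angles of a planar two-link arm (assumed link lengths, not the industrial robot of the paper) towards a target end-effector position; the fitness is simply the forward-kinematics error.

        import numpy as np

        L1, L2 = 0.4, 0.3  # assumed link lengths (m)

        def forward(q):
            """End-effector position of a planar 2-link arm for joint angles q."""
            x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
            y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
            return np.array([x, y])

        def ga_ik(target, pop=60, gens=200, mut=0.1, rng=np.random.default_rng(0)):
            """Minimal GA: truncation selection, blend crossover, Gaussian mutation,
            elitism; minimises the distance between forward(q) and the target."""
            P = rng.uniform(-np.pi, np.pi, size=(pop, 2))
            for _ in range(gens):
                err = np.array([np.linalg.norm(forward(q) - target) for q in P])
                P = P[np.argsort(err)]
                elite = P[: pop // 4]                                    # best quarter
                parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
                alpha = rng.random((pop, 1))
                P = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # crossover
                P += rng.normal(0.0, mut, size=P.shape)                  # mutation
                P[0] = elite[0]                                          # keep the best
            return P[0]

        q = ga_ik(np.array([0.5, 0.2]))
        print(q, forward(q))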

  5. The wavelet transform and the suppression theory of binocular vision for stereo image compression

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, W.D. Jr [Argonne National Lab., IL (United States)]; Kenyon, R.V. [Illinois Univ., Chicago, IL (United States)]

    1996-08-01

    In this paper a method for compression of stereo images is presented. The proposed scheme is a frequency domain approach based on the suppression theory of binocular vision. By using the information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, in which the subbands convey the necessary frequency domain information.
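
    A hedged sketch of the general idea (not the authors' codec): because binocular suppression implies that the fine detail of one view contributes little to the fused percept, that view can be stored with its high-frequency wavelet subbands discarded. The example below uses PyWavelets and placeholder images.

        import numpy as np
        import pywt

        def compress_suppressed_view(img, wavelet="db2", levels=3, keep_levels=1):
            """Decompose the 'suppressed' view and zero out all but the coarsest
            keep_levels detail subbands before reconstruction."""
            coeffs = pywt.wavedec2(img.astype(np.float32), wavelet, level=levels)
            # coeffs = [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
            for i in range(1 + keep_levels, len(coeffs)):
                coeffs[i] = tuple(np.zeros_like(c) for c in coeffs[i])
            return pywt.waverec2(coeffs, wavelet)

        left = np.random.rand(256, 256)   # placeholder: dominant view, kept intact
        right = np.random.rand(256, 256)  # placeholder: suppressed view, simplified
        right_recon = compress_suppressed_view(right)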

  6. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process and obtain higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, the camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
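
    A minimal OpenCV sketch of the calibration flow described above: chessboard corner detection, per-camera intrinsics including radial and tangential distortion, then stereo extrinsics. The pattern size, square size and the image_pairs iterable are placeholders; an 8x6 inner-corner pattern gives the 48 corners mentioned in the abstract.

        import cv2
        import numpy as np

        PATTERN = (8, 6)                 # inner corners, 8 * 6 = 48
        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * 25.0  # assumed 25 mm squares

        obj_pts, left_pts, right_pts = [], [], []
        for gray_l, gray_r in image_pairs:   # assumed iterable of grayscale image pairs
            ok_l, c_l = cv2.findChessboardCorners(gray_l, PATTERN)
            ok_r, c_r = cv2.findChessboardCorners(gray_r, PATTERN)
            if ok_l and ok_r:
                obj_pts.append(objp)
                left_pts.append(c_l)
                right_pts.append(c_r)

        size = gray_l.shape[::-1]            # (width, height)
        _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
        _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
        rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
            obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
            flags=cv2.CALIB_FIX_INTRINSIC)
        print("stereo reprojection RMS:", rms)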

  7. Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision.

    Science.gov (United States)

    Tu, Junchao; Zhang, Liyan

    2018-01-12

    A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in a closed form by using the extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared to traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it costs much less training time and can provide a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. Calibration, projection and 3D reconstruction experiments are conducted to test the proposed method, and good results are obtained.
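
    The core of an ELM fit is a one-shot least-squares solve: the hidden-layer weights are drawn at random and only the output weights are computed, via a pseudo-inverse. The sketch below shows that closed form on placeholder arrays shaped like the abstract's setting (2-D control signals in, 3-D beam vectors out); it illustrates ELM in general, not the authors' calibration code.

        import numpy as np

        def elm_fit(X, Y, hidden=200, rng=np.random.default_rng(0)):
            """Train a single-hidden-layer feedforward network the ELM way:
            random input weights and biases, sigmoid activations, and output
            weights solved in closed form via the Moore-Penrose pseudo-inverse."""
            W = rng.normal(size=(X.shape[1], hidden))
            b = rng.normal(size=hidden)
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer outputs
            beta = np.linalg.pinv(H) @ Y             # output weights, closed form
            return W, b, beta

        def elm_predict(X, W, b, beta):
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
            return H @ beta

        # Placeholder data, not the paper's dataset: control signals -> beam vectors
        X = np.random.rand(500, 2)
        Y = np.random.rand(500, 3)
        W, b, beta = elm_fit(X, Y)
        pred = elm_predict(X, W, b, beta)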

  8. Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision

    Directory of Open Access Journals (Sweden)

    Junchao Tu

    2018-01-01

    A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in a closed form by using the extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared to traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it costs much less training time and can provide a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. Calibration, projection and 3D reconstruction experiments are conducted to test the proposed method, and good results are obtained.

  9. Obstacle Detection using Binocular Stereo Vision in Trajectory Planning for Quadcopter Navigation

    Science.gov (United States)

    Bugayong, Albert; Ramos, Manuel, Jr.

    2018-02-01

    Quadcopters are one of the most versatile unmanned aerial vehicles due to its vertical take-off and landing as well as hovering capabilities. This research uses the Sum of Absolute Differences (SAD) block matching algorithm for stereo vision. A complementary filter was used in sensor fusion to combine obtained quadcopter orientation data from the accelerometer and the gyroscope. PID control was implemented for the motor control and VFH+ algorithm was implemented for trajectory planning. Results show that the quadcopter was able to consistently actuate itself in the roll, yaw and z-axis during obstacle avoidance but was however found to be inconsistent in the pitch axis during forward and backward maneuvers due to the significant noise present in the pitch axis angle outputs compared to the roll and yaw axes.
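
    The SAD block matcher named above can be written down directly: for each pixel of the rectified left image, a window is compared against horizontally shifted windows on the same row of the right image, and the shift with the lowest sum of absolute differences is taken as the disparity. The brute-force sketch below (placeholder block size and disparity range, no optimisation) shows only the cost definition and is far slower than an on-board implementation.

        import numpy as np

        def sad_disparity(left, right, block=7, max_disp=64):
            """Naive SAD block matching on a rectified grayscale pair: for each left
            pixel, slide a block leftwards along the same row of the right image and
            keep the shift with the smallest sum of absolute differences."""
            h, w = left.shape
            half = block // 2
            disp = np.zeros((h, w), dtype=np.float32)
            L = left.astype(np.float32)
            R = right.astype(np.float32)
            for y in range(half, h - half):
                for x in range(half + max_disp, w - half):
                    patch = L[y - half:y + half + 1, x - half:x + half + 1]
                    best, best_d = np.inf, 0
                    for d in range(max_disp):
                        cand = R[y - half:y + half + 1, x - d - half:x - d + half + 1]
                        cost = np.abs(patch - cand).sum()
                        if cost < best:
                            best, best_d = cost, d
                    disp[y, x] = best_d
            return disp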

  10. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    Science.gov (United States)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find the correspondences of points representing dominant features in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges oriented along the vertical, horizontal, and two diagonal directions, but it incorrectly detected points on edges that do not lie along these four main directions. The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on the edges, excluding simple edges and leaving interesting points. This modification speeds up the extraction process by approximately 5 times. The Gradient Angle Variance (GAV), an operator which calculates the variance of the gradient angle in a window around the point under consideration, is then applied on the interesting points to exclude the redundant ones and leave the actual dominant ones. The matching phase is performed after the extraction of the dominant points in both stereo images. The matching starts with dominant points in the left image and does a local search, looking for corresponding dominant points in the right image. The search is geometrically constrained by the epipolar line of the parallel-axes stereo geometry and by the maximum disparity of the application environment. If one dominant point in the right image lies in the search area, then it is the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area. The correlation is used as
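
    The geometric gate used in the matching phase is easy to state concretely for parallel-axis stereo: a right-image candidate is admissible only if it lies on (nearly) the same scanline as the left-image dominant point and within the maximum expected disparity. The filter below is a generic sketch of that constraint; the GAV-based similarity threshold that breaks ties among surviving candidates is not reproduced here.

        def epipolar_candidates(left_pt, right_pts, max_disparity, row_tol=1):
            """Return the right-image dominant points that satisfy the parallel-axis
            epipolar and maximum-disparity constraints for one left-image point.
            left_pt and right_pts hold (x, y) pixel coordinates."""
            xl, yl = left_pt
            return [
                (xr, yr) for (xr, yr) in right_pts
                if abs(yr - yl) <= row_tol and 0 <= xl - xr <= max_disparity
            ]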

  11. Binocular combination in abnormal binocular vision.

    Science.gov (United States)

    Ding, Jian; Klein, Stanley A; Levi, Dennis M

    2013-02-08

    We investigated suprathreshold binocular combination in humans with abnormal binocular visual experience early in life. In the first experiment we presented the two eyes with equal but opposite phase shifted sine waves and measured the perceived phase of the cyclopean sine wave. Normal observers have balanced vision between the two eyes when the two eyes' images have equal contrast (i.e., both eyes contribute equally to the perceived image and perceived phase = 0°). However, in observers with strabismus and/or amblyopia, balanced vision requires a higher contrast image in the nondominant eye (NDE) than the dominant eye (DE). This asymmetry between the two eyes is larger than predicted from the contrast sensitivities or monocular perceived contrast of the two eyes and is dependent on contrast and spatial frequency: more asymmetric with higher contrast and/or spatial frequency. Our results also revealed a surprising NDE-to-DE enhancement in some of our abnormal observers. This enhancement is not evident in normal vision because it is normally masked by interocular suppression. However, in these abnormal observers the NDE-to-DE suppression was weak or absent. In the second experiment, we used the identical stimuli to measure the perceived contrast of a cyclopean grating by matching the binocular combined contrast to a standard contrast presented to the DE. These measures provide strong constraints for model fitting. We found asymmetric interocular interactions in binocular contrast perception, which was dependent on both contrast and spatial frequency in the same way as in phase perception. By introducing asymmetric parameters to the modified Ding-Sperling model including interocular contrast gain enhancement, we succeeded in accounting for both binocular combined phase and contrast simultaneously. Adding binocular contrast gain control to the modified Ding-Sperling model enabled us to predict the results of dichoptic and binocular contrast discrimination experiments

  12. Rebalancing binocular vision in amblyopia.

    Science.gov (United States)

    Ding, Jian; Levi, Dennis M

    2014-03-01

    Humans with amblyopia have an asymmetry in binocular vision: neural signals from the amblyopic eye are suppressed in the cortex by the fellow eye. The purpose of this study was to develop new models and methods for rebalancing this asymmetric binocular vision by manipulating the contrast and luminance in the two eyes. We measured the perceived phase of a cyclopean sinewave by asking normal and amblyopic observers to indicate the apparent location (phase) of the dark trough in the horizontal cyclopean sine wave relative to a black horizontal reference line, and used the same stimuli to measure perceived contrast by matching the binocular combined contrast to a standard contrast presented to one eye. We varied both the relative contrast and luminance of the two eyes' inputs, in order to rebalance the asymmetric binocular vision. Amblyopic binocular vision becomes more and more asymmetric the higher the stimulus contrast or spatial frequency. Reanalysing our previous data, we found that, at a given spatial frequency, the binocular asymmetry could be described by a log-linear formula with two parameters, one for the maximum asymmetry and one for the rate at which the binocular system becomes asymmetric as the contrast increases. Our new data demonstrate that reducing the dominant eye's mean luminance reduces its suppression of the non-dominant eye, and therefore rebalances the asymmetric binocular vision. While the binocular asymmetry in amblyopic vision can be rebalanced by manipulating the relative contrast or luminance of the two eyes at a given spatial frequency and contrast, it is very difficult or even impossible to rebalance the asymmetry for all visual conditions. Nonetheless, wearing a neutral density filter before the dominant eye (or increasing the mean luminance in the non-dominant eye) may be more beneficial than the traditional method of patching the dominant eye for treating amblyopia. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  13. Stereo Vision Inside Tire

    Science.gov (United States)

    2015-08-21

    Els, P.S.; Becker, C.M. (University of Pretoria). Final report for contract W911NF-14-1-0590 on the development of a stereo vision system that can be mounted inside a rolling tire, known as T2-CAM for Tire-Terrain CAMera.

  14. Amblyopia and binocular vision.

    Science.gov (United States)

    Birch, Eileen E

    2013-03-01

    Amblyopia is the most common cause of monocular visual loss in children, affecting 1.3%-3.6% of children. Current treatments are effective in reducing the visual acuity deficit but many amblyopic individuals are left with residual visual acuity deficits, ocular motor abnormalities, deficient fine motor skills, and risk for recurrent amblyopia. Using a combination of psychophysical, electrophysiological, imaging, risk factor analysis, and fine motor skill assessment, the primary role of binocular dysfunction in the genesis of amblyopia and the constellation of visual and motor deficits that accompany the visual acuity deficit has been identified. These findings motivated us to evaluate a new, binocular approach to amblyopia treatment with the goals of reducing or eliminating residual and recurrent amblyopia and of improving the deficient ocular motor function and fine motor skills that accompany amblyopia. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panorama vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, which is capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, mathematical model and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications that will benefit from PSSV.

  16. Research on three-dimensional reconstruction method based on binocular vision

    Science.gov (United States)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision with broad application prospects in many fields, such as aerial mapping, visual navigation, motion analysis and industrial inspection. In this paper, research is carried out on binocular stereo camera calibration, image feature extraction and stereo matching. In the camera calibration module, the internal parameters of a single camera are obtained using the checkerboard pattern of Zhang Zhengyou's method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After the feature points have been matched, the correspondence between matching points and 3D object points can be established using the calibrated camera parameters, which yields the 3D information.
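
    A compact OpenCV version of the matching-and-reconstruction step described above: the rectification maps and the reprojection matrix Q are assumed to come from an earlier cv2.stereoRectify / cv2.initUndistortRectifyMap call based on the Zhang-style calibration, SGBM produces the disparity, and reprojectImageTo3D turns disparity into 3D points. The parameter values are placeholders.

        import cv2

        # Rectify using maps produced earlier by cv2.stereoRectify / initUndistortRectifyMap
        rect_l = cv2.remap(img_l, map1x, map1y, cv2.INTER_LINEAR)
        rect_r = cv2.remap(img_r, map2x, map2y, cv2.INTER_LINEAR)

        sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                                     P1=8 * 3 * 5 ** 2, P2=32 * 3 * 5 ** 2,
                                     uniquenessRatio=10, speckleWindowSize=100,
                                     speckleRange=2)
        # SGBM returns fixed-point disparity scaled by 16
        disparity = sgbm.compute(rect_l, rect_r).astype("float32") / 16.0

        points_3d = cv2.reprojectImageTo3D(disparity, Q)  # Q from cv2.stereoRectify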

  17. Relating binocular and monocular vision in strabismic and anisometropic amblyopia.

    Science.gov (United States)

    Agrawal, Ritwick; Conner, Ian P; Odom, J V; Schwartz, Terry L; Mendola, Janine D

    2006-06-01

    To examine deficits in monocular and binocular vision in adults with amblyopia and to test the following 2 hypotheses: (1) Regardless of clinical subtype, the degree of impairment in binocular integration predicts the pattern of monocular acuity deficits. (2) Subjects who lack binocular integration exhibit the most severe interocular suppression. Seven subjects with anisometropia, 6 subjects with strabismus, and 7 control subjects were tested. Monocular tests included Snellen acuity, grating acuity, Vernier acuity, and contrast sensitivity. Binocular tests included Titmus stereo test, binocular motion integration, and dichoptic contrast masking. As expected, both groups showed deficits in monocular acuity, with subjects with strabismus showing greater deficits in Vernier acuity. Both amblyopic groups were then characterized according to the degree of residual stereoacuity and binocular motion integration ability, and 67% of subjects with strabismus compared with 29% of subjects with anisometropia were classified as having "nonbinocular" vision according to our criterion. For this nonbinocular group, Vernier acuity is most impaired. In addition, the nonbinocular group showed the most dichoptic contrast masking of the amblyopic eye and the least dichoptic contrast masking of the fellow eye. The degree of residual binocularity and interocular suppression predicts monocular acuity and may be a significant etiological mechanism of vision loss.

  18. The research of binocular vision ranging system based on LabVIEW

    Science.gov (United States)

    Li, Shikuan; Yang, Xu

    2017-10-01

    Based on the principle of binocular parallax ranging, a binocular vision ranging system is designed and built. The stereo matching algorithm is realized with LabVIEW software, and camera calibration and distance measurement are completed. The error analysis shows that the system is fast and effective and can be used in corresponding industrial settings.
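
    The ranging principle behind such a system is the standard parallel-camera triangulation relation Z = f·B/d: depth equals focal length times baseline divided by disparity. A one-line sketch with made-up numbers:

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Parallel-axis binocular ranging: Z = f * B / d."""
            return focal_px * baseline_m / disparity_px

        print(depth_from_disparity(64.0, 800.0, 0.12))  # 1.5 m for an assumed rig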

  19. Colour-grapheme synaesthesia affects binocular vision

    Directory of Open Access Journals (Sweden)

    Chris L.E. Paffen

    2011-11-01

    In colour-grapheme synaesthesia, non-coloured graphemes are perceived as being inherently coloured. In recent years, it has become evident that synaesthesia-inducing graphemes can affect visual processing in a manner comparable to real, physical colours. Here, we exploit the phenomenon of binocular rivalry in which incompatible images presented dichoptically compete for conscious expression. Importantly, the competition only arises if the two images are sufficiently different; if the difference between the images is small, the images will fuse into a single mixed percept. We show that achromatic graphemes that induce synaesthetic colour percepts evoke binocular rivalry, while without the synaesthetic percept, they do not. That is, compared to achromatically perceived graphemes, synaesthesia-inducing graphemes increase the predominance of binocular rivalry over binocular fusion. This finding shows that the synaesthetic colour experience can provide the conditions for evoking binocular rivalry, much like stimulus features that induce rivalry in normal vision.

  20. Restoration of binocular vision in amblyopia.

    Science.gov (United States)

    Hess, R F; Mansouri, B; Thompson, B

    2011-09-01

    To develop a treatment for amblyopia based on re-establishing binocular vision. A novel procedure is outlined for measuring and reducing the extent to which the fixing eye suppresses the fellow amblyopic eye in adults with amblyopia. We hypothesize that suppression renders a structurally binocular system functionally monocular. We demonstrate that strabismic amblyopes can combine information normally between their eyes under viewing conditions where suppression is reduced by presenting stimuli of different contrast to each eye. Furthermore, we show that prolonged periods of binocular combination lead to a strengthening of binocular vision in strabismic amblyopes and eventual combination of binocular information under natural viewing conditions (stimuli of the same contrast in each eye). Concomitant improvement in monocular acuity of the amblyopic eye occurs with this reduction in suppression and strengthening of binocular fusion. Additionally, stereoscopic function was established in the majority of patients tested. We have implemented this approach on a head-mounted device as well as on a handheld iPod. This provides the basis for a new treatment of amblyopia, one that is purely binocular and aimed at reducing suppression as a first step.

  1. Obstacle detection by stereo vision of fast correlation matching

    International Nuclear Information System (INIS)

    Jeon, Seung Hoon; Kim, Byung Kook

    1997-01-01

    Mobile robot navigation requires acquiring the positions of obstacles in real time. A common method of performing this sensing is stereo vision. In this paper, indoor images containing various shapes of obstacles are acquired by binocular vision. From these stereo image data, in order to obtain distances to obstacles, we must deal with the correspondence problem, i.e., find the region in the other image corresponding to the projection of the same surface region. We present an improved correlation matching method that enhances the speed of arbitrary obstacle detection. The results are faster and simpler matching, robustness to noise, and improved precision. Experimental results under actual surroundings are presented to demonstrate the performance. (author)

  2. Surrounding Moving Obstacle Detection for Autonomous Driving Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Hao Sun

    2013-06-01

    Detection and tracking of surrounding moving obstacles such as vehicles and pedestrians are crucial for the safety of mobile robots and autonomous vehicles, especially in urban driving scenarios. This paper presents a novel framework for surrounding moving obstacle detection using binocular stereo vision. The contributions of our work are threefold. Firstly, a multiview feature matching scheme is presented for simultaneous stereo correspondence and motion correspondence searching. Secondly, the multiview geometry constraint derived from the relative camera positions in pairs of consecutive stereo views is exploited for surrounding moving obstacle detection. Thirdly, an adaptive particle filter is proposed for tracking multiple moving obstacles in surrounding areas. Experimental results from real-world driving sequences demonstrate the effectiveness and robustness of the proposed framework.

  3. Stereo vision techniques for telescience

    Science.gov (United States)

    Hewett, S.

    1990-02-01

    The Botanic Experiment is one of the pilot experiments in the Telescience Test Bed program at the ESTEC research and technology center of the European Space Agency. The aim of the Telescience Test Bed is to develop the techniques required by an experimenter using a ground based work station for remote control, monitoring, and modification of an experiment operating on a space platform. The purpose of the Botanic Experiment is to examine the growth of seedlings under various illumination conditions with a video camera from a number of viewpoints throughout the duration of the experiment. This paper describes the Botanic Experiment and the points addressed in developing a stereo vision software package to extract quantitative information about the seedlings from the recorded video images.

  4. Binocular Vision in Chronic Fatigue Syndrome.

    Science.gov (United States)

    Godts, Daisy; Moorkens, Greta; Mathysen, Danny G P

    2016-01-01

    To compare binocular vision measurements between Chronic Fatigue Syndrome (CFS) patients and healthy controls. Forty-one CFS patients referred by the Reference Centre for Chronic Fatigue Syndrome of the Antwerp University Hospital and forty-one healthy volunteers, matched for age and gender, underwent a complete orthoptic examination. Data on visual acuity, eye position, fusion amplitude, stereopsis, ocular motility, convergence, and accommodation were compared between both groups. Patients with CFS showed significantly smaller fusion amplitudes, reduced convergence capacity and a reduced accommodation range compared with the healthy controls; convergence and accommodation should therefore be routinely examined in these patients. CFS patients will benefit from reading glasses, either with or without prism correction, at an earlier stage than their healthy peers. Convergence exercises may be beneficial for CFS patients, despite the fact that they might be very tiring. Further research will be necessary to draw conclusions about the efficacy of treatment, especially regarding convergence exercises. To our knowledge, this is the first prospective study evaluating binocular vision in CFS patients. © 2016 Board of Regents of the University of Wisconsin System, American Orthoptic Journal, Volume 66, 2016, ISSN 0065-955X, E-ISSN 1553-4448.

  5. Pengukuran Jarak Berbasiskan Stereo Vision (Stereo Vision-Based Distance Measurement)

    Directory of Open Access Journals (Sweden)

    Iman Herwidiana Kartowisastro

    2010-12-01

    Measuring the distance to an object can be done in a variety of ways, including by making use of distance measuring sensors such as ultrasonic sensors, or by using a vision-based approach. The latter has advantages in terms of flexibility, namely that the monitored object has essentially no restrictions on the materials involved, but at the same time it has its own difficulties associated with object orientation and the state of the room in which the object is located. To overcome this problem, this study examines the possibility of using stereo vision to measure the distance to an object. The system was developed starting from image extraction and extraction of the characteristics of the objects contained in the image, through to the visual distance measurement process, with two separate cameras placed 70 cm apart. The measured object can be in the range of 50 cm - 130 cm, with a percentage error of 5.53%. Lighting conditions (homogeneity and intensity) have a great influence on the accuracy of the measurement results.

  6. Stereo vision with distance and gradient recognition

    Science.gov (United States)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition and movement to a target spot. With sensors that use infrared rays and ultrasound, a robot can deal with urgent or dangerous situations, but stereo vision of three-dimensional space gives a robot much more powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot is confronted with an inclined plane or steps, particular algorithms are needed for it to carry on without failure. This study developed an algorithm for recognizing the distance and gradient of the environment through a stereo matching process.

  7. Binocular Vision-Based Position and Pose of Hand Detection and Tracking in Space

    Science.gov (United States)

    Jun, Chen; Wenjun, Hou; Qing, Sheng

    After studying image segmentation, the CamShift target tracking algorithm and the stereo vision model of space, an improved algorithm based on frame differencing and a new space point positioning model are proposed, and a binocular visual motion tracking system is constructed to verify the improved algorithm and the new model. The problems of detecting and tracking the spatial position and pose of the hand are thereby solved.
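
    A rough sketch of the detect-then-track flow mentioned above, using standard OpenCV calls: frame differencing seeds a search window, and CamShift then follows the hand in the hue back-projection of each frame. The threshold and the precomputed hue histogram hist are placeholders, and this is not the improved algorithm of the paper.

        import cv2
        import numpy as np

        def seed_window_from_motion(prev_gray, gray, thresh=25):
            """Frame difference -> bounding box of the largest moving blob."""
            diff = cv2.absdiff(gray, prev_gray)
            _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            return cv2.boundingRect(max(contours, key=cv2.contourArea))

        def track_with_camshift(frame, hist, window):
            """One CamShift update on the hue back-projection of the current frame;
            hist is assumed to be a normalized hue histogram of the hand region."""
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
            rot_rect, window = cv2.CamShift(backproj, window, crit)
            return rot_rect, window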

  8. Symptomatology associated with accommodative and binocular vision anomalies

    Directory of Open Access Journals (Sweden)

    Ángel García-Muñoz

    2014-10-01

    Conclusions: There is a wide disparity of symptoms related to accommodative and binocular dysfunctions in the scientific literature, most of which are associated with near vision and binocular dysfunctions. The only psychometrically validated questionnaires that we found (n=3) were related to convergence insufficiency and to visual dysfunctions in general, and there are no specific questionnaires for other anomalies.

  9. GPU-based real-time trinocular stereo vision

    Science.gov (United States)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been proved to provide higher accuracy in stereo matching, which benefits applications like distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse the disparities computed in different directions, following various image processing techniques, in order to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.

  10. Stereo vision enhances the learning of a catching skill

    NARCIS (Netherlands)

    Mazyn, L.; Lenoir, M.; Montagne, G.; Delaey, C; Savelsbergh, G.J.P.

    2007-01-01

    The aim of this study was to investigate the contribution of stereo vision to the acquisition of a natural interception task. Poor catchers with good (N = 8; Stereo+) and weak (N = 6; Stereo-) stereo vision participated in an intensive training program spread over 2 weeks, during which they caught

  11. Optoelectronic stereoscopic device for diagnostics, treatment, and developing of binocular vision

    Science.gov (United States)

    Pautova, Larisa; Elkhov, Victor A.; Ovechkis, Yuri N.

    2003-08-01

    Operation of the device is based on the alternating generation of pictures for the left and right eyes on the monitor screen. A controller sends pulses to the LC glasses so that the shutter for the left or right eye opens synchronously with the corresponding picture. The device provides a switching frequency of more than 100 Hz, so flickering is absent. Thus, separate images are demonstrated to the left eye and to the right eye in turn, without the patient being aware of it, creating conditions of binocular perception close to natural ones without any additional separation of the visual fields. Coordination of the LC-cell transfer characteristic with the timing parameters of the monitor screen has made it possible to improve stereo image quality. A complicated problem of computer stereo images with LC glasses is the so-called 'ghosts' - noise images that reach the blocked eye. We reduced their influence by adapting the stereo images to the phosphor and LC-cell characteristics. The device is intended for the diagnostics and treatment of strabismus, amblyopia and other impairments of binocular and stereoscopic vision; for cultivating, training and developing stereoscopic vision; for measurements of horizontal and vertical phoria, fusion reserves, stereovision acuity and more; and for fixing the borders of central scotoma, as well as suppression scotoma in strabismus.

  12. Binocular vision in amblyopia: structure, suppression and plasticity.

    Science.gov (United States)

    Hess, Robert F; Thompson, Benjamin; Baker, Daniel H

    2014-03-01

    The amblyopic visual system was once considered to be structurally monocular. However, it is now evident that the capacity for binocular vision is present in many observers with amblyopia. This has led to new techniques for quantifying suppression that have provided insights into the relationship between suppression and the monocular and binocular visual deficits experienced by amblyopes. Furthermore, new treatments are emerging that directly target suppressive interactions within the visual cortex and, on the basis of initial data, appear to improve both binocular and monocular visual function, even in adults with amblyopia. The aim of this review is to provide an overview of recent studies that have investigated the structure, measurement and treatment of binocular vision in observers with strabismic, anisometropic and mixed amblyopia. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  13. Binocular vision in amblyopia : structure, suppression and plasticity

    OpenAIRE

    Hess, Robert F; Thompson, Benjamin; Baker, Daniel Hart

    2014-01-01

    The amblyopic visual system was once considered to be structurally monocular. However, it is now evident that the capacity for binocular vision is present in many observers with amblyopia. This has led to new techniques for quantifying suppression that have provided insights into the relationship between suppression and the monocular and binocular visual deficits experienced by amblyopes. Furthermore, new treatments are emerging that directly target suppressive interactions within the visual cor...

  14. Neuroimaging of amblyopia and binocular vision: a review.

    Science.gov (United States)

    Joly, Olivier; Frankó, Edit

    2014-01-01

    Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Amblyopia, first considered as a monocular disorder, is now often seen as a primarily binocular disorder resulting in more and more studies examining the binocular deficits in the patients. The neural mechanisms of amblyopia are not completely understood even though they have been investigated with electrophysiological recordings in animal models and more recently with neuroimaging techniques in humans. In this review, we summarize the current knowledge about the brain regions that underlie the visual deficits associated with amblyopia with a focus on binocular vision using functional magnetic resonance imaging. The first studies focused on abnormal responses in the primary and secondary visual areas whereas recent evidence shows that there are also deficits at higher levels of the visual pathways within the parieto-occipital and temporal cortices. These higher level areas are part of the cortical network involved in 3D vision from binocular cues. Therefore, reduced responses in these areas could be related to the impaired binocular vision in amblyopic patients. Promising new binocular treatments might at least partially correct the activation in these areas. Future neuroimaging experiments could help to characterize the brain response changes associated with these treatments and help devise them.

  15. Neuroimaging of amblyopia and binocular vision: a review

    Directory of Open Access Journals (Sweden)

    Olivier eJoly

    2014-08-01

    Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Amblyopia, first considered as a monocular disorder, is now often seen as a primarily binocular disorder, resulting in more and more studies examining the binocular deficits in the patients. The neural mechanisms of amblyopia are not completely understood even though they have been investigated with electrophysiological recordings in animal models and more recently with neuroimaging techniques in humans. In this review, we summarise the current knowledge about the brain regions that underlie the visual deficits associated with amblyopia, with a focus on binocular vision, using functional magnetic resonance imaging (fMRI). The first studies focused on abnormal responses in the primary and secondary visual areas, whereas recent evidence shows that there are also deficits at higher levels of the visual pathways within the parieto-occipital and temporal cortices. These higher level areas are part of the cortical network involved in 3D vision from binocular cues. Therefore, reduced responses in these areas could be related to the impaired binocular vision in amblyopic patients. Promising new binocular treatments might at least partially correct the activation in these areas. Future neuroimaging experiments could help to characterise the brain response changes associated with these treatments and help devise them.

  16. Binocular stereo-navigation for three-dimensional thoracoscopic lung resection.

    Science.gov (United States)

    Kanzaki, Masato; Isaka, Tamami; Kikkawa, Takuma; Sakamoto, Kei; Yoshiya, Takehito; Mitsuboshi, Shota; Oyama, Kunihiro; Murasugi, Masahide; Onuki, Takamasa

    2015-05-08

    This study investigated the efficacy of binocular stereo-navigation during three-dimensional (3-D) thoracoscopic sublobar resection (TSLR). From July 2001, the authors' department began to use a virtual 3-D pulmonary model on a personal computer (PC) for preoperative simulation before thoracoscopic lung resection and for intraoperative navigation during the operation. From 120 1-mm thin-slice high-resolution computed tomography (HRCT) images of the tumor and hilum, the homemade software CTTRY allowed surgeons to mark pulmonary arteries, veins, bronchi, and the tumor on the HRCT images manually. The location and thickness of pulmonary vessels and bronchi were rendered as cylinders of various sizes. With the resulting numerical data, a 3-D image was reconstructed with the Metasequoia shareware. Subsequently, the data of the reconstructed 3-D images were converted to Autodesk data, which appeared on a stereoscopic-vision display. Surgeons wearing 3-D polarized glasses performed 3-D TSLR. The patients consisted of 5 men and 5 women, ranging in age from 65 to 84 years. The clinical diagnoses were primary lung cancer in 6 cases and a solitary metastatic lung tumor in 4 cases. Eight single segmentectomies, one bi-segmentectomy, and one bi-subsegmentectomy were performed. Hilar lymphadenectomy with mediastinal lymph node sampling was performed in the 6 primary lung cancers, whereas the four patients with metastatic lung tumors underwent resection without lymphadenectomy. The operation time and estimated blood loss ranged from 125 to 333 min and from 5 to 187 g, respectively. There were no intraoperative complications and no conversion to open thoracotomy or lobectomy. The postoperative courses of eight patients were uneventful, and the other two patients had a prolonged lung air leak. The drainage duration and hospital stay ranged from 2 to 13 days and from 8 to 19 days, respectively. The tumor histology of primary lung cancer showed 5 adenocarcinoma and 1 squamous cell carcinoma. All primary lung

  17. Naturalistic depth perception and binocular vision

    OpenAIRE

    Maiello, G.

    2017-01-01

    Humans continuously move both their eyes to redirect their foveae to objects at new depths. To correctly execute these complex combinations of saccades, vergence eye movements and accommodation changes, the visual system makes use of multiple sources of depth information, including binocular disparity and defocus. Furthermore, during development, both fine-tuning of oculomotor control as well as correct eye growth are likely driven by complex interactions between eye movements, accommodation,...

  18. Refractive and binocular vision status of optometry students, Ghana ...

    African Journals Online (AJOL)

    To investigate the refractive and non-strabismic binocular vision status of Optometry students in University of Cape Coast, Ghana and to establish any associations between these conditions. A cross sectional study of 105 Optometry students were taken through a comprehensive optometric examination to investigate the ...

  19. AN AUTONOMOUS GPS-DENIED UNMANNED VEHICLE PLATFORM BASED ON BINOCULAR VISION FOR PLANETARY EXPLORATION

    Directory of Open Access Journals (Sweden)

    M. Qin

    2018-04-01

    Vision-based navigation has become an attractive solution for autonomous navigation for planetary exploration. This paper presents our work of designing and building an autonomous vision-based GPS-denied unmanned vehicle and developing an ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine, and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment) modules. Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has the potential for future planetary exploration.

  20. An Autonomous Gps-Denied Unmanned Vehicle Platform Based on Binocular Vision for Planetary Exploration

    Science.gov (United States)

    Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.

    2018-04-01

    Vision-based navigation has become an attractive solution for autonomous navigation for planetary exploration. This paper presents our work of designing and building an autonomous vision-based GPS-denied unmanned vehicle and developing an ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine, and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment) modules. Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has the potential for future planetary exploration.
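
    A minimal two-frame visual-odometry step in the spirit of the pipeline above (feature matching followed by relative pose from the essential matrix). This generic monocular sketch with a placeholder intrinsic matrix K is not the ARFM method itself, which additionally exploits the stereo pair and bundle adjustment.

        import cv2
        import numpy as np

        def relative_pose(img0, img1, K):
            """Estimate R, t (up to scale) between two grayscale frames using ORB
            features, an essential matrix with RANSAC, and cheirality-checked pose
            recovery."""
            orb = cv2.ORB_create(2000)
            kp0, des0 = orb.detectAndCompute(img0, None)
            kp1, des1 = orb.detectAndCompute(img1, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des0, des1)
            pts0 = np.float32([kp0[m.queryIdx].pt for m in matches])
            pts1 = np.float32([kp1[m.trainIdx].pt for m in matches])
            E, mask = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=mask)
            return R, t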

  1. Symptomatology associated with accommodative and binocular vision anomalies.

    Science.gov (United States)

    García-Muñoz, Ángel; Carbonell-Bonete, Stela; Cacho-Martínez, Pilar

    2014-01-01

    To determine the symptoms associated with accommodative and non-strabismic binocular dysfunctions and to assess the methods used to obtain the subjects' symptoms. We conducted a scoping review of articles published between 1988 and 2012 that analysed any aspect of the symptomatology associated with accommodative and non-strabismic binocular dysfunctions. The literature search was performed in Medline (PubMed), CINAHL, PsycINFO and FRANCIS. A total of 657 articles were identified, and 56 met the inclusion criteria. We found 267 different ways of naming the symptoms related to these anomalies, which we grouped into 34 symptom categories. Of the 56 studies, 35 employed questionnaires and 21 obtained the symptoms from clinical histories. We found 11 questionnaires, of which only 3 had been validated: the convergence insufficiency symptom survey (CISS V-15) and CIRS parent version, both specific for convergence insufficiency, and the Conlon survey, developed for visual anomalies in general. The most widely used questionnaire (21 studies) was the CISS V-15. Of the 34 categories of symptoms, the most frequently mentioned were: headache, blurred vision, diplopia, visual fatigue, and movement or flicker of words at near vision, which were fundamentally related to near vision and binocular anomalies. There is a wide disparity of symptoms related to accommodative and binocular dysfunctions in the scientific literature, most of which are associated with near vision and binocular dysfunctions. The only psychometrically validated questionnaires that we found (n=3) were related to convergence insufficiency and to visual dysfunctions in general, and there are no specific questionnaires for other anomalies. Copyright © 2014. Published by Elsevier Espana.

  2. Fisiologia da visão binocular Physiology of binocular vision

    Directory of Open Access Journals (Sweden)

    Harley E. A. Bicas

    2004-02-01

    The binocular vision of human beings results from the almost complete superimposition of the monocular visual fields, which allows a much finer perceptual discrimination of the egocentric localization of objects in space (stereopsis), but only within a very narrow band (the horopter). Before and beyond it, diplopia and confusion are present, so that a physiologic (cortical) suppression is necessary to prevent them from becoming conscious. The geometry of the horopter and its physiologic implications (Hillebrand's deviation, Kundt's partition, Panum's area, stereoscopic vision) are analyzed, as well as some clinical aspects of normal binocular vision (simultaneous perception, fusion, stereoscopic vision) and of adaptations to abnormal states (pathologic suppression, amblyopia, abnormal retinal correspondence).

  3. The disparate histories of binocular vision and binaural hearing.

    Science.gov (United States)

    Wade, Nicholas J

    2018-01-01

    Vision and hearing are dependent on disparities of spatial patterns received by two eyes and on time and intensity differences to two ears. However, the experiences of a single world have masked attention to these disparities. While eyes and ears are paired, there has not been parity in the attention directed to their functioning. Phenomena involving binocular vision have been commented upon since antiquity, whereas those about binaural hearing are much more recent. This history is compared with respect to the experimental manipulations of dichoptic and dichotic stimuli and the instruments used to stimulate the paired organs. Binocular color mixing led to studies of binaural hearing, and direction and distance in visual localization were analyzed before those for auditory localization. Experimental investigations began in the nineteenth century with the invention of instruments like the stereoscope and pseudoscope, soon to be followed by their binaural equivalents, the stethophone and pseudophone.

  4. Stereo vision based automated grasp planning

    International Nuclear Information System (INIS)

    Wilhelmsen, K.; Huber, L.; Silva, D.; Grasz, E.; Cadapan, L.

    1995-02-01

    The Department of Energy has a need to treat existing nuclear waste. Hazardous waste stored in old warehouses needs to be sorted and treated to meet environmental regulations. Lawrence Livermore National Laboratory is currently experimenting with automated manipulation of unknown objects for sorting, treatment, and detailed inspection. To accomplish these tasks, three existing technologies were expanded to meet the increasing requirements. First, a binocular vision range sensor was combined with a surface modeling system to make virtual images of unknown objects. Then, using the surface model information, stable grasps of the unknown-shaped objects were planned algorithmically utilizing a limited set of robotic grippers. This paper is an expansion of previous work and discusses the grasp planning algorithm.

  5. Clustered features for use in stereo vision SLAM

    CSIR Research Space (South Africa)

    Joubert, D

    2010-07-01

    Full Text Available SLAM, or simultaneous localization and mapping, is a key component in the development of truly independent robots. Vision-based SLAM utilising stereo vision is a promising approach to SLAM but it is computationally expensive and difficult...

  6. Recovering stereo vision by squashing virtual bugs in a virtual reality environment.

    Science.gov (United States)

    Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M

    2016-06-19

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity-the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task-a 'bug squashing' game-in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information.This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Author(s).

  7. Recovering stereo vision by squashing virtual bugs in a virtual reality environment

    Science.gov (United States)

    Vedamurthy, Indu; Knill, David C.; Huang, Samuel J.; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne

    2016-01-01

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity—the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task—a ‘bug squashing’ game—in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269607

  8. Efficacy of vision therapy in children with learning disability and associated binocular vision anomalies.

    Science.gov (United States)

    Hussaindeen, Jameel Rizwana; Shah, Prerana; Ramani, Krishna Kumar; Ramanujan, Lalitha

    To report the frequency of binocular vision (BV) anomalies in children with specific learning disorders (SLD) and to assess the efficacy of vision therapy (VT) in children with a non-strabismic binocular vision anomaly (NSBVA). The study was carried out at a centre for learning disability (LD). Comprehensive eye examination and binocular vision assessment were carried out for 94 children (mean (SD) age: 15 (2.2) years) diagnosed with specific learning disorder. BV assessment was done for children with best corrected visual acuity of ≥6/9 - N6 who were cooperative for examination and free from any ocular pathology. For children with a diagnosis of NSBVA (n=46), 24 children were randomized to VT and no intervention was provided to the other 22 children who served as experimental controls. At the end of 10 sessions of vision therapy, BV assessment was performed for both the intervention and non-intervention groups. Binocular vision anomalies were found in 59 children (62.8%), among which 22% (n=13) had strabismic binocular vision anomalies (SBVA) and 78% (n=46) had an NSBVA. Accommodative infacility (AIF) was the commonest NSBVA, found in 67%, followed by convergence insufficiency (CI) in 25%. Post-vision therapy, the intervention group showed significant improvement in all the BV parameters (Wilcoxon signed rank test, p<0.05) except negative fusional vergence. Children with specific learning disorders have a high frequency of binocular vision disorders and vision therapy plays a significant role in improving the BV parameters. Children with SLD should be screened for BV anomalies as it could potentially be an added hindrance to the reading difficulty in this special population. Copyright © 2017 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  9. Efficacy of vision therapy in children with learning disability and associated binocular vision anomalies

    Directory of Open Access Journals (Sweden)

    Jameel Rizwana Hussaindeen

    2018-01-01

    Conclusion: Children with specific learning disorders have a high frequency of binocular vision disorders and vision therapy plays a significant role in improving the BV parameters. Children with SLD should be screened for BV anomalies as it could potentially be an added hindrance to the reading difficulty in this special population.

  10. Computer-enhanced stereoscopic vision in a head-mounted operating binocular

    International Nuclear Information System (INIS)

    Birkfellner, Wolfgang; Figl, Michael; Matula, Christian; Hummel, Johann; Hanel, Rudolf; Imhof, Herwig; Wanschitz, Felix; Wagner, Arne; Watzinger, Franz; Bergmann, Helmar

    2003-01-01

    Based on the Varioscope, a commercially available head-mounted operating binocular, we have developed the Varioscope AR, a see through head-mounted display (HMD) for augmented reality visualization that seamlessly fits into the infrastructure of a surgical navigation system. We have assessed the extent to which stereoscopic visualization improves target localization in computer-aided surgery in a phantom study. In order to quantify the depth perception of a user aiming at a given target, we have designed a phantom simulating typical clinical situations in skull base surgery. Sixteen steel spheres were fixed at the base of a bony skull, and several typical craniotomies were applied. After having taken CT scans, the skull was filled with opaque jelly in order to simulate brain tissue. The positions of the spheres were registered using VISIT, a system for computer-aided surgical navigation. Then attempts were made to locate the steel spheres with a bayonet probe through the craniotomies using VISIT and the Varioscope AR as a stereoscopic display device. Localization of targets 4 mm in diameter using stereoscopic vision and additional visual cues indicating target proximity had a success rate (defined as a first-trial hit rate) of 87.5%. Using monoscopic vision and target proximity indication, the success rate was found to be 66.6%. Omission of visual hints on reaching a target yielded a success rate of 79.2% in the stereo case and 56.25% with monoscopic vision. Time requirements for localizing all 16 targets ranged from 7.5 min (stereo, with proximity cues) to 10 min (mono, without proximity cues). Navigation error is primarily governed by the accuracy of registration in the navigation system, whereas the HMD does not appear to influence localization significantly. We conclude that stereo vision is a valuable tool in augmented reality guided interventions. (note)

  11. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    Science.gov (United States)

    2010-03-01

    An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process. Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D

  12. Bilateral symmetry in vision and influence of ocular surgical procedures on binocular vision: A topical review

    Directory of Open Access Journals (Sweden)

    Samuel Arba Mosquera

    2016-10-01

    Full Text Available We analyze the role of bilateral symmetry in enhancing binocular visual ability in human eyes, and further explore how efficiently bilateral symmetry is preserved in different ocular surgical procedures. The inclusion criterion for this review was strict relevance to the clinical questions under research. Enantiomorphism has been reported in lower order aberrations, higher order aberrations and cone directionality. When contrast differs in the two eyes, binocular acuity is better than monocular acuity of the eye that receives higher contrast. Anisometropia occurs uncommonly in large populations. Anisometropia seen in infancy and childhood is transitory and of little consequence for visual acuity. Binocular summation of contrast signals declines with age, independent of inter-ocular differences. The symmetric associations between the right and left eye could be explained by the symmetry in pupil offset and visual axis, which is always nasal in both eyes. Binocular summation mitigates poor visual performance under low luminance conditions, and strong inter-ocular disparity detrimentally affects binocular summation. Considerable symmetry of response exists in fellow eyes of patients undergoing myopic PRK and LASIK; however, the methods used to determine whether or not symmetry is maintained consist of comparing individual terms in a variety of ad hoc ways both before and after the refractive surgery, ignoring the fact that retinal image quality for any individual is based on the sum of all terms. The analysis of bilateral symmetry should be related to the patients’ binocular vision status. The role of aberrations in monocular and binocular vision needs further investigation.

  13. Rapid matching of stereo vision based on fringe projection profilometry

    Science.gov (United States)

    Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei

    2016-09-01

    Stereo matching is the core of stereo vision, yet many problems in stereo matching technology remain to be solved. For smooth surfaces from which feature points are not easy to extract, this paper adds a projector to the stereo vision measurement system and applies fringe projection techniques: because corresponding points extracted from the left and right camera images share the same phase, rapid stereo matching can be realized. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method not only broadens the application fields of optical 3D measurement technology and enriches knowledge in the field, but also offers the potential for commercialized measurement systems in practical projects, which has important scientific and economic value.
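
    For illustration, phase-based correspondence of this kind can be prototyped by restricting candidate matches to pixels on the same rectified row whose unwrapped phase values agree. The sketch below assumes rectified images and precomputed unwrapped phase maps; the variable names and the tolerance are illustrative choices, not values from the paper.

```python
import numpy as np

def match_by_phase(phase_left, phase_right, tol=0.05):
    """For each pixel in the left unwrapped-phase map, find the pixel on the
    same rectified row of the right map with the closest phase value.
    Returns a disparity map (NaN where no match within `tol` is found)."""
    h, w = phase_left.shape
    disparity = np.full((h, w), np.nan)
    for y in range(h):
        row_r = phase_right[y]                       # candidate phases on this row
        for x in range(w):
            diff = np.abs(row_r - phase_left[y, x])  # phase distance to every column
            x_r = int(np.argmin(diff))
            if diff[x_r] < tol:
                disparity[y, x] = x - x_r            # matched column offset
    return disparity
```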

  14. Origins of strabismus and loss of binocular vision

    Science.gov (United States)

    Bui Quoc, Emmanuel; Milleret, Chantal

    2014-01-01

    Strabismus is a frequent ocular disorder that develops early in life in humans. As a general rule, it is characterized by a misalignment of the visual axes which most often appears during the critical period of visual development. However other characteristics of strabismus may vary greatly among subjects, for example, being convergent or divergent, horizontal or vertical, with variable angles of deviation. Binocular vision may also vary greatly. Our main goal here is to develop the idea that such “polymorphy” reflects a wide variety in the possible origins of strabismus. We propose that strabismus must be considered as possibly resulting from abnormal genetic and/or acquired factors, anatomical and/or functional abnormalities, in the sensory and/or the motor systems, both peripherally and/or in the brain itself. We shall particularly develop the possible “central” origins of strabismus. Indeed, we are convinced that it is time now to open this “black box” in order to move forward. All of this will be developed on the basis of both presently available data in literature (including most recent data) and our own experience. Both data in biology and medicine will be referred to. Our conclusions will hopefully help ophthalmologists to better understand strabismus and to develop new therapeutic strategies in the future. Presently, physicians eliminate or limit the negative effects of such pathology both on the development of the visual system and visual perception through the use of optical correction and, in some cases, extraocular muscle surgery. To better circumscribe the problem of the origins of strabismus, including at a cerebral level, may improve its management, in particular with respect to binocular vision, through innovating tools by treating the pathology at the source. PMID:25309358

  15. An ancient explanation of presbyopia based on binocular vision.

    Science.gov (United States)

    Barbero, Sergio

    2014-06-01

    Presbyopia, understood as the age-related loss of ability to clearly see near objects, was known to ancient Greeks. However, few references to it can be found in ancient manuscripts. A relevant discussion on presbyopia appears in a book called Symposiacs written by Lucius Mestrius Plutarchus around 100 A.C. In this work, Plutarch provided four explanations of presbyopia, associated with different theories of vision. One of the explanations is particularly interesting as it is based on a binocular theory of vision. In this theory, vision is produced when visual rays, emanating from the eyes, form visual cones that impinge on the objects to be seen. Visual rays coming from old people's eyes, it was supposed, are weaker than those from younger people's eyes; so the theory, to be logically coherent, implies that this effect is compensated by the increase in light intensity due to the overlapping, at a certain distance, of the visual cones coming from both eyes. Thus, it benefits the reader to move the reading text further away from the eyes in order to increase the fusion area of both visual cones. The historical hypothesis taking into consideration that the astronomer Hipparchus of Nicaea was the source of Plutarch's explanation of the theory is discussed. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  16. Development of 3D online contact measurement system for intelligent manufacturing based on stereo vision

    Science.gov (United States)

    Li, Peng; Chong, Wenyan; Ma, Yongjun

    2017-10-01

    In order to avoid the shortcomings of low efficiency and restricted measuring range that exist in traditional 3D on-line contact measurement methods for workpiece size, the development of a novel 3D contact measurement system is introduced, which is designed for intelligent manufacturing based on stereo vision. The developed contact measurement system is characterized by the integrated use of a handy probe, a binocular stereo vision system, and advanced measurement software. The handy probe consists of six tracking markers, a touch probe and the associated electronics. In the process of contact measurement, the handy probe can be located by the stereo vision system using the tracking markers, and the 3D coordinates of a point on the workpiece can be measured by calculating the tip position of the touch probe. Owing to the flexibility of the handy probe, the orientation, range and density of the 3D contact measurement can be adapted to different needs. Applications of the developed contact measurement system to high-precision measurement and rapid surface digitization are experimentally demonstrated.
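
    For illustration, the tip of such a probe can be located by estimating the rigid transform that maps the markers' reference coordinates (measured once during probe calibration) to their currently triangulated 3D positions, and then applying that transform to the known tip offset. The sketch below uses the standard Kabsch algorithm; all names and the tip-offset calibration are assumptions for illustration, not details from the paper.

```python
import numpy as np

def rigid_transform(ref_pts, cur_pts):
    """Least-squares rotation R and translation t with cur ≈ R @ ref + t (Kabsch)."""
    ref_c, cur_c = ref_pts.mean(axis=0), cur_pts.mean(axis=0)
    H = (ref_pts - ref_c).T @ (cur_pts - cur_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflection
    R = Vt.T @ D @ U.T
    return R, cur_c - R @ ref_c

def probe_tip(ref_markers, cur_markers, tip_offset):
    """Transform the calibrated tip offset (given in the probe's reference frame)
    into the measurement frame using the current marker positions."""
    R, t = rigid_transform(ref_markers, cur_markers)
    return R @ tip_offset + t
```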

  17. An obstacle detection system using binocular stereo fisheye lenses for planetary rover navigation

    Science.gov (United States)

    Liu, L.; Jia, J.; Li, L.

    In this paper we present an implementation of an obstacle detection system using binocular stereo fisheye lenses for planetary rover navigation. The fisheye lenses can improve image acquisition efficiency and handle the minimal clearance recovery problem because they provide a large field of view. However, the fisheye lens introduces significant distortion in the image, and this makes it much more difficult to find a one-to-one correspondence. In addition, we have to improve the system accuracy and efficiency for robot navigation. To compute dense depth maps accurately in real time, the following five key issues are considered: (1) using lookup tables for a trade-off between time and space in fisheye distortion correction and correspondence matching; (2) using an improved incremental calculation scheme for algorithmic optimization; (3) multimedia instruction set (MMX) implementation; (4) consistency checking to remove wrong stereo matches suffering from occlusions or mismatches; (5) constraints on the recovery space. To realize obstacle detection robustly, we use the following three steps: (1) extracting the ground plane parameters using the Randomized Hough Transform; (2) filtering the ground and background; (3) locating the obstacles by using connected region detection. Experimental results show the system can run at 3.2 fps on a 2.0 GHz PC with 640×480 pixels.
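
    For illustration, the lookup-table idea amounts to precomputing, once per calibration, the pixel remapping that removes fisheye distortion, so that per-frame correction is a cheap table lookup. Below is a minimal sketch with OpenCV's fisheye model; the calibration values shown are placeholders, not the paper's.

```python
import cv2
import numpy as np

# Placeholder intrinsics and fisheye distortion coefficients from a prior calibration.
K = np.array([[320.0, 0.0, 320.0],
              [0.0, 320.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.05, -0.01, 0.002, 0.0])   # k1..k4 for the fisheye model
size = (640, 480)

# Build the lookup tables once; reuse them for every frame.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, size, cv2.CV_16SC2)

def undistort(frame):
    # Per-frame correction is just a remap through the precomputed tables.
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```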

  18. Two eyes, one vision: binocular motion perception in human visual cortex

    NARCIS (Netherlands)

    Barendregt, M.

    2016-01-01

    An important aspect of human vision is the fact that it is binocular, i.e. that we have two eyes. As a result, the brain nearly always receives two slightly different images of the same visual scene. Yet, we only perceive a single image and thus our brain has to actively combine the binocular visual

  19. Precise positioning method for multi-process connecting based on binocular vision

    Science.gov (United States)

    Liu, Wei; Ding, Lichao; Zhao, Kai; Li, Xiao; Wang, Ling; Jia, Zhenyuan

    2016-01-01

    With the rapid development of aviation and aerospace, the demand for metal coating parts such as antenna reflector, eddy-current sensor and signal transmitter, etc. is more and more urgent. Such parts with varied feature dimensions, complex three-dimensional structures, and high geometric accuracy are generally fabricated by the combination of different manufacturing technology. However, it is difficult to ensure the machining precision because of the connection error between different processing methods. Therefore, a precise positioning method is proposed based on binocular micro stereo vision in this paper. Firstly, a novel and efficient camera calibration method for stereoscopic microscope is presented to solve the problems of narrow view field, small depth of focus and too many nonlinear distortions. Secondly, the extraction algorithms for law curve and free curve are given, and the spatial position relationship between the micro vision system and the machining system is determined accurately. Thirdly, a precise positioning system based on micro stereovision is set up and then embedded in a CNC machining experiment platform. Finally, the verification experiment of the positioning accuracy is conducted and the experimental results indicated that the average errors of the proposed method in the X and Y directions are 2.250 μm and 1.777 μm, respectively.

  20. Stereo Vision for Unrestricted Human-Computer Interaction

    OpenAIRE

    Eldridge, Ross; Rudolph, Heiko

    2008-01-01

    Human computer interfaces have come a long way in recent years, but the goal of a computer interpreting unrestricted human movement remains elusive. The use of stereo vision in this field has enabled the development of systems that begin to approach this goal. As computer technology advances, we come ever closer to a system that can react to the ambiguities of human movement in real-time. In the foreseeable future stereo computer vision is not likely to replace the keyboard or mouse. There is at...

  1. Vision in avian emberizid foragers: maximizing both binocular vision and fronto-lateral visual acuity.

    Science.gov (United States)

    Moore, Bret A; Pita, Diana; Tyrrell, Luke P; Fernández-Juricic, Esteban

    2015-05-01

    Avian species vary in their visual system configuration, but previous studies have often compared single visual traits between two to three distantly related species. However, birds use different visual dimensions that cannot be maximized simultaneously to meet different perceptual demands, potentially leading to trade-offs between visual traits. We studied the degree of inter-specific variation in multiple visual traits related to foraging and anti-predator behaviors in nine species of closely related emberizid sparrows, controlling for phylogenetic effects. Emberizid sparrows maximize binocular vision, even seeing their bill tips in some eye positions, which may enhance the detection of prey and facilitate food handling. Sparrows have a single retinal center of acute vision (i.e. fovea) projecting fronto-laterally (but not into the binocular field). The foveal projection close to the edge of the binocular field may shorten the time to gather and process both monocular and binocular visual information from the foraging substrate. Contrary to previous work, we found that species with larger visual fields had higher visual acuity, which may compensate for larger blind spots (i.e. pectens) above the center of acute vision, enhancing predator detection. Finally, species with a steeper change in ganglion cell density across the retina had higher eye movement amplitude, probably due to a more pronounced reduction in visual resolution away from the fovea, which would need to be moved around more frequently. The visual configuration of emberizid passive prey foragers is substantially different from that of previously studied avian groups (e.g. sit-and-wait and tactile foragers). © 2015. Published by The Company of Biologists Ltd.

  2. Viewing geometry determines the contribution of binocular vision to the online control of grasping.

    Science.gov (United States)

    Keefe, Bruce D; Watt, Simon J

    2017-12-01

    Binocular vision is often assumed to make a specific, critical contribution to online visual control of grasping by providing precise information about the separation between digits and object. This account overlooks the 'viewing geometry' typically encountered in grasping, however. Separation of hand and object is rarely aligned precisely with the line of sight (the visual depth dimension), and analysis of the raw signals suggests that, for most other viewing angles, binocular feedback is less precise than monocular feedback. Thus, online grasp control relying selectively on binocular feedback would not be robust to natural changes in viewing geometry. Alternatively, sensory integration theory suggests that different signals contribute according to their relative precision, in which case the role of binocular feedback should depend on viewing geometry, rather than being 'hard-wired'. We manipulated viewing geometry, and assessed the role of binocular feedback by measuring the effects on grasping of occluding one eye at movement onset. Loss of binocular feedback resulted in a significantly less extended final slow-movement phase when hand and object were separated primarily in the frontoparallel plane (where binocular information is relatively imprecise), compared to when they were separated primarily along the line of sight (where binocular information is relatively precise). Consistent with sensory integration theory, this suggests the role of binocular (and monocular) vision in online grasp control is not a fixed, 'architectural' property of the visuo-motor system, but arises instead from the interaction of viewer and situation, allowing robust online control across natural variations in viewing geometry.

  3. A Photometric Stereo Using Re-Projected Images for Active Stereo Vision System

    Directory of Open Access Journals (Sweden)

    Keonhwa Jung

    2017-10-01

    Full Text Available In optical 3D shape measurement, stereo vision with structured light can measure 3D scan data with high accuracy and is used in many applications, but fine surface detail is difficult to obtain. On the other hand, photometric stereo can capture surface details but has disadvantages, in that its 3D data accuracy drops and it requires multiple light sources. When the two measurement methods are combined, more accurate 3D scan data and detailed surface features can be obtained at the same time. In this paper, we present a 3D optical measurement technique that uses re-projection of images to implement photometric stereo without an external light source. 3D scan data is enhanced by combining normal vector from this photometric stereo method, and the result is evaluated with the ground truth.

  4. A comparative study of fast dense stereo vision algorithms

    NARCIS (Netherlands)

    Sunyoto, H.; Mark, W. van der; Gavrila, D.M.

    2004-01-01

    With recent hardware advances, real-time dense stereo vision becomes increasingly feasible for general-purpose processors. This has important benefits for the intelligent vehicles domain, alleviating object segmentation problems when sensing complex, cluttered traffic scenes. In this paper, we

  5. Monocular and binocular development in children with albinism, infantile nystagmus syndrome, and normal vision.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke

    2013-12-01

    To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Interocular acuity differences and binocular summation ratios were compared between groups. Crowding ratios were calculated by dividing the single Landolt C decimal acuity with the crowded Landolt C decimal acuity mono- and binocularly. A linear regression analysis was conducted to investigate the contribution of 5 predictors to the monocular and binocular crowding ratio: nystagmus amplitude, nystagmus frequency, strabismus, astigmatism, and anisometropia. Crowding ratios were higher under mono- and binocular viewing conditions for children with infantile nystagmus syndrome than for children with normal vision. Children with albinism showed higher crowding ratios in their poorer eye and under binocular viewing conditions than children with normal vision. Children with albinism and children with infantile nystagmus syndrome showed larger interocular acuity differences than children with normal vision (0.1 logMAR in our clinical groups and 0.0 logMAR in children with normal vision). Binocular summation ratios did not differ between groups. Strabismus and nystagmus amplitude predicted the crowding ratio in the poorer eye (p = 0.015 and p = 0.005, respectively). The crowding ratio in the better eye showed a marginally significant relation with nystagmus frequency and depth of anisometropia (p = 0.082 and p = 0.070, respectively). The binocular crowding ratio was not predicted by any of the variables. Children with albinism and children with infantile nystagmus syndrome show larger interocular acuity differences than children with normal vision. Strabismus and nystagmus amplitude are significant predictors of the crowding ratio in the poorer eye.

  6. Calibration of Binocular Vision Sensors Based on Unknown-Sized Elliptical Stripe Images

    Directory of Open Access Journals (Sweden)

    Zhen Liu

    2017-12-01

    Full Text Available Most of the existing calibration methods for a binocular stereo vision sensor (BSVS) depend on a high-accuracy target with feature points that are difficult and costly to manufacture. In complex light conditions, optical filters are used for BSVS, but they affect imaging quality. Hence, the use of a high-accuracy target with certain-sized feature points for calibration is not feasible under such complex conditions. To solve these problems, a calibration method based on unknown-sized elliptical stripe images is proposed. With known intrinsic parameters, the proposed method adopts the elliptical stripes located on parallel planes as a medium to calibrate BSVS online. In comparison with the common calibration methods, the proposed method avoids utilizing a high-accuracy target with certain-sized feature points. Therefore, the proposed method is not only easy to implement but also a realistic method for the calibration of BSVS with optical filters. Changing the size of the elliptical curves projected on the target solves the difficulty of applying the proposed method at different fields of view and distances. Simulative and physical experiments are conducted to validate the efficiency of the proposed method. When the field of view is approximately 400 mm × 300 mm, the proposed method can reach a calibration accuracy of 0.03 mm, which is comparable with that of Zhang’s method.
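
    For illustration, detecting projected elliptical stripes in each camera image is a standard step that can be prototyped with contour extraction and ellipse fitting. The sketch below is a generic approach, not the authors' implementation; thresholds and the minimum contour size are arbitrary illustration values.

```python
import cv2

def detect_ellipses(gray, min_points=20):
    """Return ellipses fitted to bright stripe contours in a grayscale image.
    Each ellipse is ((cx, cy), (major, minor), angle) as given by cv2.fitEllipse."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    # fitEllipse needs at least 5 points; require more to reject noise blobs.
    return [cv2.fitEllipse(c) for c in contours if len(c) >= min_points]
```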

  7. Neuroimaging of amblyopia and binocular vision: a review

    OpenAIRE

    Joly, Olivier; Frankó, Edit

    2014-01-01

    Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Amblyopia, first considered as a monocular disorder, is now often seen as a primarily binocular disorder resulting in more and more studies examining the binocular deficits in the patients. The neural mechanisms of amblyopia are not completely understood even though they have been investigated with electrophysiological recordings in animal models and more recently ...

  8. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on the Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approaching area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approaching procedure, the flight navigation information is linked to the database. The flight approaching area view can be dynamically displayed according to the designed flight procedure. The flight approaching area images are rendered in two channels, one for left eye images and the other for right eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approaching area. Using this system in the pilots' preflight preparation procedure, the aircrew can obtain more vivid information about the flight destination approaching area. This system can improve the aviator's self-confidence before carrying out the flight mission and, accordingly, improve flight safety. It is also useful for validating the visual flight procedure design, and it assists flight procedure design.

  9. Health-related quality of life and binocular vision in patients with diplopia in acute-onset comitant esotropia with press-on prism improves

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2017-11-01

    Full Text Available AIM: To assess the effect of press-on prisms in patients with acute-onset comitant esotropia and diplopia, focusing primarily on vision-related quality of life and binocular vision. METHODS: Retrospective case-series study. A total of 16 acute-onset comitant esotropia patients with diplopia who received treatment at Huzhou Central Hospital from March 2014 to March 2017 were included in this study. Vision-related quality of life before press-on prism correction and 1mo after press-on prism correction was assessed with the Chinese version of the 25-item National Eye Institute Visual Functioning Questionnaire (CHI-NEI-VFQ-25). At each follow-up, a detailed examination was performed, including the Worth four-dot test and stereo tests. Data were statistically analyzed with the paired sample t test, Chi-square test and Fisher's exact test. RESULTS: Except for the degree of eye pain, color vision and perimetry, the CHI-NEI-VFQ-25 indicators, including general health status, overall vision, mental health, social role difficulties, social functioning, near activities, distant activities, independency and driving, of acute-onset comitant esotropia patients with diplopia were significantly improved 1mo after press-on prism correction. CONCLUSION: Press-on prism correction may be helpful for binocular vision recovery in acute-onset comitant esotropia patients with diplopia, and so improves the vision-related quality of life.

  10. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

    A novel visual navigation method based on grating projection stereo vision for mobile robots in dark environments is proposed. This method combines the grating projection profilometry of plane structured light with stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping) and visual odometry for mobile robot navigation in dark environments, without the image matching required in stereo vision technology and without the phase unwrapping required in grating projection profilometry. First, we study the new vision sensor theoretically, and build the geometric and mathematical model of the grating projection stereo vision system. Second, the computational method for the 3D coordinates of space obstacles in the robot's visual field is studied, and the obstacles in the field are then located accurately. The results of simulation experiments and analysis show that this research is useful for addressing the problem of autonomous navigation of mobile robots in dark environments, and provides a theoretical basis and an exploration direction for further study on the navigation of space-exploring robots in dark and GPS-denied environments.
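
    For illustration, once a feature (e.g., a point on a projected grating stripe) has been identified in both rectified camera images, its 3D coordinates follow from standard two-view triangulation. The sketch below uses OpenCV; the projection matrices are illustrative placeholders, not the calibration from the paper.

```python
import cv2
import numpy as np

# Placeholder projection matrices for a rectified stereo pair with a 0.1 m baseline.
f, cx, cy, baseline = 700.0, 320.0, 240.0, 0.1
P_left = np.array([[f, 0, cx, 0], [0, f, cy, 0], [0, 0, 1, 0]], dtype=np.float64)
P_right = P_left.copy()
P_right[0, 3] = -f * baseline   # right camera shifted along the x axis

def triangulate(pts_left, pts_right):
    """pts_left/pts_right: 2xN float arrays of matched pixel coordinates.
    Returns an Nx3 array of 3D points in the left-camera frame."""
    X_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)  # 4xN homogeneous
    return (X_h[:3] / X_h[3]).T
```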

  11. Multiple Moving Obstacles Avoidance of Service Robot using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Achmad Jazidie

    2011-12-01

    Full Text Available In this paper, we propose a multiple moving obstacle avoidance method using stereo vision for service robots in indoor environments. We assume that this model of service robot is used to deliver a cup to the recognized customer from the starting point to the destination. The contribution of this research is a new method for multiple moving obstacle avoidance with a Bayesian approach using a stereo camera. We have developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles and to maneuver the robot. A group of walking people is tracked as a multiple moving obstacle, and the speed, direction, and distance of the moving obstacles are estimated by a stereo camera so that the robot can maneuver to avoid the collision. To overcome the inaccuracies of the vision sensor, a Bayesian approach is used to estimate the absence and direction of obstacles. We present the results of the experiment on the service robot called Srikandi III, which uses our proposed method, and we also evaluate its performance. Experiments showed that our proposed method works well, and the Bayesian approach proved to increase the estimation performance for the absence and direction of moving obstacles.
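
    For illustration, the Bayesian idea can be shown with a simple recursive update of the belief that an obstacle occupies a given direction, given noisy per-frame detections with known hit and false-alarm rates. This is a generic sketch, not the authors' model; the probabilities are invented for illustration.

```python
def update_belief(prior, detected, p_hit=0.85, p_false=0.15):
    """One Bayes update of P(obstacle present) from a binary detection.
    p_hit   = P(detection | obstacle present)   (assumed sensor model)
    p_false = P(detection | no obstacle)        (assumed sensor model)"""
    if detected:
        likelihood_present, likelihood_absent = p_hit, p_false
    else:
        likelihood_present, likelihood_absent = 1 - p_hit, 1 - p_false
    numerator = likelihood_present * prior
    return numerator / (numerator + likelihood_absent * (1 - prior))

# Example: the belief rises as consecutive frames report a detection.
belief = 0.5
for detection in [True, True, False, True]:
    belief = update_belief(belief, detection)
    print(round(belief, 3))
```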

  12. Dynamic Trajectory Extraction from Stereo Vision Using Fuzzy Clustering

    Science.gov (United States)

    Onishi, Masaki; Yoda, Ikushi

    In recent years, many human tracking methods have been proposed in order to analyze human dynamic trajectories. These are general technologies applicable to various fields, such as customer purchase analysis in a shopping environment and safety control at a (railroad) crossing. In this paper, we present a new approach for tracking human positions from stereo images. We use a framework of two-step clustering, with the k-means method and fuzzy clustering, to detect human regions. In the initial clustering, the k-means method quickly forms intermediate clusters from the features extracted by stereo vision. In the final clustering, the fuzzy c-means method groups these intermediate clusters into human regions based on their attributes. By expressing ambiguity through fuzzy clustering, our proposed method clusters correctly even when many people are close to each other. The validity of our technique was evaluated by extracting the trajectories of doctors and nurses in the emergency room of a hospital.
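
    For illustration, the second clustering step can be prototyped with a plain fuzzy c-means implementation, in which each point receives graded memberships to all clusters rather than a hard label. The sketch below is standard fuzzy c-means in NumPy, not the authors' code; the fuzzifier m and iteration count are conventional defaults.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means. X: (N, d) data, c: number of clusters.
    Returns (centers, memberships) with memberships of shape (N, c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distances from every point to every center (small eps avoids division by zero).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```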

  13. Prevalence of non-strabismic anomalies of binocular vision in Tamil Nadu: report 2 of BAND study.

    Science.gov (United States)

    Hussaindeen, Jameel Rizwana; Rakshit, Archayeeta; Singh, Neeraj Kumar; George, Ronnie; Swaminathan, Meenakshi; Kapur, Suman; Scheiman, Mitchell; Ramani, Krishna Kumar

    2017-11-01

    Population-based studies on the prevalence of non-strabismic anomalies of binocular vision in ethnic Indians are more than two decades old. Based on indigenous normative data, the BAND (Binocular Vision Anomalies and Normative Data) study aims to report the prevalence of non-strabismic anomalies of binocular vision among school children in rural and urban Tamil Nadu. This population-based, cross-sectional study was designed to estimate the prevalence of non-strabismic anomalies of binocular vision in the rural and urban population of Tamil Nadu. In four schools, two each in rural and urban arms, 920 children in the age range of seven to 17 years were included in the study. Comprehensive binocular vision assessment was done for all children including evaluation of vergence and accommodative systems. In the first phase of the study, normative data of parameters of binocular vision were assessed followed by prevalence estimates of non-strabismic anomalies of binocular vision. The mean and standard deviation of the age of the sample were 12.7 ± 2.7 years. The prevalence of non-strabismic anomalies of binocular vision in the urban and rural arms was found to be 31.5 and 29.6 per cent, respectively. Convergence insufficiency was the most prevalent (16.5 and 17.6 per cent in the urban and rural arms, respectively) among all the types of non-strabismic anomalies of binocular vision. There was no gender predilection and no statistically significant differences were observed between the rural and urban arms in the prevalence of non-strabismic anomalies of binocular vision (Z-test, p > 0.05). The prevalence of non-strabismic anomalies of binocular vision was found to be higher in the 13 to 17 years age group (36.2 per cent) compared to seven to 12 years (25.1 per cent) (Z-test, p < 0.05). Non-strabismic binocular vision anomalies are highly prevalent among school children and the prevalence increases with age. With increasing near visual demands in the higher

  14. Monocular and binocular development in children with albinism, infantile nystagmus syndrome and normal vision

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.

    2013-01-01

    Background/aims: To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Methods: Interocular acuity differences and

  15. Monocular and binocular development in children with albinism, infantile nystagmus syndrome, and normal vision

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.

    2013-01-01

    Abstract Background/aims: To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Methods: Interocular acuity

  16. Visual comfort of binocular and 3D displays

    NARCIS (Netherlands)

    Kooi, F.L.; Toet, A.

    2004-01-01

    Imperfections in binocular image pairs can cause serious viewing discomfort. For example, in stereo vision systems eye strain is caused by unintentional mismatches between the left and right eye images (stereo imperfections). Head-mounted displays can induce eye strain due to optical misalignments.

  17. Robotic Arm Control Algorithm Based on Stereo Vision Using RoboRealm Vision

    Directory of Open Access Journals (Sweden)

    SZABO, R.

    2015-05-01

    Full Text Available The goal of this paper is to present a stereo computer vision algorithm intended to control a robotic arm. Specific points on the robot joints are marked and recognized in the software. Using a dedicated set of mathematic equations, the movement of the robot is continuously computed and monitored with webcams. Positioning error is finally analyzed.

  18. Loss of binocular vision as direct cause for misrouting of temporal retinal fibers in albinism.

    Science.gov (United States)

    Banihani, Saleh M

    2015-10-01

    In humans, the nasal retina projects to the contralateral hemisphere, whereas the temporal retina projects ipsilaterally. The nasotemporal line that divides the retina into crossed and uncrossed parts coincides with the vertical meridian through the fovea. This normal projection of the retina is severely altered in albinism, in which the nasotemporal line is shifted into the temporal retina and temporal retinal fibers cross the midline at the optic chiasm. This study proposes the loss of binocular vision as a direct cause of the misrouting of temporal retinal fibers and the temporal shifting of the nasotemporal line in albinism. It is supported by many observations that clearly indicate that loss of binocular vision causes uncrossed retinal fibers to cross the midline. This hypothesis may alert scientists and clinicians to find ways to prevent or minimize the loss of binocular vision that may occur in some diseases such as albinism and early squint. Hopefully, this will minimize the misrouting of temporal fibers and improve vision in such diseases. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Research on detection method of UAV obstruction based on binocular vision

    Science.gov (United States)

    Zhu, Xiongwei; Lei, Xusheng; Sui, Zhehao

    2018-04-01

    For autonomous obstacle positioning and ranging during UAV (unmanned aerial vehicle) flight, a system based on binocular vision is constructed. A three-stage image preprocessing method is proposed to solve the problems of noise and brightness difference in the actually captured images. The distance of the nearest obstacle is calculated by using the disparity map generated by binocular vision. The contour of the obstacle is then extracted by post-processing of the disparity map, and a color-based adaptive parameter adjustment algorithm is designed to extract contours of obstacles automatically. Finally, safety distance measurement and obstacle positioning during the UAV flight process are achieved. Based on a series of tests, the error of distance measurement remains within 2.24% over the measuring range from 5 m to 20 m.
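
    For illustration, distance from a disparity map follows from the rectified stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline and d the disparity. The sketch below uses OpenCV's semi-global block matcher; the parameters and calibration values are illustrative, not the paper's.

```python
import cv2
import numpy as np

def nearest_obstacle_distance(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    """Estimate the distance (in metres) to the nearest region with valid disparity."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disparity > 1.0                             # ignore unmatched / far pixels
    depth = focal_px * baseline_m / disparity[valid]    # Z = f * B / d
    return depth.min() if depth.size else None
```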

  20. Touch interacts with vision during binocular rivalry with a tight orientation tuning.

    Directory of Open Access Journals (Sweden)

    Claudia Lunghi

    Full Text Available Multisensory integration is a common feature of the mammalian brain that allows it to deal more efficiently with the ambiguity of sensory input by combining complementary signals from several sensory sources. Growing evidence suggests that multisensory interactions can occur as early as primary sensory cortices. Here we present incompatible visual signals (orthogonal gratings) to each eye to create visual competition between monocular inputs in primary visual cortex where binocular combination would normally take place. The incompatibility prevents binocular fusion and triggers an ambiguous perceptual response in which the two images are perceived one at a time in an irregular alternation. One key function of multisensory integration is to minimize perceptual ambiguity by exploiting cross-sensory congruence. We show that a haptic signal matching one of the visual alternatives helps disambiguate visual perception during binocular rivalry by both prolonging the dominance period of the congruent visual stimulus and by shortening its suppression period. Importantly, this interaction is strictly tuned for orientation, with a mismatch as small as 7.5° between visual and haptic orientations sufficient to annul the interaction. These results indicate important conclusions: first, that vision and touch interact at early levels of visual processing where interocular conflicts are first detected and orientation tunings are narrow, and second, that haptic input can influence visual signals outside of visual awareness, bringing a stimulus made invisible by binocular rivalry suppression back to awareness sooner than would occur without congruent haptic input.

  1. A stereo vision-based obstacle detection system in vehicles

    Science.gov (United States)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane. Then, the position parameters of the obstacles and leading vehicles can be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
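
    For illustration, filtering candidate matches with the epipolar constraint is commonly done by robustly estimating the fundamental matrix and keeping only inlier correspondences. The sketch below uses ORB features and RANSAC in OpenCV; it illustrates the general technique, not the authors' specific matcher.

```python
import cv2
import numpy as np

def epipolar_filtered_matches(img_left, img_right):
    """Match ORB keypoints and keep only pairs consistent with a RANSAC-estimated
    fundamental matrix (i.e. pairs satisfying the epipolar constraint)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_l, des_l = orb.detectAndCompute(img_left, None)
    kp_r, des_r = orb.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)
    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches])
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches])
    F, inlier_mask = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, 1.0, 0.99)
    keep = inlier_mask.ravel() == 1
    return pts_l[keep], pts_r[keep]
```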

  2. Avian binocular vision: It's not just about what birds can see, it's also about what they can't.

    Directory of Open Access Journals (Sweden)

    Luke P Tyrrell

    Full Text Available With the exception of primates, most vertebrates have laterally placed eyes. Binocular vision in vertebrates has been implicated in several functions, including depth perception, contrast discrimination, etc. However, the blind area in front of the head that is proximal to the binocular visual field is often neglected. This anterior blind area is important when discussing the evolution of binocular vision because its relative length is inversely correlated with the width of the binocular field. Therefore, species with wider binocular fields also have shorter anterior blind areas and objects along the mid-sagittal plane can be imaged at closer distances. Additionally, the anterior blind area is of functional significance for birds because the beak falls within this blind area. We tested for the first time some specific predictions about the functional role of the anterior blind area in birds controlling for phylogenetic effects. We used published data on visual field configuration in 40 species of birds and measured beak and skull parameters from museum specimens. We found that birds with proportionally longer beaks have longer anterior blind areas and thus narrower binocular fields. This result suggests that the anterior blind area and beak visibility do play a role in shaping binocular fields, and that binocular field width is not solely determined by the need for stereoscopic vision. In visually guided foragers, the ability to see the beak-and how much of the beak can be seen-varies predictably with foraging habits. For example, fish- and insect-eating specialists can see more of their own beak than birds eating immobile food can. But in non-visually guided foragers, there is no consistent relationship between the beak and anterior blind area. We discuss different strategies-wide binocular fields, large eye movements, and long beaks-that minimize the potential negative effects of the anterior blind area. Overall, we argue that there is more to

  3. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    Science.gov (United States)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike in conventional active vision systems that use a large number of images with variations of projected patterns for dense range map acquisition or from conventional passive vision systems that work well on specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  4. Recovering stereo vision by squashing virtual bugs in a virtual reality environment

    OpenAIRE

    Vedamurthy, I; Knill, DC; Huang, SJ; Yung, A; Ding, J; Kwon, OS; Bavelier, D; Levi, DM

    2016-01-01

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity—the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task—a ‘bug squashing’ game—in a virtual...

  5. Age is highly associated with stereo blindness among surgeons

    DEFF Research Database (Denmark)

    Fergo, Charlotte; Burcharth, Jakob; Pommergaard, Hans-Christian

    2016-01-01

    BACKGROUND: The prevalence of stereo blindness in the general population varies greatly within a range of 1-30 %. Stereo vision adds an extra dimension to aid depth perception and gives a binocular advantage in task completion. Lack of depth perception may lower surgical performance, potentially...... and stereo tested by the use of the Random Dot E stereo test. Upon stereo testing, a demographic questionnaire was completed. Multivariate logistic regression analysis was employed to assess the association between stereo blindness and the variables resulting from the univariate analysis. RESULTS: Three...

  6. Ecomorphology of orbit orientation and the adaptive significance of binocular vision in primates and other mammals.

    Science.gov (United States)

    Heesy, Christopher P

    2008-01-01

    Primates are characterized by forward-facing, or convergent, orbits and associated binocular field overlap. Hypotheses explaining the adaptive significance of these traits often relate to ecological factors, such as arboreality, nocturnal visual predation, or saltatory locomotion in a complex nocturnal, arboreal environment. This study re-examines the ecological factors that are associated with high orbit convergence in mammals. Orbit orientation data were collected for 321 extant taxa from sixteen orders of metatherian (marsupial) and eutherian mammals. These taxa were coded for activity pattern, degree of faunivory, and substrate preference. Results demonstrate that nocturnal and cathemeral mammals have significantly more convergent orbits than diurnal taxa, both within and across orders. Faunivorous eutherians (both nocturnal and diurnal) have higher mean orbit convergence than opportunistically foraging or non-faunivorous taxa. However, substrate preference is not associated with higher orbit convergence and, by extension, greater binocular visual field overlap. These results are consistent with the hypothesis that mammalian predators evolved higher orbit convergence, binocular vision, and stereopsis to counter camouflage in prey inhabiting a nocturnal environment. Strepsirhine primates have a range of orbit convergence values similar to nocturnal or cathemeral predatory non-primate mammals. These data are entirely consistent with the nocturnal visual predation hypothesis of primate origins. (c) 2007 S. Karger AG, Basel.

  7. Indoor and Outdoor Depth Imaging of Leaves With Time-of-Flight and Stereo Vision Sensors

    DEFF Research Database (Denmark)

    Kazmi, Wajahat; Foix, Sergi; Alenya, Guilliem

    2014-01-01

    In this article we analyze the response of Time-of-Flight (ToF) cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. ToF cameras are sensitive to ambient light and have low resolution but deliver...... poorly under sunlight. Stereo vision is comparatively more robust to ambient illumination and provides high resolution depth data but is constrained by texture of the object along with computational efficiency. Graph cut based stereo correspondence algorithm can better retrieve the shape of the leaves...

  8. Operational modal analysis on a VAWT in a large wind tunnel using stereo vision technique

    International Nuclear Information System (INIS)

    Najafi, Nadia; Paulsen, Uwe Schmidt

    2017-01-01

    This paper is about the development and use of a research-based stereo vision system for vibration and operational modal analysis on a parked, 1-kW, 3-bladed vertical axis wind turbine (VAWT), tested in a wind tunnel at high wind. Vibrations were explored experimentally by tracking small deflections of markers on the structure with two cameras, and also numerically, to study structural vibrations, with the overall objective of investigating challenges and proving the capability of using stereo vision. Two high speed cameras provided displacement measurements with no wind speed interference. The displacement time series were obtained using a robust image processing algorithm and analyzed with the data-driven stochastic subspace identification (DD-SSI) method. In addition to exploring structural behaviour, the VAWT testing gave us the possibility to study aerodynamic effects at a Reynolds number of approximately 2 × 10⁵. VAWT dynamics were simulated using HAWC2. The stereo vision results and HAWC2 simulations agree within 4%, except for modes 3 and 4. The high aerodynamic damping of one of the blades, in flatwise motion, would explain the gap between those two modes in the simulation and the stereo vision results. A set of conventional sensors, such as accelerometers and strain gauges, also measured rotor vibration during the experiment. The spectral analysis of the output signals of the conventional sensors agrees with the stereo vision results within 4%, except for mode 4, which is due to the inaccuracy of spectral analysis in picking very closely spaced modes. Finally, the uncertainty of the 3D displacement measurement was evaluated by applying a generalized method based on the law of error propagation, for a linear camera model of the stereo vision system. - Highlights: • The stereo vision technique is used to track deflections on a VAWT in the wind tunnel. • OMA is applied on displacement time series to study the dynamic behaviour of the VAWT. • Stereo vision results enabled us to

  9. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

    DEFF Research Database (Denmark)

    Kjær-Nielsen, Anders; Jensen, Lars Baunegaard With; Sørensen, Anders Stengaard

    2008-01-01

    In this paper a low level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system, that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400Mbit). The system is used...

  10. Stereo Vision-Based High Dynamic Range Imaging Using Differently-Exposed Image Pair

    Directory of Open Access Journals (Sweden)

    Won-Jae Park

    2017-06-01

    Full Text Available In this paper, a high dynamic range (HDR) imaging method based on the stereo vision system is presented. The proposed method uses differently exposed low dynamic range (LDR) images captured from a stereo camera. The stereo LDR images are first converted to initial stereo HDR images using the inverse camera response function estimated from the LDR images. However, due to the limited dynamic range of the stereo LDR camera, the radiance values in under/over-exposed regions of the initial main-view (MV) HDR image can be lost. To restore these radiance values, the proposed stereo matching and hole-filling algorithms are applied to the stereo HDR images. Specifically, the auxiliary-view (AV) HDR image is warped by using the estimated disparity between the initial stereo HDR images and then effective hole-filling is applied to the warped AV HDR image. To reconstruct the final MV HDR image, the warped and hole-filled AV HDR image is fused with the initial MV HDR image using the weight map. The experimental results demonstrate objectively and subjectively that the proposed stereo HDR imaging method provides better performance compared to the conventional method.
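
    For illustration, the final fusion step can be shown by blending two radiance maps with a per-pixel weight map, e.g. one favouring well-exposed pixels of the main view. The sketch below is a minimal NumPy example under simple assumptions; the Gaussian well-exposedness weight is a common choice, not necessarily the weight map used in the paper.

```python
import numpy as np

def well_exposedness(ldr, sigma=0.2):
    """Weight pixels near mid-gray higher; ldr is a float image in [0, 1]."""
    return np.exp(-((ldr - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_hdr(main_hdr, aux_hdr, main_ldr):
    """Blend the main-view and warped auxiliary-view radiance maps.
    Poorly exposed main-view pixels are filled mostly from the auxiliary view."""
    w = well_exposedness(main_ldr)
    if main_hdr.ndim == 3:            # broadcast weights over colour channels
        w = w[..., None]
    return w * main_hdr + (1.0 - w) * aux_hdr
```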

  11. The minimum test battery to screen for binocular vision anomalies: report 3 of the BAND study.

    Science.gov (United States)

    Hussaindeen, Jameel Rizwana; Rakshit, Archayeeta; Singh, Neeraj Kumar; Swaminathan, Meenakshi; George, Ronnie; Kapur, Suman; Scheiman, Mitchell; Ramani, Krishna Kumar

    2018-03-01

    This study aims to report the minimum test battery needed to screen for non-strabismic binocular vision anomalies (NSBVAs) in a community set-up. When large numbers are to be screened, we aim to identify the most useful test battery when there is no opportunity for a more comprehensive and time-consuming clinical examination. The prevalence estimates and normative data for binocular vision parameters were estimated from the Binocular Vision Anomalies and Normative Data (BAND) study, following which cut-off estimates and receiver operating characteristic curves to identify the minimum test battery were plotted. In the receiver operating characteristic phase of the study, children between nine and 17 years of age were screened in two schools in the rural arm using the minimum test battery, and the prevalence estimates with the minimum test battery were found. Receiver operating characteristic analyses revealed that near point of convergence with penlight and red filter (greater than 7.5 cm), monocular accommodative facility (less than seven cycles per minute) and the difference between distance and near (1.25 prism dioptres) were significant factors, with cut-off values chosen for best sensitivity and specificity. This minimum test battery was applied to a cohort of 305 children. The mean (standard deviation) age of the subjects was 12.7 (two) years, with 121 males and 184 females. Using the minimum battery of tests obtained through the receiver operating characteristic analyses, the prevalence of NSBVAs was found to be 26 per cent. Near point of convergence with penlight and red filter > 10 cm was found to have the highest sensitivity (80 per cent) and specificity (73 per cent) for the diagnosis of convergence insufficiency. For the diagnosis of accommodative infacility, monocular accommodative facility with a cut-off of less than seven cycles per minute was the best predictor for screening (92 per cent sensitivity and 90 per cent specificity). The minimum test battery of near point of convergence with penlight and red filter, difference between distance and near
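
    For illustration, cut-off selection of this kind is typically done by sweeping thresholds on a screening measure against the reference diagnosis and picking the point that maximizes sensitivity plus specificity (Youden's J). The sketch below uses scikit-learn on made-up data purely to show the procedure; none of the numbers come from the study.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical data: near point of convergence (cm) and reference diagnosis (1 = CI).
rng = np.random.default_rng(1)
npc = np.concatenate([rng.normal(6, 2, 200), rng.normal(12, 3, 60)])
diagnosis = np.concatenate([np.zeros(200, dtype=int), np.ones(60, dtype=int)])

# Larger NPC values indicate disease, so the measurement itself serves as the score.
fpr, tpr, thresholds = roc_curve(diagnosis, npc)
best = np.argmax(tpr - fpr)            # Youden's J = sensitivity + specificity - 1
print(f"cut-off: {thresholds[best]:.1f} cm, "
      f"sensitivity: {tpr[best]:.2f}, specificity: {1 - fpr[best]:.2f}")
```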

  12. Research and Development of Target Recognition and Location Crawling Platform based on Binocular Vision

    Science.gov (United States)

    Xu, Weidong; Lei, Zhu; Yuan, Zhang; Gao, Zhenqing

    2018-03-01

The application of visual recognition technology to industrial robot grasping and placing operations is one of the key tasks in the field of robot research. In order to improve the efficiency and intelligence of material sorting on the production line, and especially to realize the sorting of scattered items, a robot target recognition and positioning platform based on binocular vision is researched and developed. Images are collected by a binocular camera and pre-processed: the Harris operator is used to detect corners, the Canny operator is used to extract edges, and Hough-transform and chain-code recognition are used to identify the target in the image. The coordinates of each vertex of the target are then obtained, the spatial position and posture of the target item are calculated, and the information needed for the grasping motion is determined and transmitted to the robot to control the grasping operation. Finally, the method is applied to the sorting of parcels in an express sorting process. The experimental results show that the platform can effectively solve the problem of sorting scattered items, so as to achieve the purpose of efficient and intelligent sorting.
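As an illustration of the corner and edge stage described above, the following OpenCV sketch runs Harris corner detection and Canny edge extraction on one view of the binocular pair; the file name, thresholds and the contour step standing in for the Hough/chain-code recognition are assumptions made for the example.

```python
import cv2
import numpy as np

# Illustrative pre-processing for one camera of the binocular pair
# (the file name and thresholds are placeholders).
img = cv2.imread("left_view.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)               # pre-treatment

# Harris corner response: candidate vertices of the target item.
harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(harris > 0.01 * harris.max())    # (row, col) pairs

# Canny edge map feeding the contour / chain-code recognition stage.
edges = cv2.Canny(gray, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
```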

  13. Monocular zones in stereoscopic scenes: A useful source of information for human binocular vision?

    Science.gov (United States)

    Harris, Julie M.

    2010-02-01

When an object is closer to an observer than the background, the small differences between right and left eye views are interpreted by the human brain as depth. This basic ability of the human visual system, called stereopsis, lies at the core of all binocular three-dimensional (3-D) perception and related technological display development. To achieve stereopsis, it is traditionally assumed that corresponding locations in the right and left eyes' views must first be matched; then the relative differences between right and left eye locations are used to calculate depth. But this is not the whole story. At every object-background boundary, there are regions of the background that only one eye can see because, in the other eye's view, the foreground object occludes that region of background. Such monocular zones do not have a corresponding match in the other eye's view and can thus cause problems for depth extraction algorithms. In this paper I will discuss evidence, from our knowledge of human visual perception, illustrating that monocular zones do not pose problems for the human visual system; rather, our visual systems can extract depth from such zones. I review the relevant human perception literature in this area, and show some recent data aimed at quantifying the perception of depth from monocular zones. The paper finishes with a discussion of the potential importance of considering monocular zones for stereo display technology and depth compression algorithms.

  14. Operational modal analysis on a VAWT in a large wind tunnel using stereo vision technique

    DEFF Research Database (Denmark)

    Najafi, Nadia; Schmidt Paulsen, Uwe

    2017-01-01

    This paper is about development and use of a research based stereo vision system for vibration and operational modal analysis on a parked, 1-kW, 3-bladed vertical axis wind turbine (VAWT), tested in a wind tunnel at high wind. Vibrations were explored experimentally by tracking small deflections...... of the markers on the structure with two cameras, and also numerically, to study structural vibrations in an overall objective to investigate challenges and to prove the capability of using stereo vision. Two high speed cameras provided displacement measurements at no wind speed interference. The displacement...

  15. Experiments on mobile robot stereo vision system calibration under hardware imperfection

    Directory of Open Access Journals (Sweden)

    Safin Ramil

    2018-01-01

Full Text Available Calibration is essential for any robot vision system to achieve high accuracy in deriving metric information about objects. One of the typical requirements for a stereo vision system to obtain better calibration results is to guarantee that both cameras stay at the same vertical level. However, the cameras may be displaced due to severe robot operating conditions or other circumstances. This paper presents our experimental approach to the problem of calibrating a mobile robot stereo vision system under a hardware imperfection. In our experiments, we used the crawler-type mobile robot «Servosila Engineer». The stereo cameras of the robot were displaced relative to each other, causing a loss of surrounding environment information. We implemented and verified checkerboard- and circle-grid-based calibration methods. The comparison of the two methods demonstrated that a circle-grid-based calibration should be preferred over the classical checkerboard calibration approach.
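For reference, the two target types can be detected with standard OpenCV calls, as in the hedged sketch below; the image name, pattern sizes and sub-pixel refinement settings are placeholders, and the detected image points from both cameras would subsequently feed a routine such as cv2.stereoCalibrate.

```python
import cv2

# Illustrative detection of the two calibration targets in one image
# ("calib.png" and the pattern sizes are placeholders).
gray = cv2.imread("calib.png", cv2.IMREAD_GRAYSCALE)

# Classical checkerboard detection (inner-corner grid of 9 x 6).
ok_cb, corners = cv2.findChessboardCorners(gray, (9, 6))
if ok_cb:
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))

# Circle-grid detection (4 x 11 asymmetric grid); blob centres are often
# more tolerant of blur and perspective than checkerboard corners.
ok_cg, centers = cv2.findCirclesGrid(
    gray, (4, 11), flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
```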

  16. Stereo Vision and 3D Reconstruction on a Distributed Memory System

    NARCIS (Netherlands)

    Kuijpers, N.H.L.; Paar, G.; Lukkien, J.J.

    1996-01-01

    An important research topic in image processing is stereo vision. The objective is to compute a 3-dimensional representation of some scenery from two 2-dimensional digital images. Constructing a 3-dimensional representation involves finding pairs of pixels from the two images which correspond to the

  17. Hardware-Efficient Design of Real-Time Profile Shape Matching Stereo Vision Algorithm on FPGA

    Directory of Open Access Journals (Sweden)

    Beau Tippetts

    2014-01-01

Full Text Available A variety of platforms, such as micro-unmanned vehicles, are limited in the amount of computational hardware they can support due to weight and power constraints. An efficient stereo vision algorithm implemented on an FPGA would be able to minimize payload and power consumption in micro-unmanned vehicles, while providing 3D information and still leaving computational resources available for other processing tasks. This work presents a hardware design of the efficient profile shape matching stereo vision algorithm. Hardware resource usage is presented for the targeted micro-UV platform, Helio-copter, that uses the Xilinx Virtex 4 FX60 FPGA. Less than a fifth of the resources on this FPGA were used to produce dense disparity maps for image sizes up to 450 × 375, with the ability to scale up easily by increasing BRAM usage. A comparison of accuracy, speed performance, and resource usage is given against the census transform-based stereo vision FPGA implementation by Jin et al. Results show that the profile shape matching algorithm is an efficient real-time stereo vision algorithm for hardware implementation on resource-limited systems such as micro-unmanned vehicles.

  18. Development and Application of the Stereo Vision Tracking System with Virtual Reality

    Directory of Open Access Journals (Sweden)

    Chia-Sui Wang

    2015-01-01

Full Text Available A virtual reality (VR) driver tracking verification system is created, and its application to stereo image tracking and positioning accuracy is researched in depth. In the research, the image depth provided by the stereo vision system is utilized to reduce the error rates of image tracking and image measurement. In a VR scenario, the function of collecting the driver's behavioral data was tested. By means of VR, racing operation is simulated, and environmental variables (special weather such as rain and snow) and artificial variables (such as pedestrians suddenly crossing the road, vehicles appearing from blind spots, and roadblocks) are added as the basis for system implementation. In addition, the implementation applies human factors engineering to sudden conditions that may easily happen in driving. The experimental results prove that the stereo vision system created in this research has an image depth recognition error rate within 0.011%, and the image tracking error rate is smaller than 2.5%. In the research, the image recognition function of stereo vision is utilized to accomplish the data collection for driver tracking detection. In addition, the environmental conditions of different simulated real scenarios may also be created through VR.

  19. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    Science.gov (United States)

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, which is a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled by the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes; the maximum image size is up to 512 K pixels. The machine is designed to focus on real-time stereo vision applications and offers good performance and high efficiency in real time. Considering a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained in one second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385
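A plain CPU reference of the SAD matching that the SoPC pipeline implements in hardware may help fix the idea; the sketch below uses an assumed 5 × 5 window and a 64-pixel disparity range and is written for clarity rather than speed.

```python
import numpy as np

def sad_disparity(left, right, window=5, max_disp=64):
    """Brute-force SAD block matching (a CPU reference for the arithmetic
    that the SoPC pipeline streams through hardware).

    left, right : rectified grayscale images of equal shape (uint8)
    Returns a disparity map computed with a window x window block and
    disparities in [0, max_disp); written for clarity, not speed.
    """
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    left = left.astype(np.int32)
    right = right.astype(np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            block = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(block - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```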

  20. Generalization of Figure-Ground Segmentation from Binocular to Monocular Vision in an Embodied Biological Brain Model

    Science.gov (United States)

    2011-08-01

...figure and ground, the luminance cue breaks down and gestalt contours can fail to pop out. In this case we rely on color, which, having weak stereopsis... Subject terms: figure-ground, neural network, object.

  1. The implementation of depth measurement and related algorithms based on binocular vision in embedded AM5728

    Science.gov (United States)

    Deng, Zhiwei; Li, Xicai; Shi, Junsheng; Huang, Xiaoqiao; Li, Feiyan

    2018-01-01

Depth measurement is the most basic measurement in various machine vision applications, such as automatic driving, unmanned aerial vehicles (UAVs), robots and so on, and it has a wide range of uses. With the development of image processing technology and the improvement of hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 and an ordinary low-cost dual camera are used as the hardware platform. The related algorithms of dual-camera calibration, image matching and depth calculation have been studied and implemented on the hardware platform, and the hardware design and the rationality of the related algorithms are verified. The experimental results show that the system can realize simultaneous acquisition of binocular images, switching of left and right video sources, and display of the depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system can be up to 25 fps. The experimental results also show that the optimal measurement range of the system is from 0.5 to 1.5 meters, and the relative error of the distance measurement is less than 5%. Compared with PC, ARM11 and DMCU hardware platforms, the embedded AM5728 hardware is well suited to meeting real-time depth measurement requirements while maintaining image resolution.
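The depth calculation itself follows the standard pinhole relation Z = f·B/d once a disparity map is available. The sketch below uses OpenCV's block matcher with placeholder values for the focal length, baseline and file names; it illustrates the principle rather than the code running on the AM5728.

```python
import cv2
import numpy as np

# Placeholder intrinsics for an ordinary low-cost stereo pair (assumed values).
FOCAL_PX = 700.0      # focal length in pixels
BASELINE_M = 0.06     # camera baseline in metres

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching disparity on rectified 640 x 480 images.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed point to px

# Standard pinhole relation Z = f * B / d, valid where disparity > 0.
depth_m = np.zeros_like(disp)
valid = disp > 0
depth_m[valid] = FOCAL_PX * BASELINE_M / disp[valid]
```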

  2. Distribution of Binocular Vision Anomalies and Refractive Errors in Iranian Children With Learning Disabilities

    Directory of Open Access Journals (Sweden)

    Yekta

    2015-11-01

Full Text Available Background Visual problems in children contribute to learning disorders, which are one of the most influential factors in learning. Objectives The aim of the present study was to determine the prevalence of refractive and binocular vision errors in children with learning disorders. Patients and Methods In this cross-sectional study, 406 children with learning disorders with a mean age of 8.56 ± 2.4 years were evaluated. Examinations included the determination of refractive errors with an auto-refractometer and static retinoscopy, measurement of visual acuity with a Snellen chart, evaluation of ocular deviation, and measurement of stereopsis, amplitude of accommodation, and near point of convergence. Results Of the 406 participants, 319 (78.6%) were emmetropic in the right eye, 14.5% had myopia, and 6.9% had hyperopia according to cycloplegic refraction. Astigmatism was detected in 75 (18.5%) children. In our study, 89.9% of the children had no deviation, 1.0% had esophoria, and 6.4% had exophoria. In addition, 2.2% of the children had suppression. The near point of convergence ranged from 3 to 18 cm, with a mean of 10.12 ± 3.274 cm. The best corrected visual acuity was achieved in 98.5% and 98.0% of the children in the right and left eye, respectively. Conclusions The pattern of visual impairment in learning-impaired children is not much different from that in normal children; however, because these children may not be able to express themselves clearly, the lack of a correct diagnosis and appropriate treatment has resulted in a marked deficiency in recognizing visual disorders in these children. Therefore, knowledge of the prevalence of refractive errors in children with learning disorders can be considered the first step in their treatment.

  3. An Omnidirectional Stereo Vision-Based Smart Wheelchair

    Directory of Open Access Journals (Sweden)

    Yutaka Satoh

    2007-06-01

Full Text Available To support the safe self-movement of the disabled and the aged, we developed an electric wheelchair that can detect both the potential hazards in a moving environment and the postures and gestures of the user, by equipping it with the stereo omnidirectional system (SOS), which is capable of acquiring omnidirectional color image sequences and range data simultaneously in real time. The first half of this paper introduces the SOS and the basic technology behind it. To use the multicamera SOS on an electric wheelchair, we developed a high-speed, high-quality image synthesizing method; a method of recovering SOS attitude changes by using attitude sensors is also introduced. This method allows the SOS to be used without being affected by its mounting attitude. The second half of this paper introduces the prototype electric wheelchair actually manufactured and the experiments conducted with it. The usability of the electric wheelchair is also discussed.

  4. Systematic construction and control of stereo nerve vision network in intelligent manufacturing

    Science.gov (United States)

    Liu, Hua; Wang, Helong; Guo, Chunjie; Ding, Quanxin; Zhou, Liwei

    2017-10-01

A systematic method of constructing stereo vision by using a neural network is proposed, together with the operation and control mechanism used in actual operation. The method makes effective use of the learning and memory functions of the neural network after training with samples. Moreover, the neural network can learn the nonlinear relationship in the stereoscopic vision system and its interior and exterior orientation elements. Technical aspects worthy of attention include the limited constraints, the selection of the critical group, the operating speed and the operability. The results support our theoretical forecast.
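The abstract does not specify the network architecture or training data, so the sketch below is only an illustration of a network learning a nonlinear stereo mapping: a small scikit-learn MLP is trained on synthetic correspondences generated from an ideal rectified stereo model with an assumed focal length and baseline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
f, B = 700.0, 0.1        # assumed focal length (pixels) and baseline (metres)

# Synthetic 3-D points and their ideal rectified stereo projections.
P = rng.uniform([-1.0, -1.0, 1.0], [1.0, 1.0, 5.0], size=(5000, 3))   # X, Y, Z
xl = f * P[:, 0] / P[:, 2]
xr = f * (P[:, 0] - B) / P[:, 2]
y = f * P[:, 1] / P[:, 2]
features = np.column_stack([xl, y, xr])

# A small MLP learns the nonlinear mapping from image coordinates to 3-D.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(features, P)
print(net.predict(features[:3]))   # reconstructed points, compare with P[:3]
```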

  5. Binocular vision and abnormal head posture in children when watching television

    Directory of Open Access Journals (Sweden)

    Di Zhang

    2016-05-01

Full Text Available AIM: To determine the association between binocular vision and an abnormal head posture (AHP) when watching television (TV) in children 7-14y of age. METHODS: Fifty normal children in the normal group and 52 children with an AHP when watching TV in the AHP group were tested for spherical equivalents, far and near fusional convergence (FC) and fusional divergence (FD) amplitudes, near point of convergence, far and near heterophoria, accommodative convergence/accommodation ratio and stereoacuity. The values of these tests were compared between the two groups. The independent t test was applied at a confidence level of 95%. RESULTS: The far and near FC amplitudes and far FD amplitudes were lower in the AHP group (the far FC amplitudes: break point 13.6±5.4△, recovery point 8.7±5.4△; the near FC amplitudes: break point 14.5±7.3△, recovery point 10.3±5.1△; the far FD amplitudes: break point 3.9±2.7△, recovery point 2.6±2.3△) compared with those in the normal group (the far FC amplitudes: break point 19.1±6.2△, recovery point 12.4±4.5△; the near FC amplitudes: break point 22.3±8.0△, recovery point 16.1±5.7△; the far FD amplitudes: break point 7.0±2.1△, recovery point 4.6±1.9△). Other tests presented no statistically significant differences. CONCLUSION: An association between the reduced FC and FD amplitudes and the AHP in children when watching TV is proposed in the study. This kind of AHP is considered to be an anomalous manifestation that appears in a subset of paediatric patients with fusional vergence dysfunction.

  6. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    Science.gov (United States)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

In the process of dismounting and assembling the drop switch with the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, the gripper and the bolts used to fix the drop switch. To solve it, we study the binocular vision system theory of the robot and the characteristics of dismounting and assembling the drop switch, and we propose a coarse-to-fine image registration algorithm based on image correlation, which significantly improves the positioning precision of the manipulators and bolts. The algorithm performs three steps. Firstly, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the lowest registration accuracy by using the similarity of the target points' backgrounds in the right and left views; this is a typical coarse-to-fine strategy. Secondly, the system calculates the epipolar line, a sequence of candidate regions containing matching points is generated from the neighborhood of the epipolar line, and the optimal matching image is confirmed by calculating the similarity between the template image in the left view and each region in the sequence using correlation matching. Finally, the precise coordinates of the target points in the right and left views are calculated according to the optimal matching image. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels, the positioning accuracy in the world coordinate system is within 3 mm, and the positioning accuracy of the binocular vision system satisfies the requirements of dismounting and assembling the drop switch.
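The fine-matching step can be pictured as correlation of a left-view template against a band around the corresponding epipolar line in the right view. The sketch below assumes rectified images (so the epipolar line is an image row) and uses illustrative window sizes; it is not the authors' implementation.

```python
import cv2

def match_along_epipolar(left, right, pt, tmpl_half=15, band_half=5):
    """Correlate a template centred on pt = (x, y) in the left image against
    a horizontal band around the same row of the right image (rectified
    case, so the epipolar line is that row).  Assumes pt lies far enough
    from the image borders.  Returns the best-matching (x, y) on the right.
    """
    x, y = pt
    tmpl = left[y - tmpl_half:y + tmpl_half + 1,
                x - tmpl_half:x + tmpl_half + 1]
    band_top = y - tmpl_half - band_half
    band = right[band_top:y + tmpl_half + band_half + 1, :]
    score = cv2.matchTemplate(band, tmpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(score)
    best_x = max_loc[0] + tmpl_half
    best_y = band_top + max_loc[1] + tmpl_half
    return best_x, best_y
```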

  7. Spatial-frequency dependent binocular imbalance in amblyopia.

    Science.gov (United States)

    Kwon, MiYoung; Wiecek, Emily; Dakin, Steven C; Bex, Peter J

    2015-11-25

    While amblyopia involves both binocular imbalance and deficits in processing high spatial frequency information, little is known about the spatial-frequency dependence of binocular imbalance. Here we examined binocular imbalance as a function of spatial frequency in amblyopia using a novel computer-based method. Binocular imbalance at four spatial frequencies was measured with a novel dichoptic letter chart in individuals with amblyopia, or normal vision. Our dichoptic letter chart was composed of band-pass filtered letters arranged in a layout similar to the ETDRS acuity chart. A different chart was presented to each eye of the observer via stereo-shutter glasses. The relative contrast of the corresponding letter in each eye was adjusted by a computer staircase to determine a binocular Balance Point at which the observer reports the letter presented to either eye with equal probability. Amblyopes showed pronounced binocular imbalance across all spatial frequencies, with greater imbalance at high compared to low spatial frequencies (an average increase of 19%, p imbalance may be useful for diagnosing amblyopia and as an outcome measure for recovery of binocular vision following therapy.

  8. Landing performance by low-time private pilots after the sudden loss of binocular vision - Cyclops II

    Science.gov (United States)

    Lewis, C. E., Jr.; Swaroop, R.; Mcmurty, T. C.; Blakeley, W. R.; Masters, R. L.

    1973-01-01

A study of low-time general aviation pilots who, in a series of spot landings, were suddenly deprived of binocular vision by patching either eye on the downwind leg of a standard, closed traffic pattern. Data collected during these landings were compared with control data from landings flown with normal vision during the same flight. The sequence of patching and the mix of control and monocular landings were randomized to minimize the effect of learning. No decrease in performance was observed during landings with vision restricted to one eye; in fact, performance improved. This observation is reported at a high level of confidence (p less than 0.001). These findings confirm the previous work of Lewis and Krier and have important implications with regard to aeromedical certification standards.

  9. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study

    International Nuclear Information System (INIS)

    Suenaga, Hideyuki; Tran, Huy Hoang; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Takato, Tsuyoshi

    2015-01-01

This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer was used to create an integral videography image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject's anatomic site and its 3D-IV image was displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Accurate registration of the volunteer's anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications. The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users.

  10. On-line measurement of ski-jumper trajectory: combining stereo vision and shape description

    Science.gov (United States)

    Nunner, T.; Sidla, O.; Paar, G.; Nauschnegg, B.

    2010-01-01

Ski jumping has continuously raised major public interest since the early 70s of the last century, mainly in Europe and Japan. The sport undergoes high-level analysis and development based, among others, on biodynamic measurements during the take-off and flight phases of the jumper. We report on a vision-based solution for such measurements that provides a full 3D trajectory of unique points on the jumper's shape. During the jump, synchronized stereo images are taken by a calibrated camera system at video rate. Using methods stemming from video surveillance, the jumper is detected and localized in the individual stereo images, and learning-based deformable shape analysis identifies the jumper's silhouette. The 3D reconstruction of the trajectory is based on standard stereo forward intersection of distinct shape points, such as the helmet top or heel. In the reported study, the measurements are verified by an independent GPS receiver mounted on top of the jumper's helmet, synchronized to the timing of the camera exposures. Preliminary estimations report an accuracy of +/-20 cm at a 30 Hz imaging frequency over a 40 m trajectory. The system is ready for fully automatic on-line application on ski-jumping sites that allow stereo camera views with an approximate base-distance ratio of 1:3 within the entire area of investigation.
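Stereo forward intersection of a single shape point reduces to standard triangulation from the two camera projection matrices; the sketch below uses cv2.triangulatePoints with placeholder intrinsics, an assumed 3 m baseline and made-up image coordinates.

```python
import cv2
import numpy as np

# Projection matrices of the calibrated stereo pair (placeholder values:
# identical intrinsics K, right camera translated by an assumed 3 m baseline).
K = np.array([[1200.0, 0.0, 960.0], [0.0, 1200.0, 540.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-3.0], [0.0], [0.0]])])

# Image coordinates of one shape point (e.g. the helmet top) in both views,
# shaped 2 x N as expected by OpenCV.
pts1 = np.array([[1015.2], [480.7]])
pts2 = np.array([[842.9], [481.1]])

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4 x N homogeneous
X = (X_h[:3] / X_h[3]).ravel()                    # metric 3-D point
```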

  11. Design of interpolation functions for subpixel-accuracy stereo-vision systems.

    Science.gov (United States)

    Haller, Istvan; Nedevschi, Sergiu

    2012-02-01

Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of the stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for interpolation function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first proposal uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second proposal employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as well as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions. © 2011 IEEE
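For context, the classical sub-pixel refinement that such data-driven methodologies aim to adapt or replace is the three-point parabola fit around the integer cost minimum, sketched below.

```python
import numpy as np

def parabolic_subpixel(cost, d):
    """Classic three-point parabola refinement around the integer disparity
    d that minimises the matching cost array; returns d plus an offset in
    (-0.5, 0.5).  This is the traditional block-matching interpolation that
    adaptive, data-driven interpolation functions aim to improve upon."""
    c_m, c_0, c_p = float(cost[d - 1]), float(cost[d]), float(cost[d + 1])
    denom = c_m - 2.0 * c_0 + c_p
    if denom == 0.0:
        return float(d)
    return d + 0.5 * (c_m - c_p) / denom
```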

  12. Near Point of Convergence Break for Different Age Groups in Turkish Population with Normal Binocular Vision: Normative Data

    Directory of Open Access Journals (Sweden)

    Nihat Sayın

    2013-12-01

Full Text Available Purpose: The purpose of this study was to evaluate the near point of convergence break in a Turkish population with normal binocular vision and to obtain normative data for the near point of convergence break in different age groups. Such a database has not been previously reported. Material and Method: In this prospective study, 329 subjects with normal binocular vision (age range, 3-72 years) were evaluated. The near point of convergence break was measured 4 times repeatedly with an accommodative target. Mean values of the near point of convergence break were provided for these age groups (≤10, 11-20, 21-30, 31-40, 41-50, 51-60, and >60 years old). A statistical comparison (one-way ANOVA and post-hoc test) of these values between age groups was performed. A correlation between the near point of convergence break and age was evaluated by Pearson's correlation test. Results: The mean value for the near point of convergence break was 2.46±1.88 (0.5-14) cm. Specifically, 95% of measurements in all subjects were 60 year-old age groups in the near point of convergence break values (p=0.0001, p=0.0001, p=0.006, p=0.001, p=0.004). A mild positive correlation was observed between the increase in the near point of convergence break and the increase of age (r=0.355) (p<0.001). Discussion: The values derived from a relatively large study population to establish a normative database for the near point of convergence break in the Turkish population with normal binocular vision vary with age. This database has not been previously reported. (Turk J Ophthalmol 2013; 43: 402-6)

  13. A Framework for Obstacles Avoidance of Humanoid Robot Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2013-04-01

Full Text Available In this paper, we propose a framework for a multiple-moving-obstacle avoidance strategy using stereo vision for a humanoid robot in an indoor environment. We assume that this humanoid robot is used as a service robot to deliver a cup to a customer from a starting point to a destination point. We have successfully developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles, and to initiate a maneuver. A group of people who are walking is tracked as multiple moving obstacles. A predefined maneuver to avoid obstacles is applied to the robot because of the limited view angle of the stereo camera for detecting multiple obstacles. The contribution of this research is a new method for a multiple-moving-obstacle avoidance strategy with a Bayesian approach using stereo vision, based on the direction and speed of the obstacles. Depth estimation is used to obtain the distance between the obstacles and the robot. We present the results of experiments with the humanoid robot called Gatotkoco II, which uses our proposed method, and evaluate its performance. The proposed moving-obstacle avoidance strategy was tested empirically and proved effective for the humanoid robot.

  14. Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.

    Science.gov (United States)

    Rumei Zhang; Hao Liu; Jianda Han

    2017-07-01

Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape with integral cumulative error. The proposed fusion is anticipated to compensate for their shortcomings and improve the tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by a morphology operation and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into the distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy, estimated by averaging the absolute positioning errors between shape sensing and stereo vision, is 0.67±0.65 mm, 0.41±0.25 mm, and 0.72±0.43 mm for x, y and z, respectively. The results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.

  15. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    Science.gov (United States)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

Irregular-shape objects with different 3-dimensional (3D) appearances are difficult to shape into a customized uniform pattern with current laser machining approaches. A laser galvanometric scanning system (LGS) could be a potential candidate since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool that is capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of those irregular-shape objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which takes advantage of both the stereo vision solution and the conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, these two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  16. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    Directory of Open Access Journals (Sweden)

    Chien-Lun Hou

    2011-02-01

Full Text Available In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.

  17. Binocular Therapy for Childhood Amblyopia Improves Vision Without Breaking Interocular Suppression.

    Science.gov (United States)

    Bossi, Manuela; Tailor, Vijay K; Anderson, Elaine J; Bex, Peter J; Greenwood, John A; Dahlmann-Noor, Annegret; Dakin, Steven C

    2017-06-01

    Amblyopia is a common developmental visual impairment characterized by a substantial difference in acuity between the two eyes. Current monocular treatments, which promote use of the affected eye by occluding or blurring the fellow eye, improve acuity, but are hindered by poor compliance. Recently developed binocular treatments can produce rapid gains in visual function, thought to be as a result of reduced interocular suppression. We set out to develop an effective home-based binocular treatment system for amblyopia that would engage high levels of compliance but that would also allow us to assess the role of suppression in children's response to binocular treatment. Balanced binocular viewing therapy (BBV) involves daily viewing of dichoptic movies (with "visibility" matched across the two eyes) and gameplay (to monitor compliance and suppression). Twenty-two children (3-11 years) with anisometropic (n = 7; group 1) and strabismic or combined mechanism amblyopia (group 2; n = 6 and 9, respectively) completed the study. Groups 1 and 2 were treated for a maximum of 8 or 24 weeks, respectively. The treatment elicited high levels of compliance (on average, 89.4% ± 24.2% of daily dose in 68.23% ± 12.2% of days on treatment) and led to a mean improvement in acuity of 0.27 logMAR (SD 0.22) for the amblyopic eye. Importantly, acuity gains were not correlated with a reduction in suppression. BBV is a binocular treatment for amblyopia that can be self-administered at home (with remote monitoring), producing rapid and substantial benefits that cannot be solely mediated by a reduction in interocular suppression.

  18. Stereo-vision and 3D reconstruction for nuclear mobile robots

    International Nuclear Information System (INIS)

    Lecoeur-Taibi, I.; Vacherand, F.; Rivallin, P.

    1991-01-01

In order to perceive the geometric structure of the surrounding environment of a mobile robot, a 3D reconstruction system has been developed. Its main purpose is to provide geometric information to an operator who has to telepilot the vehicle in a nuclear power plant. The perception system is split into two parts: the vision part and the map building part. Vision is enhanced with a fusion process that rejects bad samples over space and time. The vision is based on trinocular stereo-vision, which provides a range image of the image contours. It performs line contour correlation on horizontal image pairs and vertical image pairs. The results are then spatially fused in order to obtain one distance image, with a quality independent of the orientation of the contour. The 3D reconstruction is based on grid-based sensor fusion. As the robot moves and perceives its environment, distance data is accumulated onto a regular square grid, taking into account the uncertainty of the sensor through a sensor measurement statistical model. This approach allows both spatial and temporal fusion. Uncertainty due to sensor position and robot position is also integrated into the absolute local map. This system is modular and generic and can integrate a 2D laser range finder and active vision. (author)
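The grid-based fusion can be pictured as a minimal log-odds evidence grid; the fixed hit/miss increments below are an assumed stand-in for the paper's statistical sensor measurement model, not its actual formulation.

```python
import numpy as np

class EvidenceGrid:
    """Minimal log-odds grid for accumulating range data over space and time.
    The fixed hit/miss increments are an illustrative stand-in for a
    statistical sensor measurement model."""

    def __init__(self, size=200, resolution=0.1):
        self.log_odds = np.zeros((size, size))
        self.res = resolution                 # metres per cell

    def update(self, cells_hit, cells_free, l_hit=0.85, l_free=-0.4):
        for i, j in cells_hit:                # cells containing a range return
            self.log_odds[i, j] += l_hit
        for i, j in cells_free:               # cells traversed by the ray
            self.log_odds[i, j] += l_free

    def occupancy(self):
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))
```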

  19. A novel method of robot location using RFID and stereo vision

    Science.gov (United States)

    Chen, Diansheng; Zhang, Guanxin; Li, Zhen

    2012-04-01

This paper proposes a new global localization method for mobile robots based on RFID (Radio Frequency Identification Devices) and stereo vision, which allows the robot to obtain global coordinates with good accuracy while quickly adapting to unfamiliar and new environments. This method uses RFID tags as artificial landmarks; the 3D coordinate of each tag in the global coordinate system is written in its IC memory. The robot can read it through an RFID reader; meanwhile, using stereo vision, the 3D coordinate of the tag in the robot coordinate system is measured. Combined with the robot's attitude coordinate-system transformation matrix from the pose measuring system, the translation of the robot coordinate system to the global coordinate system is obtained, which is also the coordinate of the robot's current location in the global coordinate system. The average error of our method is 0.11 m in experiments conducted in a 7 m × 7 m lobby; the result is much more accurate than other localization methods.
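Given one tag observed in both frames, the robot's global position follows from a single rigid-body relation, p_global = R·p_robot + t; a minimal sketch with placeholder numbers is shown below.

```python
import numpy as np

def robot_global_position(p_tag_global, p_tag_robot, R_robot_to_global):
    """Given one tag's 3-D coordinate in the global frame (read from its IC
    memory), the same tag measured in the robot frame by stereo vision, and
    the robot-to-global rotation supplied by the attitude/pose system,
    return the robot origin in global coordinates:
        p_global = R @ p_robot + t   =>   t = p_tag_global - R @ p_tag_robot
    """
    return p_tag_global - R_robot_to_global @ p_tag_robot

# Illustrative numbers (all placeholders).
R = np.eye(3)                                   # robot aligned with global axes
tag_global = np.array([3.2, 1.5, 0.8])
tag_robot = np.array([1.1, 0.2, 0.8])
print(robot_global_position(tag_global, tag_robot, R))
```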

  20. Creation Greenhouse Environment Map Using Localization of Edge of Cultivation Platforms Based on Stereo Vision

    Directory of Open Access Journals (Sweden)

    A Nasiri

    2017-10-01

Full Text Available Introduction Stereo vision means the capability of extracting depth based on the analysis of two images taken from different angles of one scene. The result of stereo vision is a collection of three-dimensional points which describes the details of the scene proportional to the resolution of the obtained images. Vehicle automatic steering and crop growth monitoring are two important operations in precision agriculture. The essential aspects of automated steering are the position and orientation of the agricultural equipment in relation to the crop rows, the detection of obstacles and the design of path planning between the crop rows. The developed map can provide this information in real time. Machine vision has the capabilities to perform these tasks in order to execute operations such as cultivation, spraying and harvesting. In a greenhouse environment, it is possible to develop a map and perform automatic control by detecting and localizing the cultivation platforms as the main moving obstacles. The current work was performed to develop a method based on stereo vision for detecting and localizing platforms, and then providing a two-dimensional map of the cultivation platforms in the greenhouse environment. Materials and Methods In this research, two webcams, made by Microsoft Corporation with a resolution of 960×544, are connected to the computer via USB2 in order to produce a parallel stereo camera. Due to the structure of the cultivation platforms, the number of points in the point cloud is decreased by extracting only the upper and lower edges of the platform. The proposed method aims at extracting the edges based on depth-discontinuity features in the region of the platform edge. By getting the disparity image of the platform edges from the rectified stereo images and translating its data to 3D space, the point cloud model of the environment is constructed. Then by projecting the points to the XZ plane and putting local maps together

  1. Incidence of vertical phoria on postural control during binocular vision: what perspective for prevention to nonspecific chronic pain management?

    Science.gov (United States)

    Matheron, Eric; Kapoula, Zoï

    2015-01-01

    Vertical heterophoria (VH) is the latent vertical misalignment of the eyes when the retinal images are dissociated, vertical orthophoria (VO) when there is no misalignment. Studies on postural control, during binocular vision in upright stance, reported that healthy subjects with small VH vs. VO are less stable, but the experimental cancellation of VH with an appropriate prism improves postural stability. The same behavior was recorded in nonspecific chronic back pain subjects, all with VH. It was hypothesized that, without refraction problems, VH indicates a perturbation of the somaesthetic cues required in the sensorimotor loops involved in postural control and the capacity of the CNS to optimally integrate these cues, suggesting prevention possibilities. Sensorimotor conflict can induce pain and modify sensory perception in some healthy subjects; some nonspecific pain or chronic pain could result from such prolonged conflict in which VH could be a sign, with new theoretical and clinical implications.

  2. Stereo-vision-based terrain mapping for off-road autonomous navigation

    Science.gov (United States)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-05-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as nogo regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
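The per-cell content of such a compact terrain map can be sketched as a small record plus a toy temporal merge; the field set mirrors the quantities listed above, while the filter itself is an assumption made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class TerrainCell:
    """Illustrative per-cell record for a compact terrain map; the fields
    mirror the quantities listed in the abstract."""
    elevation: float = 0.0      # metres
    terrain_class: int = 0      # e.g. soil / vegetation / water label
    roughness: float = 0.0
    cost: float = 0.0           # traversability cost
    confidence: float = 0.0     # 0..1
    nogo: bool = False          # binary obstacle flag

def merge(world: TerrainCell, frame: TerrainCell, alpha: float = 0.3) -> TerrainCell:
    """Toy exponential temporal filter applied when a single-frame cell is
    merged into the world map (a stand-in for the actual filtering)."""
    world.elevation = (1 - alpha) * world.elevation + alpha * frame.elevation
    world.roughness = (1 - alpha) * world.roughness + alpha * frame.roughness
    world.cost = (1 - alpha) * world.cost + alpha * frame.cost
    world.confidence = min(1.0, world.confidence + alpha * frame.confidence)
    world.terrain_class = frame.terrain_class   # latest classification wins
    world.nogo = world.nogo or frame.nogo
    return world
```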

  3. Stereo Vision Guiding for the Autonomous Landing of Fixed-Wing UAVs: A Saliency-Inspired Approach

    Directory of Open Access Journals (Sweden)

    Zhaowei Ma

    2016-03-01

Full Text Available It is an important requirement for unmanned aerial vehicles (UAVs) to land on a runway safely. This paper concentrates on stereo vision localization for a fixed-wing UAV's autonomous landing within global navigation satellite system (GNSS)-denied environments. A ground stereo vision guidance system imitating the human visual system (HVS) is presented for the autonomous landing of fixed-wing UAVs. A saliency-inspired algorithm is presented and developed to detect the flying UAV target in captured sequential images. Furthermore, an extended Kalman filter (EKF)-based state estimation is employed to reduce localization errors caused by measurement errors of object detection and pan-tilt unit (PTU) attitudes. Finally, stereo-vision-dataset-based experiments are conducted to verify the effectiveness of the proposed visual detection method and error correction algorithm. The comparison between the visual guidance approach and a differential GPS-based approach indicates that the stereo vision system and detection method achieve a better guiding effect.

  4. When Dijkstra Meets Vanishing Point: A Stereo Vision Approach for Road Detection.

    Science.gov (United States)

    Zhang, Yigong; Su, Yingna; Yang, Jian; Ponce, Jean; Kong, Hui

    2018-05-01

In this paper, we propose a vanishing-point constrained Dijkstra road model for road detection in a stereo-vision paradigm. First, the stereo camera is used to generate the u- and v-disparity maps of the road image, from which the horizon can be extracted. With the horizon and ground region constraints, we can robustly locate the vanishing point of the road region. Second, a weighted graph is constructed using all pixels of the image, and the detected vanishing point is treated as the source node of the graph. By computing a vanishing-point constrained Dijkstra minimum-cost map, where both the disparity and the gradient of the gray image are used to calculate the cost between two neighboring pixels, the problem of detecting road borders in the image is transformed into that of finding two shortest paths that originate from the vanishing point and end at two pixels in the last row of the image. The proposed approach has been implemented and tested over 2600 grayscale images of different road scenes in the KITTI data set. The experimental results demonstrate that this training-free approach can detect the horizon, vanishing point, and road regions very accurately and robustly. It can achieve promising performance.
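The u- and v-disparity maps are simply per-column and per-row histograms of the disparity image; a minimal NumPy sketch is given below, with the usual reading that the ground plane appears as a slanted line in the v-disparity map whose zero-disparity intersection gives the horizon row.

```python
import numpy as np

def uv_disparity(disp, max_disp=64):
    """Build u- and v-disparity histograms from an integer disparity map.
    Row r of v_disp histograms the disparities occurring in image row r
    (and u_disp does the same per column); the ground plane then appears
    as a slanted line in v_disp whose zero-disparity intersection gives
    the horizon row."""
    h, w = disp.shape
    v_disp = np.zeros((h, max_disp), dtype=np.int32)
    u_disp = np.zeros((max_disp, w), dtype=np.int32)
    for r in range(h):
        v_disp[r] = np.bincount(disp[r].clip(0, max_disp - 1),
                                minlength=max_disp)
    for c in range(w):
        u_disp[:, c] = np.bincount(disp[:, c].clip(0, max_disp - 1),
                                   minlength=max_disp)
    return u_disp, v_disp
```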

  5. Determining the orientation of the observed object in threedimensional space using stereo vision methods

    International Nuclear Information System (INIS)

    Ponomarev, S

    2014-01-01

The task of matching an image of an object with its template is central to many optoelectronic systems. Solving the matching problem in three-dimensional space, in contrast to structural alignment in the image plane, allows a larger amount of information about the object to be used for determining its orientation, which may increase the probability of correct matching. When stereo vision methods are used to construct a three-dimensional image of the object, it becomes possible to achieve invariance with respect to the background and the distance to the observed object. Only the three orientation angles of the object relative to the camera remain uncertain and require measurement. This paper proposes a method for determining the orientation angles of the observed object in three-dimensional space, which is based on the processing of stereo image sequences. A disparity map segmentation method that ensures invariance to the background is presented. Quantitative estimates of the effectiveness of the proposed method are presented and discussed.

  6. In-vehicle stereo vision system for identification of traffic conflicts between bus and pedestrian

    Directory of Open Access Journals (Sweden)

    Salvatore Cafiso

    2017-02-01

Full Text Available The traffic conflict technique (TCT) was developed as a "surrogate measure of road safety" to identify near-crash events by using measures of the spatial and temporal proximity of road users. Traditionally, applications of the TCT focus on a specific site by way of manual or automated supervision. Nowadays the development of in-vehicle (IV) technologies provides new opportunities for monitoring driver behavior and interaction with other road users directly in the traffic stream. In this paper a stereo vision and GPS system for traffic conflict investigation is presented for detecting conflicts between a vehicle and pedestrians. The system is able to acquire geo-referenced sequences of stereo frames that are used to provide real-time information related to conflict occurrence and severity. As a case study, an urban bus was equipped with a prototype of the system and a trial in the city of Catania (Italy) was carried out analyzing conflicts with pedestrians crossing in front of the bus. Experimental results pointed out the potential of the system for collecting data that can be used to derive suitable traffic conflict measures. Specifically, a risk index of the conflict between pedestrians and vehicles is proposed to classify collision probability and severity using data collected by the system. This information may be used to develop in-vehicle warning systems and urban network risk assessment.

  7. Development of a teaching system for an industrial robot using stereo vision

    Science.gov (United States)

    Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki

    1997-12-01

    The teaching and playback method is mainly a teaching technique for industrial robots. However, this technique takes time and effort in order to teach. In this study, a new teaching algorithm using stereo vision based on human demonstrations in front of two cameras is proposed. In the proposed teaching algorithm, a robot is controlled repetitively according to angles determined by the fuzzy sets theory until it reaches an instructed teaching point, which is relayed through cameras by an operator. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibrations are needed. This is because the fuzzy sets theory, which is able to express qualitatively the control commands to the robot, is used instead of conventional kinematic equations. Thus, a simple and easy teaching operation is realized with this teaching algorithm. Simulations and experiments have been performed on the proposed teaching system, and data from testing has confirmed the usefulness of our design.

  8. Motorcycle That See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles

    Directory of Open Access Journals (Sweden)

    Gustavo Gil

    2018-01-01

    Full Text Available Advanced driver assistance systems, ADAS, have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists, in fact ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensors for advanced motorcycle safety applications.

  9. Motorcycles that See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles

    Science.gov (United States)

    2018-01-01

    Advanced driver assistance systems, ADAS, have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists, in fact ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensors for advanced motorcycle safety applications. PMID:29351267

  10. Motorcycle That See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles.

    Science.gov (United States)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-01-19

    Advanced driver assistance systems, ADAS, have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists, in fact ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensors for advanced motorcycle safety applications.

  11. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    Science.gov (United States)

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  12. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    Science.gov (United States)

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-01-01

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments. PMID:27649178

  13. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    Directory of Open Access Journals (Sweden)

    Basam Musleh

    2016-09-01

    Full Text Available Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  14. Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks

    Science.gov (United States)

    Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min

    2015-10-01

    Vehicle positioning has been subjected to extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the position of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique by using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information such as speed, lane change, driver's condition, etc., through optical wireless links of neighboring vehicles. Thus, the target vehicle position that is too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation for CVP. The proposed neural-network-based technique can estimate target vehicle position from only two image points of target vehicles using stereo vision. For this, we use rear LEDs on target vehicles as image points. We show from simulation results that our neural-network-based method achieves better accuracy than that of the computer-vision method.
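
    As a toy illustration of the idea of learning range from image measurements with a back-propagation network, the sketch below trains a small MLP on synthetic stereo observations of a single rear LED; the camera geometry, noise level and data are assumptions and do not reproduce the authors' OCC setup or accuracy.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    f, B = 800.0, 0.5                     # assumed focal length [px] and stereo baseline [m]
    Z = rng.uniform(5.0, 60.0, 5000)      # true ranges [m]
    uL = rng.uniform(100.0, 540.0, 5000)  # LED column in the left image
    uR = uL - f * B / Z + rng.normal(0.0, 0.3, 5000)   # right column = left - disparity + noise
    X, y = np.column_stack([uL, uR]), Z

    # Back-propagation network mapping the two image points to range.
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    net.fit(X[:4000], y[:4000])
    print("mean abs. range error [m]:", np.abs(net.predict(X[4000:]) - y[4000:]).mean())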

  15. Surface modeling method for aircraft engine blades by using speckle patterns based on the virtual stereo vision system

    Science.gov (United States)

    Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang

    2018-03-01

    A blade is one of the most important components of an aircraft engine. Due to its high manufacturing costs, it is indispensable to come up with methods for repairing damaged blades. In order to obtain a surface model of the blades, this paper proposes a modeling method by using speckle patterns based on the virtual stereo vision system. Firstly, blades are sprayed evenly creating random speckle patterns and point clouds from blade surfaces can be calculated by using speckle patterns based on the virtual stereo vision system. Secondly, boundary points are obtained in the way of varied step lengths according to curvature and are fitted to get a blade surface envelope with a cubic B-spline curve. Finally, the surface model of blades is established with the envelope curves and the point clouds. Experimental results show that the surface model of aircraft engine blades is fair and accurate.
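
    The envelope-fitting step can be illustrated with a standard cubic B-spline fit. The sketch below uses SciPy on ordered 2-D boundary points; the point ordering, the smoothing factor and the sample count are assumptions, and the paper's curvature-dependent step-length boundary extraction is not reproduced.

    import numpy as np
    from scipy.interpolate import splprep, splev

    def fit_boundary_bspline(points, smoothing=0.001, n_samples=400):
        """Fit a cubic B-spline through ordered 2-D boundary points and resample it."""
        x, y = points[:, 0], points[:, 1]
        tck, _ = splprep([x, y], s=smoothing, k=3)   # k = 3 -> cubic B-spline
        u = np.linspace(0.0, 1.0, n_samples)
        xs, ys = splev(u, tck)                       # dense samples along the envelope
        return np.column_stack([xs, ys])

    # Example: noisy points along a curved, blade-like boundary (synthetic data).
    t = np.linspace(0.0, np.pi, 60)
    pts = np.column_stack([t, 0.3 * np.sin(t)]) + np.random.normal(0.0, 0.002, (60, 2))
    envelope = fit_boundary_bspline(pts)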

  16. Estimation of 3D reconstruction errors in a stereo-vision system

    Science.gov (United States)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure of manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state if reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the shown experimental results.
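
    One concrete way to push a matching or segmentation error estimate through to 3D position uncertainty is the first-order depth relation for a rectified pair, Z = f*B/d, which gives sigma_Z ≈ (Z^2 / (f*B)) * sigma_d. The sketch below applies this textbook relation; it covers only a small piece of the error propagation discussed in the abstract, and the numbers in the example are assumed.

    import numpy as np

    def depth_uncertainty(f_px, baseline_m, disparity_px, sigma_d_px):
        """First-order propagation of a disparity error into a depth error (Z = f*B/d)."""
        Z = f_px * baseline_m / disparity_px
        sigma_Z = (Z ** 2) / (f_px * baseline_m) * sigma_d_px
        return Z, sigma_Z

    # Example: f = 1200 px, B = 0.12 m, d = 24 px, 0.25 px matching uncertainty.
    Z, sZ = depth_uncertainty(1200.0, 0.12, 24.0, 0.25)
    print(f"depth = {Z:.2f} m, 1-sigma depth error = {sZ * 1000:.1f} mm")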

  17. Comparative morphometry of facial surface models obtained from a stereo vision system in a healthy population

    Science.gov (United States)

    López, Leticia; Gastélum, Alfonso; Chan, Yuk Hin; Delmas, Patrice; Escorcia, Lilia; Márquez, Jorge

    2014-11-01

    Our goal is to obtain three-dimensional measurements of craniofacial morphology in a healthy population, using standard landmarks established by a physical-anthropology specialist and picked from computer reconstructions of the face of each subject. To do this, we designed a multi-stereo vision system that will be used to create a database of human face surfaces from a healthy population, for eventual applications in medicine, forensic sciences and anthropology. The acquisition process consists of obtaining the depth map information from three points of view; each depth map is obtained from a calibrated pair of cameras. The depth maps are used to build a complete, frontal, triangular-surface representation of the subject face. The triangular surface is used to locate the landmarks, and the measurements are analyzed with a MATLAB script. The classification of the subjects was done with the aid of a specialist anthropologist who defines specific subject indices, according to the lengths, areas, ratios, etc., of the different structures and the relationships among facial features. We studied a healthy population and the indices from this population will be used to obtain representative averages that later help with the study and classification of possible pathologies.

  18. Binocular vision, the optic chiasm, and their associations with vertebrate motor behavior

    Directory of Open Access Journals (Sweden)

    Matz Lennart Larsson

    2015-07-01

    Full Text Available Ipsilateral retinal projections (IRP) in the optic chiasm (OC) vary considerably. Most animal groups possess laterally situated eyes and no or few IRP, but, e.g., cats and primates have frontal eyes and high proportions of IRP. The traditional hypothesis that bifocal vision developed to enable predation or to increase perception in restricted light conditions applies mainly to mammals. The eye-forelimb (EF) hypothesis presented here suggests that the reception of visual feedback of limb movements in the limb-steering cerebral hemisphere was the fundamental mechanism behind the OC evolution. In other words, evolutionary change in the OC was necessary to preserve hemispheric autonomy. In the majority of vertebrates, motor processing, tactile, proprioceptive, and visual information involved in steering the hand (limb, paw, fin) is primarily received only in the contralateral hemisphere, while multisensory information from the ipsilateral limb is minimal. Since the involved motor nuclei, somatosensory areas, and vision neurons are situated in the same hemisphere, the neuronal pathways involved will be relatively short, optimizing the size of the brain. That would not have been possible without evolutionary modifications of IRP. Multiple axon-guidance genes, which determine whether axons will cross the midline or not, have shaped the OC anatomy. Evolutionary change in the OC seems to be key to preserving hemispheric autonomy when the body and eye evolve to fit new ecological niches. The EF hypothesis may explain the low proportion of IRP in birds, reptiles, and most fishes; the relatively high proportions of IRP in limbless vertebrates; high proportions of IRP in arboreal, in contrast to ground-dwelling, marsupials; the lack of IRP in dolphins; abundant IRP in primates and most predatory mammals, and why IRP emanate exclusively from the temporal retina. The EF hypothesis seems applicable to vertebrates in general and hence more parsimonious than

  19. Improved Binocular Outcomes Following Binocular Treatment for Childhood Amblyopia.

    Science.gov (United States)

    Kelly, Krista R; Jost, Reed M; Wang, Yi-Zhong; Dao, Lori; Beauchamp, Cynthia L; Leffler, Joel N; Birch, Eileen E

    2018-03-01

    Childhood amblyopia can be treated with binocular games or movies that rebalance contrast between the eyes, which is thought to reduce depth of interocular suppression so the child can experience binocular vision. While visual acuity gains have been reported following binocular treatment, studies rarely report gains in binocular outcomes (i.e., stereoacuity, suppression) in amblyopic children. Here, we evaluated binocular outcomes in children who had received binocular treatment for childhood amblyopia. Data for amblyopic children enrolled in two ongoing studies were pooled. The sample included 41 amblyopic children (6 strabismic, 21 anisometropic, 14 combined; age 4-10 years; ≤4 prism diopters [PD]) who received binocular treatment (20 game, 21 movies; prescribed 9-10 hours treatment). Amblyopic eye visual acuity and binocular outcomes (Randot Preschool Stereoacuity, extent of suppression, and depth of suppression) were assessed at baseline and at 2 weeks. Mean amblyopic eye visual acuity (P suppression (P = 0.003) were reduced from baseline at the 2-week visit (87% game adherence, 100% movie adherence). Depth of suppression was reduced more in children aged suppression was correlated with a larger depth of suppression reduction at 2 weeks (P = 0.001). After 2 weeks, binocular treatment in amblyopic children improved visual acuity and binocular outcomes, reducing the extent and depth of suppression and improving stereoacuity. Binocular treatments that rebalance contrast to overcome suppression are a promising additional option for treating amblyopia.

  20. Prism-based single-camera system for stereo display

    Science.gov (United States)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines the prism and single camera and puts forward a method of stereo imaging with low cost. First of all, according to the principle of geometrical optics, we can deduce the relationship between the prism single-camera system and dual-camera system, and according to the principle of binocular vision we can deduce the relationship between binoculars and dual camera. Thus we can establish the relationship between the prism single-camera system and binoculars and get the positional relation of prism, camera, and object with the best effect of stereo display. Finally, using the active shutter stereo glasses of NVIDIA Company, we can realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can make use of the prism single-camera system to simulate the various observation manners of eyes. The stereo imaging system, which is designed by the method proposed by this paper, can restore the 3-D shape of the object being photographed factually.

  1. Global stereo matching algorithm based on disparity range estimation

    Science.gov (United States)

    Li, Jing; Zhao, Hong; Gu, Feifei

    2017-09-01

    The global stereo matching algorithms are of high accuracy for the estimation of disparity map, but the time-consuming in the optimization process still faces a curse, especially for the image pairs with high resolution and large baseline setting. To improve the computational efficiency of the global algorithms, a disparity range estimation scheme for the global stereo matching is proposed to estimate the disparity map of rectified stereo images in this paper. The projective geometry in a parallel binocular stereo vision is investigated to reveal a relationship between two disparities at each pixel in the rectified stereo images with different baselines, which can be used to quickly obtain a predicted disparity map in a long baseline setting estimated by that in the small one. Then, the drastically reduced disparity ranges at each pixel under a long baseline setting can be determined by the predicted disparity map. Furthermore, the disparity range estimation scheme is introduced into the graph cuts with expansion moves to estimate the precise disparity map, which can greatly save the cost of computing without loss of accuracy in the stereo matching, especially for the dense global stereo matching, compared to the traditional algorithm. Experimental results with the Middlebury stereo datasets are presented to demonstrate the validity and efficiency of the proposed algorithm.
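
    The baseline relation behind the scheme can be written as d_long = d_short * (B_long / B_short), since Z = f*B/d for a rectified pair. The short sketch below turns a coarse short-baseline disparity map into per-pixel search bounds for the long-baseline pair; the fixed pixel margin is an assumption, not the paper's bound.

    import numpy as np

    def predict_disparity_range(d_short, b_short, b_long, margin_px=2.0):
        """Per-pixel disparity search bounds for a long-baseline pair, predicted
        from a disparity map estimated with a shorter baseline."""
        d_pred = d_short * (b_long / b_short)          # disparity scales with baseline
        d_min = np.maximum(d_pred - margin_px, 0.0)
        d_max = d_pred + margin_px
        return d_min, d_max                            # bounds fed to the global optimizer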

  2. Three-dimensional sensing methodology combining stereo vision and phase-measuring profilometry based on dynamic programming

    Science.gov (United States)

    Lee, Hyunki; Kim, Min Young; Moon, Jeon Il

    2017-12-01

    Phase measuring profilometry and moiré methodology have been widely applied to the three-dimensional shape measurement of target objects, because of their high measuring speed and accuracy. However, these methods suffer from inherent limitations called a correspondence problem, or 2π-ambiguity problem. Although a kind of sensing method to combine well-known stereo vision and phase measuring profilometry (PMP) technique simultaneously has been developed to overcome this problem, it still requires definite improvement for sensing speed and measurement accuracy. We propose a dynamic programming-based stereo PMP method to acquire more reliable depth information and in a relatively small time period. The proposed method efficiently fuses information from two stereo sensors in terms of phase and intensity simultaneously based on a newly defined cost function of dynamic programming. In addition, the important parameters are analyzed at the view point of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of important hardware and software parameters related to the measurement performance and to verify its efficiency, accuracy, and sensing speed, a series of experimental tests were performed with various objects and sensor configurations.
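
    The abstract does not spell out the cost function; the sketch below illustrates, in simplified form, a scanline dynamic-programming formulation whose per-pixel data term fuses intensity and wrapped-phase differences. The weights, the linear smoothness penalty and the phase wrapping are assumptions rather than the authors' exact definition.

    import numpy as np

    def dp_scanline_disparity(int_l, int_r, phase_l, phase_r, d_max,
                              w_int=0.5, w_phase=0.5, smooth=0.1):
        """Disparity along one rectified scanline by dynamic programming."""
        n = len(int_l)
        cost = np.full((n, d_max + 1), np.inf)
        for d in range(d_max + 1):
            xr = np.arange(n) - d
            valid = xr >= 0
            d_i = np.abs(int_l[valid] - int_r[xr[valid]])
            d_phi = np.angle(np.exp(1j * (phase_l[valid] - phase_r[xr[valid]])))  # wrap to [-pi, pi]
            cost[valid, d] = w_int * d_i + w_phase * np.abs(d_phi)

        # Forward pass with a linear penalty on disparity jumps between neighbours.
        acc, back = cost.copy(), np.zeros((n, d_max + 1), dtype=int)
        disps = np.arange(d_max + 1)
        penalty = smooth * np.abs(disps[:, None] - disps[None, :])   # (d, d_prev)
        for x in range(1, n):
            total = acc[x - 1][None, :] + penalty
            back[x] = total.argmin(axis=1)
            acc[x] = cost[x] + total.min(axis=1)

        # Backtrack the minimum-cost disparity path.
        disp = np.zeros(n, dtype=int)
        disp[-1] = int(np.argmin(acc[-1]))
        for x in range(n - 2, -1, -1):
            disp[x] = back[x + 1][disp[x + 1]]
        return disp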

  3. Amblyopia and Binocular Vision

    OpenAIRE

    Birch, Eileen E.

    2012-01-01

    Amblyopia is the most common cause of monocular visual loss in children, affecting 1.3% to 3.6% of children. Current treatments are effective in reducing the visual acuity deficit but many amblyopic individuals are left with residual visual acuity deficits, ocular motor abnormalities, deficient fine motor skills, and risk for recurrent amblyopia. Using a combination of psychophysical, electrophysiological, imaging, risk factor analysis, and fine motor skill assessment, the primary role of bin...

  4. Application of stereo-imaging technology to medical field.

    Science.gov (United States)

    Nam, Kyoung Won; Park, Jeongyun; Kim, In Young; Kim, Kwang Gi

    2012-09-01

    There has been continuous development in the area of stereoscopic medical imaging devices, and many stereoscopic imaging devices have been realized and applied in the medical field. In this article, we review past and current trends pertaining to the application of stereo-imaging technologies in the medical field. We describe the basic principles of stereo vision and visual issues related to it, including visual discomfort, binocular disparities, vergence-accommodation mismatch, and visual fatigue. We also present a brief history of medical applications of stereo-imaging techniques, examples of recently developed stereoscopic medical devices, and patent application trends as they pertain to stereo-imaging medical devices. Three-dimensional (3D) stereo-imaging technology can provide more realistic depth perception to the viewer than conventional two-dimensional imaging technology. Therefore, it allows for a more accurate understanding and analysis of the morphology of an object. Based on these advantages, the significance of stereoscopic imaging in the medical field increases in accordance with the increase in the number of laparoscopic surgeries, and stereo-imaging technology plays a key role in the diagnoses of the detailed morphologies of small biological specimens. The application of 3D stereo-imaging technology to the medical field will help improve surgical accuracy, reduce operation times, and enhance patient safety. Therefore, it is important to develop more enhanced stereoscopic medical devices.

  5. Binocular astronomy

    CERN Document Server

    Tonkin, Stephen

    2014-01-01

    Binoculars have, for many, long been regarded as an “entry level” observational tool, and relatively few have used them as a serious observing instrument. This is changing! Many people appreciate the relative comfort of two-eyed observing, but those who use binoculars come to realize that they offer more than comfort. The view of the stars is more aesthetically pleasing and therefore binocular observers tend to observe more frequently and for longer periods. Binocular Astronomy, 2nd Edition, extends its coverage of small and medium binoculars to large and giant (i.e., up to 300mm aperture) binoculars and also binoviewers, which brings the work into the realm of serious observing instruments. Additionally, it goes far deeper into the varying optical characteristics of binoculars, giving newcomers and advanced astronomers the information needed to make informed choices on purchasing a pair. It also covers relevant aspects of the physiology of binocular (as in “both eyes”) observation. The first edition ...

  6. A study on low-cost, high-accuracy, and real-time stereo vision algorithms for UAV power line inspection

    Science.gov (United States)

    Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue

    2018-04-01

    Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or poor levels of accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that produces an excellent balance between cost, matching accuracy and real-time performance, for power line inspection using UAV. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have a lower level of resource usage and also a higher level of matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms were implemented using the Spartan 6 FPGA. In comparative experiments, it was shown that the system using the improved algorithms outperformed the system based on the unimproved algorithms, in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.

  7. Binocular vision in a virtual world: visual deficits following the wearing of a head-mounted display.

    Science.gov (United States)

    Mon-Williams, M; Wann, J P; Rushton, S

    1993-10-01

    The short-term effects on binocular stability of wearing a conventional head-mounted display (HMD) to explore a virtual reality environment were examined. Twenty adult subjects (aged 19-29 years) wore a commercially available HMD for 10 min while cycling around a computer generated 3-D world. The twin screen presentations were set to suit the average interpupillary distance of our subject population, to mimic the conditions of public access virtual reality systems. Subjects were examined before and after exposure to the HMD and there were clear signs of induced binocular stress for a number of the subjects. The implications of introducing such HMDs into the workplace and entertainment environments are discussed.

  8. Kinder, gentler stereo

    Science.gov (United States)

    Siegel, Mel; Tobinaga, Yoshikazu; Akiya, Takeo

    1999-05-01

    Not only binocular perspective disparity, but also many secondary binocular and monocular sensory phenomena, contribute to the human sensation of depth. Binocular perspective disparity is notable as the strongest depth perception factor. However, means for creating it artificially from flat image pairs are notorious for inducing physical and mental stresses, e.g., 'virtual reality sickness'. Aiming to deliver a less stressful 'kinder gentler stereo (KGS)', we systematically examine the secondary phenomena and their synergistic combination with each other and with binocular perspective disparity. By KGS we mean a stereo capture, rendering, and display paradigm without cue conflicts, without eyewear, without viewing zones, with negligible 'lock-in' time to perceive the image in depth, and with a normal appearance for stereo-deficient viewers. To achieve KGS we employ optical and digital image processing steps that introduce distortions contrary to strict 'geometrical correctness' of binocular perspective but which nevertheless result in increased stereoscopic viewing comfort. We particularly exploit the lower limits of interocular separation, showing that unexpectedly small disparities stimulate accurate and pleasant depth sensations. Under these circumstances crosstalk is perceived as depth-of-focus rather than as ghosting. This suggests the possibility of radically new approaches to stereoview multiplexing that enable zoneless autostereoscopic display.

  9. Bayes filter modification for drivability map estimation with observations from stereo vision

    Science.gov (United States)

    Panchenko, Aleksei; Prun, Viktor; Turchenkov, Dmitri

    2017-02-01

    Reconstruction of a drivability map for a moving vehicle is a well-known research topic in applied robotics. Here creating such a map for an autonomous truck on a generally planar surface containing separate obstacles is considered. The source of measurements for the truck is a calibrated pair of cameras. The stereo system detects and reconstructs several types of objects, such as road borders, other vehicles, pedestrians and general tall objects or highly saturated objects (e.g. road cone). For creating a robust mapping module we use a modification of Bayes filtering, which introduces some novel techniques for occupancy map update step. Specifically, our modified version becomes applicable to the presence of false positive measurement errors, stereo shading and obstacle occlusion. We implemented the technique and achieved real-time 15 FPS computations on an industrial shake proof PC. Our real world experiments show the positive effect of the filtering step.
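
    As background to the occupancy-map update step, the following minimal log-odds Bayes update shows one generic way to budget for false-positive detections and to leave occluded or shaded (unobserved) cells untouched. The sensor-model probabilities are assumed values, and this is not the paper's modified filter.

    import numpy as np

    def logodds_update(grid_logodds, hit_mask, observed_mask,
                       p_occ_given_hit=0.7, p_occ_given_free=0.35, p_prior=0.5):
        """One Bayes-filter update of an occupancy (drivability) grid in log-odds form."""
        def logit(p):
            return np.log(p / (1.0 - p))

        grid = grid_logodds.copy()
        # Detections push cells towards "occupied"; the inverse sensor model is kept
        # well below 1 so that occasional false positives do not saturate the map.
        grid[hit_mask] += logit(p_occ_given_hit) - logit(p_prior)
        # Observed-but-empty cells push towards "free"; unobserved cells are unchanged.
        free_mask = observed_mask & ~hit_mask
        grid[free_mask] += logit(p_occ_given_free) - logit(p_prior)
        return grid   # occupancy probability is 1 / (1 + exp(-grid))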

  10. A Matlab-Based Testbed for Integration, Evaluation and Comparison of Heterogeneous Stereo Vision Matching Algorithms

    Directory of Open Access Journals (Sweden)

    Raul Correal

    2016-11-01

    Full Text Available Stereo matching is a heavily researched area with a prolific published literature and a broad spectrum of heterogeneous algorithms available in diverse programming languages. This paper presents a Matlab-based testbed that aims to centralize and standardize this variety of both current and prospective stereo matching approaches. The proposed testbed aims to facilitate the application of stereo-based methods to real situations. It allows for configuring and executing algorithms, as well as comparing results, in a fast, easy and friendly setting. Algorithms can be combined so that a series of processes can be chained and executed consecutively, using the output of a process as input for the next; some additional filtering and image processing techniques have been included within the testbed for this purpose. A use case is included to illustrate how these processes are sequenced and its effect on the results for real applications. The testbed has been conceived as a collaborative and incremental open-source project, where its code is accessible and modifiable, with the objective of receiving contributions and releasing future versions to include new algorithms and features. It is currently available online for the research community.

  11. P3-4: Binocular Visual Acuity in Exotropia

    Directory of Open Access Journals (Sweden)

    Heekyung Yang

    2012-10-01

    Full Text Available Purpose: To investigate binocular interaction of visual acuity in patients with intermittent exotropia and its relationship with accommodative responses during binocular vision. Methods: Sixty-seven patients with intermittent exotropia of 8 years or older were included. Binocular visual acuity (BVA) and monocular visual acuity (MVA) were measured in sequence. Accommodative responses of both eyes were measured using the WAM-5500 autorefractor/keratometer (GrandSeiko, Fukuyama, Japan) during binocular and monocular viewing conditions at 6 m. Accommodative responses during binocular vision were calculated using the difference between the refractive errors of binocular and monocular vision. Main outcome measures: Binocular interactions of visual acuity were categorized as binocular summation, equivalency, or inhibition. The prevalence of the 3 patterns of binocular interaction was investigated. Accommodative responses were correlated with differences between BVA and better MVA. Results: Most patients (41 patients, 61.2%) showed binocular equivalency. Binocular inhibition and summation were noted in 6 (9.0%) and 20 (29.9%) patients, respectively. Linear regression analysis revealed a significant correlation between binocular interaction and accommodative responses during binocular vision (p < .001). Accommodative responses significantly correlated with the angle of exodeviation at distance (p = .002). Conclusions: In patients with intermittent exotropia, binocular inhibition is associated with increased accommodation and a larger angle of exodeviation, suggesting that accommodative convergence is a mechanism that maintains ocular alignment. Thus, BVA inhibition may be attributed to diminishing fusional control in patients with intermittent exotropia.

  12. A Novel Approach to Calibrating Multifunctional Binocular Stereovision Sensor

    International Nuclear Information System (INIS)

    Xue, T; Zhu, J G; Wu, B; Ye, S H

    2006-01-01

    We present a novel multifunctional binocular stereovision sensor for various three-dimensional (3D) inspection tasks. It not only avoids the so-called correspondence problem of passive stereo vision, but also possesses a uniform mathematical model. We also propose a novel approach to estimating all the sensor parameters with a free-position planar reference object. In this technique, the planar pattern can be moved freely by hand. All the camera intrinsic and extrinsic parameters, together with the coefficients of lens radial and tangential distortion, are estimated, and the sensor parameters are calibrated based on the 3D measurement model and optimized with the feature point constraint algorithm using the same views as in the camera calibration stage. The proposed approach greatly reduces the cost of the calibration equipment, and it is flexible and practical for vision measurement. Experiments show that this method has high precision, with a measured relative error of spatial length better than 0.3%.
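
    For readers who want to reproduce a basic free-position planar calibration of a two-camera sensor, the sketch below uses OpenCV's standard chessboard routines. The file names, board geometry and the final stereoCalibrate call stand in for, and do not reproduce, the 3D-measurement-model optimization with feature point constraints described in the abstract.

    import cv2
    import numpy as np

    pattern, square = (9, 6), 0.025                  # assumed inner-corner grid and square size [m]
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, left_pts, right_pts = [], [], []
    for i in range(12):                              # 12 freely held poses of the planar target
        img_l = cv2.imread(f"left_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
        img_r = cv2.imread(f"right_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
        ok_l, c_l = cv2.findChessboardCorners(img_l, pattern)
        ok_r, c_r = cv2.findChessboardCorners(img_r, pattern)
        if ok_l and ok_r:
            obj_pts.append(objp)
            left_pts.append(c_l)
            right_pts.append(c_r)

    size = img_l.shape[::-1]
    # Per-camera intrinsics (including radial/tangential distortion), then the
    # rotation R and translation T between the two cameras.
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    print("stereo RMS reprojection error:", rms)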

  13. A novel apparatus for testing binocular function using the 'CyberDome' three-dimensional hemispherical visual display system.

    Science.gov (United States)

    Handa, T; Ishikawa, H; Shimizu, K; Kawamura, R; Nakayama, H; Sawada, K

    2009-11-01

    Virtual reality has recently been highlighted as a promising medium for visual presentation and entertainment. A novel apparatus for testing binocular visual function using a hemispherical visual display system, 'CyberDome', has been developed and tested. Subjects comprised 40 volunteers (mean age, 21.63 years) with corrected visual acuity of -0.08 (LogMAR) or better, and stereoacuity better than 100 s of arc on the Titmus stereo test. Subjects were able to experience visual perception like being surrounded by visual images, a feature of the 'CyberDome' hemispherical visual display system. Visual images to the right and left eyes were projected and superimposed on the dome screen, allowing test images to be seen independently by each eye using polarizing glasses. The hemispherical visual display was 1.4 m in diameter. Three test parameters were evaluated: simultaneous perception (subjective angle of strabismus), motor fusion amplitude (convergence and divergence), and stereopsis (binocular disparity at 1260, 840, and 420 s of arc). Testing was performed in volunteer subjects with normal binocular vision, and results were compared with those using a major amblyoscope. Subjective angle of strabismus and motor fusion amplitude showed a significant correlation between our test and the major amblyoscope. All subjects could perceive the stereoscopic target with a binocular disparity of 480 s of arc. Our novel apparatus using the CyberDome, a hemispherical visual display system, was able to quantitatively evaluate binocular function. This apparatus offers clinical promise in the evaluation of binocular function.

  14. Interactive stereo games to improve vision in children with amblyopia using dichoptic stimulation

    Science.gov (United States)

    Herbison, Nicola; Ash, Isabel M.; MacKeith, Daisy; Vivian, Anthony; Purdy, Jonathan H.; Fakis, Apostolos; Cobb, Sue V.; Hepburn, Trish; Eastgate, Richard M.; Gregson, Richard M.; Foss, Alexander J. E.

    2015-03-01

    Amblyopia is a common condition affecting 2% of all children and traditional treatment consists of either wearing a patch or penalisation. We have developed a treatment using stereo technology, not to provide a 3D image but to allow dichoptic stimulation. This involves presenting an image with the same background to both eyes but with features of interest removed from the image presented to the normal eye with the aim to preferentially stimulated visual development in the amblyopic, or lazy, eye. Our system, called I-BiT can use either a game or a video (DVD) source as input. Pilot studies show that this treatment is effective with short treatment times and has proceeded to randomised controlled clinical trial. The early indications are that the treatment has a high degree of acceptability and corresponding good compliance.

  15. A phase-based stereo vision system-on-a-chip.

    Science.gov (United States)

    Díaz, Javier; Ros, Eduardo; Sabatini, Silvio P; Solari, Fabio; Mota, Sonia

    2007-02-01

    A simple and fast technique for depth estimation based on phase measurement has been adopted for the implementation of a real-time stereo system with sub-pixel resolution on an FPGA device. The technique avoids the attendant problem of phase wrapping. The designed system takes full advantage of the inherent processing parallelism and segmentation capabilities of FPGA devices to achieve a computation speed of 65 megapixels/s, which can be arranged with a customized frame-grabber module to process 211 frames/s at a size of 640x480 pixels. The processing speed achieved is higher than conventional camera frame rates, thus allowing the system to extract multiple estimations and be used as a platform to evaluate integration schemes of a population of neurons without increasing hardware resource demands.
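
    The phase-based principle behind the design can be illustrated in a few lines of software: filter corresponding scanlines with a complex Gabor kernel and convert the local phase difference into a pixel shift via d = delta_phi / (2*pi*f0). The sketch below is an illustration only, not the FPGA architecture; the filter parameters and sign convention are assumptions.

    import numpy as np

    def phase_disparity(row_left, row_right, wavelength_px=16.0, sigma=8.0):
        """Disparity along one rectified scanline from local phase differences."""
        f0 = 1.0 / wavelength_px
        x = np.arange(-4 * int(sigma), 4 * int(sigma) + 1)
        gabor = np.exp(-x ** 2 / (2 * sigma ** 2)) * np.exp(2j * np.pi * f0 * x)
        rl = np.convolve(row_left, gabor, mode="same")
        rr = np.convolve(row_right, gabor, mode="same")
        dphi = np.angle(rl * np.conj(rr))          # phase difference, wrapped to [-pi, pi]
        return -dphi / (2 * np.pi * f0)            # shift in pixels (|d| < wavelength/2)

    # Example: a 1-D texture shifted by 3 px should give a disparity of about 3.
    rng = np.random.default_rng(1)
    left = rng.normal(size=512)
    right = np.roll(left, -3)
    print(np.median(phase_disparity(left, right)[50:-50]))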

  16. Capturing age-related changes in functional contrast sensitivity with decreasing light levels in monocular and binocular vision.

    Science.gov (United States)

    Gillespie-Gallery, Hanna; Konstantakopoulou, Evgenia; Harlow, Jonathan A; Barbur, John L

    2013-09-09

    It is challenging to separate the effects of normal aging of the retina and visual pathways independently from optical factors, decreased retinal illuminance, and early stage disease. This study determined limits to describe the effect of light level on normal, age-related changes in monocular and binocular functional contrast sensitivity. We recruited 95 participants aged 20 to 85 years. Contrast thresholds for correct orientation discrimination of the gap in a Landolt C optotype were measured using a 4-alternative, forced-choice (4AFC) procedure at screen luminances from 34 to 0.12 cd/m(2) at the fovea and parafovea (0° and ±4°). Pupil size was measured continuously. The Health of the Retina index (HRindex) was computed to capture the loss of contrast sensitivity with decreasing light level. Participants were excluded if they exhibited performance outside the normal limits of interocular differences or HRindex values, or signs of ocular disease. Parafoveal contrast thresholds showed a steeper decline and higher correlation with age at the parafovea than the fovea. Of participants with clinical signs of ocular disease, 83% had HRindex values outside the normal limits. Binocular summation of contrast signals declined with age, independent of interocular differences. The HRindex worsens more rapidly with age at the parafovea, consistent with histologic findings of rod loss and its link to age-related degenerative disease of the retina. The HRindex and interocular differences could be used to screen for and separate the earliest stages of subclinical disease from changes caused by normal aging.

  17. Comparison on testability of visual acuity, stereo acuity and colour vision tests between children with learning disabilities and children without learning disabilities in government primary schools.

    Science.gov (United States)

    Abu Bakar, Nurul Farhana; Chen, Ai-Hong

    2014-02-01

    Children with learning disabilities might have difficulties to communicate effectively and give reliable responses as required in various visual function testing procedures. The purpose of this study was to compare the testability of visual acuity using the modified Early Treatment Diabetic Retinopathy Study (ETDRS) and Cambridge Crowding Cards, stereo acuity using Lang Stereo test II and Butterfly stereo tests and colour perception using Colour Vision Test Made Easy (CVTME) and Ishihara's Test for Colour Deficiency (Ishihara Test) between children in mainstream classes and children with learning disabilities in special education classes in government primary schools. A total of 100 primary school children (50 children from mainstream classes and 50 children from special education classes) matched in age were recruited in this cross-sectional comparative study. The testability was determined by the percentage of children who were able to give reliable respond as required by the respective tests. 'Unable to test' was defined as inappropriate response or uncooperative despite best efforts of the screener. The testability of the modified ETDRS, Butterfly stereo test and Ishihara test for respective visual function tests were found lower among children in special education classes ( P learning disabilities. Modifications of vision testing procedures are essential for children with learning disabilities.

  18. Using Stereo Vision to Support the Automated Analysis of Surveillance Videos

    Science.gov (United States)

    Menze, M.; Muhle, D.

    2012-07-01

    Video surveillance systems are no longer a collection of independent cameras, manually controlled by human operators. Instead, smart sensor networks are developed, able to fulfil certain tasks on their own and thus supporting security personnel by automated analyses. One well-known task is the derivation of people's positions on a given ground plane from monocular video footage. An improved accuracy for the ground position as well as a more detailed representation of single salient people can be expected from a stereoscopic processing of overlapping views. Related work mostly relies on dedicated stereo devices or camera pairs with a small baseline. While this set-up is helpful for the essential step of image matching, the high accuracy potential of a wide baseline and the according good intersection geometry is not utilised. In this paper we present a stereoscopic approach, working on overlapping views of standard pan-tilt-zoom cameras which can easily be generated for arbitrary points of interest by an appropriate reconfiguration of parts of a sensor network. Experiments are conducted on realistic surveillance footage to show the potential of the suggested approach and to investigate the influence of different baselines on the quality of the derived surface model. Promising estimations of people's position and height are retrieved. Although standard matching approaches show helpful results, future work will incorporate temporal dependencies available from image sequences in order to reduce computational effort and improve the derived level of detail.
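
    Once two overlapping PTZ views with known projection matrices are available, a person's ground position and height follow from standard two-view triangulation. The sketch below illustrates that step with OpenCV; the projection matrices and the head/foot image points are assumed inputs, and this is not the authors' full pipeline.

    import cv2
    import numpy as np

    def person_position_and_height(P1, P2, head_1, head_2, foot_1, foot_2):
        """Triangulate head and foot points seen in two calibrated, overlapping views."""
        def triangulate(x1, x2):
            pts1 = np.asarray(x1, dtype=float).reshape(2, 1)
            pts2 = np.asarray(x2, dtype=float).reshape(2, 1)
            X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4x1 homogeneous point
            return (X_h[:3] / X_h[3]).ravel()

        head, foot = triangulate(head_1, head_2), triangulate(foot_1, foot_2)
        return foot, np.linalg.norm(head - foot)   # ground position and height [scene units]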

  19. USING STEREO VISION TO SUPPORT THE AUTOMATED ANALYSIS OF SURVEILLANCE VIDEOS

    Directory of Open Access Journals (Sweden)

    M. Menze

    2012-07-01

    Full Text Available Video surveillance systems are no longer a collection of independent cameras, manually controlled by human operators. Instead, smart sensor networks are developed, able to fulfil certain tasks on their own and thus supporting security personnel by automated analyses. One well-known task is the derivation of people’s positions on a given ground plane from monocular video footage. An improved accuracy for the ground position as well as a more detailed representation of single salient people can be expected from a stereoscopic processing of overlapping views. Related work mostly relies on dedicated stereo devices or camera pairs with a small baseline. While this set-up is helpful for the essential step of image matching, the high accuracy potential of a wide baseline and the according good intersection geometry is not utilised. In this paper we present a stereoscopic approach, working on overlapping views of standard pan-tilt-zoom cameras which can easily be generated for arbitrary points of interest by an appropriate reconfiguration of parts of a sensor network. Experiments are conducted on realistic surveillance footage to show the potential of the suggested approach and to investigate the influence of different baselines on the quality of the derived surface model. Promising estimations of people’s position and height are retrieved. Although standard matching approaches show helpful results, future work will incorporate temporal dependencies available from image sequences in order to reduce computational effort and improve the derived level of detail.

  20. Acceleration of stereo-matching on multi-core CPU and GPU

    OpenAIRE

    Tian, Xu; Cockshott, Paul; Oehler, Susanne

    2014-01-01

    This paper presents an accelerated version of a dense stereo-correspondence algorithm for two different parallelism-enabled architectures, multi-core CPU and GPU. The algorithm is part of the vision system developed for a binocular robot-head in the context of the CloPeMa research project. This research project focuses on the conception of a new clothes folding robot with real-time and high resolution requirements for the vision system. The performance analysis shows th...

  1. Quality inspection guided laser processing of irregular shape objects by stereo vision measurement: application in badminton shuttle manufacturing

    Science.gov (United States)

    Qi, Li; Wang, Shun; Zhang, Yixin; Sun, Yingying; Zhang, Xuping

    2015-11-01

    The quality inspection process is usually carried out after first processing of the raw materials such as cutting and milling. This is because the parts of the materials to be used are unidentified until they have been trimmed. If the quality of the material is assessed before the laser process, then the energy and effort wasted on defective materials can be saved. We propose a new production scheme that can achieve quantitative quality inspection prior to primitive laser cutting by means of three-dimensional (3-D) vision measurement. First, the 3-D model of the object is reconstructed by the stereo cameras, from which the spatial cutting path is derived. Second, collaborating with another rear camera, the 3-D cutting path is reprojected to both the frontal and rear views of the object and thus generates the regions-of-interest (ROIs) for surface defect analysis. An accurate vision-guided laser process and reprojection-based ROI segmentation are enabled by a global-optimization-based trinocular calibration method. The prototype system was built and tested with the processing of raw duck feathers for high-quality badminton shuttle manufacture. Incorporating a two-dimensional wavelet-decomposition-based defect analysis algorithm, both the geometrical and appearance features of the raw feathers are quantified before they are cut into small patches, which results in fully automatic feather cutting and sorting.

  2. Real-time stereo generation for surgical vision during minimal invasive robotic surgery

    Science.gov (United States)

    Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod

    2016-03-01

    This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of in-vivo live surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance the image quality and equalize the color profiles of the two images. Polarized projection with interlacing of the two images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time with good speed at full HD resolution.

  3. The zone of comfort: Predicting visual discomfort with stereo displays

    Science.gov (United States)

    Shibata, Takashi; Kim, Joohwan; Hoffman, David M.; Banks, Martin S.

    2012-01-01

    Recent increased usage of stereo displays has been accompanied by public concern about potential adverse effects associated with prolonged viewing of stereo imagery. There are numerous potential sources of adverse effects, but we focused on how vergence–accommodation conflicts in stereo displays affect visual discomfort and fatigue. In one experiment, we examined the effect of viewing distance on discomfort and fatigue. We found that conflicts of a given dioptric value were slightly less comfortable at far than at near distance. In a second experiment, we examined the effect of the sign of the vergence–accommodation conflict on discomfort and fatigue. We found that negative conflicts (stereo content behind the screen) are less comfortable at far distances and that positive conflicts (content in front of screen) are less comfortable at near distances. In a third experiment, we measured phoria and the zone of clear single binocular vision, which are clinical measurements commonly associated with correcting refractive error. Those measurements predicted susceptibility to discomfort in the first two experiments. We discuss the relevance of these findings for a wide variety of situations including the viewing of mobile devices, desktop displays, television, and cinema. PMID:21778252

  4. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    Science.gov (United States)

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the position of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors exist between the actual position and the calculated 3D position of the end-effector. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system combining two CCDs serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to

  5. Does partial occlusion promote normal binocular function?

    Science.gov (United States)

    Li, Jingrong; Thompson, Benjamin; Ding, Zhaofeng; Chan, Lily Y L; Chen, Xiang; Yu, Minbin; Deng, Daming; Hess, Robert F

    2012-10-03

    There is growing evidence that abnormal binocular interactions play a key role in the amblyopia syndrome and represent a viable target for treatment interventions. In this context, the use of partial occlusion with optical devices such as Bangerter filters as an alternative to complete occlusion is of particular interest. The aims of this study were to understand why Bangerter filters do not result in improved binocular outcomes compared to complete occlusion, and to compare the effects of Bangerter filters, optical blur and neutral density (ND) filters on normal binocular function. The effects of four strengths of Bangerter filters (0.8, 0.6, 0.4, 0.2) on letter and vernier acuity, contrast sensitivity, stereoacuity, and interocular suppression were measured in 21 observers with normal vision. In a subset of 14 observers, the partial occlusion effects of Bangerter filters, ND filters and plus lenses on stereopsis and interocular suppression were compared. Bangerter filters did not have a graded effect on vision and induced significant disruption to binocular function. This disruption was greater than that of monocular defocus but weaker than that of ND filters. The effect of the Bangerter filters on stereopsis was more pronounced than their effect on monocular acuity, and the induced monocular acuity deficits did not predict the induced deficits in stereopsis. Bangerter filters appear to be particularly disruptive to binocular function. Other interventions, such as optical defocus and those employing computer-generated dichoptic stimulus presentation, may be more appropriate than partial occlusion for targeting binocular function during amblyopia treatment.

  6. Approaches for Stereo Matching

    Directory of Open Access Journals (Sweden)

    Takouhi Ozanian

    1995-04-01

    Full Text Available This review focuses on the last decade's development of computational stereopsis for recovering three-dimensional information. The main components of stereo analysis are presented: image acquisition and camera modeling, feature selection, feature matching and disparity interpretation. A brief survey is given of the well-known feature selection approaches, and the estimation parameters for this selection are mentioned. The difficulties in identifying corresponding locations in the two images are explained. Methods as to how to effectively constrain the search for the correct solution of the correspondence problem are discussed, as are strategies for the whole matching process. Reasons for the occurrence of matching errors are considered. Some recently proposed approaches, employing new ideas in the modeling of stereo matching in terms of energy minimization, are described. Acknowledging the importance of computation time for real-time applications, special attention is paid to parallelism as a way to achieve the required level of performance. The development of trinocular stereo analysis as an alternative to the conventional binocular one is described. Finally, a classification based on the test images used for verification of stereo matching algorithms is supplied.
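
    As a concrete point of reference for the local, correlation-style methods surveyed above, the following sketch implements plain SAD block matching with a winner-takes-all disparity choice; the parameter values are arbitrary examples.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def sad_block_matching(left, right, d_max=32, radius=3):
        """Winner-takes-all SAD block matching on a rectified grayscale pair (float arrays)."""
        h, w = left.shape
        best_cost = np.full((h, w), np.inf)
        disparity = np.zeros((h, w), dtype=np.int32)
        for d in range(d_max + 1):
            shifted = np.zeros_like(right)
            shifted[:, d:] = right[:, :w - d]                        # candidate match at disparity d
            cost = uniform_filter(np.abs(left - shifted), size=2 * radius + 1)
            better = cost < best_cost
            disparity[better] = d
            best_cost[better] = cost[better]
        return disparity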

  7. Ground Stereo Vision-Based Navigation for Autonomous Take-off and Landing of UAVs: A Chan-Vese Model Approach

    Directory of Open Access Journals (Sweden)

    Dengqing Tang

    2016-04-01

    Full Text Available This article aims at flying target detection and localization for fixed-wing unmanned aerial vehicle (UAV) autonomous take-off and landing within Global Navigation Satellite System (GNSS)-denied environments. A Chan-Vese model–based approach is proposed and developed for ground stereo vision detection. An Extended Kalman Filter (EKF) is fused into state estimation to reduce the localization inaccuracy caused by measurement errors of object detection and Pan-Tilt unit (PTU) attitudes. Furthermore, region-of-interest (ROI) setting up is conducted to improve the real-time capability. The present work contributes real-time, accurate and robust features, compared with our previous works. Both offline and online experimental results validate the effectiveness and better performance of the proposed method against the traditional triangulation-based localization algorithm.

  8. Measurement of the geometric parameters of power contact wire based on binocular stereovision

    Science.gov (United States)

    Pan, Xue-Tao; Zhang, Ya-feng; Meng, Fei

    2010-10-01

    In the electrified railway power supply system, the electric locomotive obtains power from the catenary wire through the pantograph. Under the action of the pantograph, combined with various factors such as vibration, contact current, relative sliding speed and load, the contact wire suffers both mechanical and electrical wear. Thus, in electrified railway construction and daily operation, geometric parameters such as wire height, pull value and the width of the wear surface must be detected in real time and without contact; this both guarantees the safe operation of electric railways and extends the life of the wire, reducing operating costs. Based on the characteristics of the worn wire's image signal, binocular stereo vision technology was applied to the measurement of contact wire geometric parameters: a mathematical measurement model was derived, and the boundaries of the worn area were extracted by a sub-pixel edge detection method based on the LoG operator combined with least-squares fitting, thus realizing measurement of the wire geometric parameters. The principles were demonstrated through simulation experiments, and the results show that the method presented in this paper is close to or superior to traditional measurements in accuracy, efficiency and convenience, laying a good foundation for the development of a binocular-vision-based measurement system for contact wire geometric parameters.
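
    The LoG-based sub-pixel edge step can be illustrated as follows: filter a 1-D intensity profile taken across the wire with a Laplacian-of-Gaussian and interpolate its zero crossings. This is a generic illustration, not the paper's implementation; the smoothing scale is an assumption.

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def edges_subpixel(profile, sigma=2.0):
        """Sub-pixel edge positions along a 1-D intensity profile via LoG zero crossings."""
        log = gaussian_laplace(np.asarray(profile, dtype=float), sigma)
        idx = np.nonzero(np.sign(log[:-1]) * np.sign(log[1:]) < 0)[0]   # bracketing samples
        # Linear interpolation between the two samples around each zero crossing.
        return idx + log[idx] / (log[idx] - log[idx + 1])

    # The width of the wear surface then follows from the distance between the two
    # edges, and a least-squares line fit (e.g. np.polyfit) along the wire smooths
    # the per-column measurements.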

  9. The combined influence of binocular disparity and shading on pictorial shape

    NARCIS (Netherlands)

    Doorschot, P. C A; Kappers, A. M L; Koenderink, Jan J.

    The combined influence of binocular disparity and shading on pictorial shape was studied. Stimuli were several pairs of stereo photographs of real objects. The stereo base was 0, 7, or 14 cm, and the location of the light source was varied over three positions (one from about the viewpoint of the

  10. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    Science.gov (United States)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building the fine 3D model from outdoor to indoor is becoming a necessity for protecting cultural tourism resources. However, the existing 3D modelling technologies mainly focus on outdoor areas. Actually, a 3D model should contain detailed descriptions of both its appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which can provide a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes of an object or a scene, as well as physical property information such as the materials and textures. On the basis of this information, a 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture existing in two typical cultural tourism zones, that is, Tibetan and Qiang ethnic minority villages in Sichuan Jiuzhaigou Scenic Area and Tujia ethnic minority villages in Hubei Shennongjia Nature Reserve, providing a new method and platform for protection of minority cultural characteristics, 3D reconstruction and cultural tourism.

  11. A parallel stereo reconstruction algorithm with applications in entomology (APSRA)

    Science.gov (United States)

    Bhasin, Rajesh; Jang, Won Jun; Hart, John C.

    2012-03-01

    We propose a fast parallel algorithm for the reconstruction of 3-Dimensional point clouds of insects from binocular stereo image pairs using a hierarchical approach for disparity estimation. Entomologists study various features of insects to classify them, build their distribution maps, and discover genetic links between specimens among various other essential tasks. This information is important to the pesticide and the pharmaceutical industries among others. When considering the large collections of insects entomologists analyze, it becomes difficult to physically handle the entire collection and share the data with researchers across the world. With the method presented in our work, Entomologists can create an image database for their collections and use the 3D models for studying the shape and structure of the insects thus making it easier to maintain and share. Initial feedback shows that the reconstructed 3D models preserve the shape and size of the specimen. We further optimize our results to incorporate multiview stereo which produces better overall structure of the insects. Our main contribution is applying stereoscopic vision techniques to entomology to solve the problems faced by entomologists.

  12. [Binocular coordination during reading].

    Science.gov (United States)

    Bassou, L; Granié, M; Pugh, A K; Morucci, J P

    1992-01-01

    Is there an effect on binocular coordination during reading of oculomotor imbalance (heterophoria, strabismus and inadequate convergence) and of functional lateral characteristics (eye preference and perceptually privileged visual laterality)? Recordings of the binocular eye-movements of ten-year-old children show that oculomotor imbalances occur most often among children whose left visual perceptual channel is privileged, and that these subjects can present optomotor dissociation and manifest lack of motor coordination. Close binocular motor coordination is far from being the norm in reading. The faster reader displays saccades of differing spatial amplitude and the slower reader an oculomotor hyperactivity, especially during fixations. The recording of binocular movements in reading appears to be an excellent means of diagnosing difficulties related to visual laterality and to problems associated with oculomotor imbalance.

  13. Amblyopia and the binocular approach to its therapy.

    Science.gov (United States)

    Hess, Robert F; Thompson, Benjamin

    2015-09-01

    There is growing evidence that abnormal binocular interactions play a key role in amblyopia. In particular, stronger suppression of the amblyopic eye has been associated with poorer amblyopic eye visual acuity and a new therapy has been described that directly targets binocular function and has been found to improve both monocular and binocular vision in adults and children with amblyopia. Furthermore, non-invasive brain stimulation techniques that alter excitation and inhibition within the visual cortex have been shown to improve vision in the amblyopic eye. The aim of this review is to summarize this previous work and interpret the therapeutic effects of binocular therapy and non-invasive brain stimulation in the context of three potential neural mechanisms; active inhibition of signals from the amblyopic eye, attenuation of information from the amblyopic eye and metaplasticity of synaptic long term potentiation and long term depression. Copyright © 2015. Published by Elsevier Ltd.

  14. Real 3D increases perceived depth over anaglyphs but does not cancel stereo-anomaly

    NARCIS (Netherlands)

    Kooi, F.L.; Dekker, D.; Ee, R. van; Brouwer, A.-M.

    2010-01-01

    Background: About 30% of the population has difficulties detecting the sign and the magnitude of binocular disparity in the absence of eye movements, a phenomenon called stereo-anomaly. The stereo-anomaly tests so far are based on disparity only (e.g. red-green stereograms), which means that other

  15. Emotion and Interhemispheric Interactions in Binocular Rivalry

    Directory of Open Access Journals (Sweden)

    K L Ritchie

    2013-10-01

    Full Text Available Previous research has shown that fear-related stimuli presented in peripheral vision are preferentially processed over stimuli depicting other emotions. Furthermore, emotional content can influence dominance duration in binocular rivalry, with the period of dominance for an emotional image (e.g. a fearful face) being significantly longer than for a neutral image (e.g. a neutral face or a house). Experiment 1 of the current study combined these two ideas to investigate the role of emotion in binocular rivalry with face/house pairs viewed in the periphery. The results showed that faces were perceived as more dominant than houses, and fearful faces more so than neutral faces, even when viewed in the periphery. Experiment 2 extended this paradigm to present a rival pair in the periphery in each hemifield, with each eye either viewing the same stimulus in each location (traditional condition) or a different stimulus in each location (Diaz-Caneja condition). The results showed that the two pairs tended to rival in synchrony only in the traditional condition. Taken together, the results show that face dominance and emotion dominance in binocular rivalry persist in the periphery, and that interhemispheric interactions in binocular rivalry depend on an eye- as opposed to an object-based mechanism.

  16. Evaluation and development of a novel binocular treatment (I-BiT™) system using video clips and interactive games to improve vision in children with amblyopia ('lazy eye'): study protocol for a randomised controlled trial.

    Science.gov (United States)

    Foss, Alexander J; Gregson, Richard M; MacKeith, Daisy; Herbison, Nicola; Ash, Isabel M; Cobb, Sue V; Eastgate, Richard M; Hepburn, Trish; Vivian, Anthony; Moore, Diane; Haworth, Stephen M

    2013-05-20

    Amblyopia (lazy eye) affects the vision of approximately 2% of all children. Traditional treatment consists of wearing a patch over their 'good' eye for a number of hours daily, over several months. This treatment is unpopular and compliance is often low. Therefore results can be poor. A novel binocular treatment which uses 3D technology to present specially developed computer games and video footage (I-BiT™) has been studied in a small group of patients and has shown positive results over a short period of time. The system is therefore now being examined in a randomised clinical trial. Seventy-five patients aged between 4 and 8 years with a diagnosis of amblyopia will be randomised to one of three treatments with a ratio of 1:1:1 - I-BiT™ game, non-I-BiT™ game, and I-BiT™ DVD. They will be treated for 30 minutes once weekly for 6 weeks. Their visual acuity will be assessed independently at baseline, mid-treatment (week 3), at the end of treatment (week 6) and 4 weeks after completing treatment (week 10). The primary endpoint will be the change in visual acuity from baseline to the end of treatment. Secondary endpoints will be additional visual acuity measures, patient acceptability, compliance and the incidence of adverse events. This is the first randomised controlled trial using the I-BiT™ system. The results will determine if the I-BiT™ system is effective in the treatment of amblyopia and will also determine the optimal treatment for future development. ClinicalTrials.gov identifier: NCT01702727.

  17. Early Studies of Binocular and Binaural Directions

    Directory of Open Access Journals (Sweden)

    Nicholas J. Wade

    2018-03-01

    Full Text Available Understanding how the eyes work together to determine the direction of objects provided the impetus for examining integration of signals from the ears to locate sounds. However, the advantages of having two eyes were recorded long before those for two ears were appreciated. In part, this reflects the marked differences in how we can compare perception with one or two organs. It is easier to close one eye and examine monocular vision than to “close” one ear and study monaural hearing. Moreover, we can move our eyes either in the same or in opposite directions, but humans have no equivalent means of moving the ears in unison. Studies of binocular single vision can be traced back over two thousand years and they were implicitly concerned with visual directions from each eye. The location of any point in visual or auditory space can be described by specifying its direction and distance, from the vantage point of an observer. From the late 18th century experiments indicated that binocular direction involved an eye movement component and experimental studies of binaural direction commenced slightly later. However, these early binocular and binaural experiments were not incorporated into theoretical accounts until almost a century later. The early history of research on visual direction with two eyes is contrasted to that on auditory direction with two ears.

  18. First Peruvian binoculars

    Science.gov (United States)

    Baldwin, Guillermo; Gonzales, Franco; Pérez S., Carlos

    2017-11-01

    In Peru, as in almost all of Latin America, the precision optics industry is almost nonexistent. One reason is the scarcity of human and technological resources. A few years ago, however, a master's and diploma university program in optical engineering was started at our university, the Pontificia Universidad Católica del Perú (PUCP) in Lima, and a precision-optics workshop was also set up. Some students were trained at CIO in Leon, Mexico. To motivate optical business startups in Peru, we planned to demonstrate the possibilities of optical device fabrication by building prototypes. We started with a small reflective telescope for Moon observation, where mirror and ocular polishing and opto-mechanics had priority; aluminum evaporation was included. We have now taken a further step by developing a binocular which, to our knowledge, had never before been made in Peru. This work covers the geometric optics and opto-mechanical design, the ocular manufacturing, and the characterization of an 8x35 binocular for amateur observation.

  19. Perceptual Relearning of Binocular Fusion and Stereoacuity After Brain Injury.

    Science.gov (United States)

    Schaadt, Anna-Katharina; Schmidt, Lena; Reinhart, Stefan; Adams, Michaela; Garbacenkaite, Ruta; Leonhardt, Eva; Kuhn, Caroline; Kerkhoff, Georg

    2014-06-01

    Brain lesions may disturb binocular fusion and stereopsis, leading to blurred vision, diplopia, and reduced binocular depth perception for which no evaluated treatment is currently available. Objective: The study evaluated the effects of a novel binocular vision treatment designed to improve convergent fusional amplitude and stereoacuity in patients with stroke or traumatic brain injury (TBI). Methods: Patients (20 in all: 11 with stroke, 9 with TBI) were tested in fusional convergence, stereoacuity, near/far visual acuity, accommodation, and subjective binocular reading time until diplopia emerged at 6 different time points. All participants were treated in a single subject baseline design, with 3 baseline assessments before treatment (pretherapy), an assessment immediately after a 6-week treatment period (posttherapy), and 2 follow-up tests 3 and 6 months after treatment. Patients received a novel fusion and dichoptic training using 3 different devices to slowly increase fusional and disparity angles. Results: At pretherapy, the stroke and TBI groups showed severe impairments in convergent fusional range, stereoacuity, subjective reading duration, and partially in accommodation (only TBI group). After treatment, both groups showed considerable improvements in all these variables as well as slightly increased near visual acuity. No significant changes were observed during the pretherapy and follow-up periods, ruling out spontaneous recovery and demonstrating long-term stability of binocular treatment effects. Conclusions: This proof-of-principle study indicates a substantial treatment-induced plasticity of the lesioned brain in the relearning of binocular fusion and stereovision, thus providing new, effective rehabilitation strategies to treat binocular vision deficits resulting from permanent visual cortical damage. © The Author(s) 2013.

  20. Change in vision, visual disability, and health after cataract surgery.

    Science.gov (United States)

    Helbostad, Jorunn L; Oedegaard, Maria; Lamb, Sarah E; Delbaere, Kim; Lord, Stephen R; Sletvold, Olav

    2013-04-01

    Cataract surgery improves vision and visual functioning; the effect on general health is not established. We investigated if vision, visual functioning, and general health follow the same trajectory of change the year after cataract surgery and if changes in vision explain changes in visual disability and general health. One-hundred forty-eight persons, with a mean (SD) age of 78.9 (5.0) years (70% bilateral surgery), were assessed before and 6 weeks and 12 months after surgery. Visual disability and general health were assessed by the CatQuest-9SF and the Short Form-36. Corrected binocular visual acuity, visual field, stereo acuity, and contrast vision improved (P visual acuity evident up to 12 months (P = 0.034). Cataract surgery had an effect on visual disability 1 year later (P visual disability and general health 6 weeks after surgery. Vision improved and visual disability decreased in the year after surgery, whereas changes in general health and visual functioning were short-term effects. Lack of associations between changes in vision and self-reported disability and general health suggests that the degree of vision changes and self-reported health do not have a linear relationship.

  1. A new form of rapid binocular plasticity in adult with amblyopia

    OpenAIRE

    Zhou, Jiawei; Thompson, Benjamin; Hess, Robert F.

    2013-01-01

    Amblyopia is a neurological disorder of binocular vision affecting up to 3% of the population resulting from a disrupted period of early visual development. Recently, it has been shown that vision can be partially restored by intensive monocular or dichoptic training (4-6 weeks). This can occur even in adults owing to a residual degree of brain plasticity initiated by repetitive and successive sensory stimulation. Here we show that the binocular imbalance that characterizes amblyopia can be r...

  2. A new form of rapid binocular plasticity in adult with amblyopia.

    Science.gov (United States)

    Zhou, Jiawei; Thompson, Benjamin; Hess, Robert F

    2013-01-01

    Amblyopia is a neurological disorder of binocular vision affecting up to 3% of the population resulting from a disrupted period of early visual development. Recently, it has been shown that vision can be partially restored by intensive monocular or dichoptic training (4-6 weeks). This can occur even in adults owing to a residual degree of brain plasticity initiated by repetitive and successive sensory stimulation. Here we show that the binocular imbalance that characterizes amblyopia can be reduced by occluding the amblyopic eye with a translucent patch for as little as 2.5 hours, suggesting a degree of rapid binocular plasticity in adults resulting from a lack of sensory stimulation. The integrated binocular benefit is larger in our amblyopic group than in our normal control group. We propose that this rapid improvement in function, as a result of reduced sensory stimulation, represents a new form of plasticity operating at a binocular site.

  3. A binocular approach to treating amblyopia: antisuppression therapy.

    Science.gov (United States)

    Hess, Robert F; Mansouri, Behzad; Thompson, Benjamin

    2010-09-01

    We developed a binocular treatment for amblyopia based on antisuppression therapy. A novel procedure is outlined for measuring the extent to which the fixing eye suppresses the fellow amblyopic eye. We hypothesize that suppression renders a structurally binocular system, functionally monocular. We demonstrate using three strabismic amblyopes that information can be combined normally between their eyes under viewing conditions where suppression is reduced. Also, we show that prolonged periods of viewing (under the artificial conditions of stimuli of different contrast in each eye) during which information from the two eyes is combined leads to a strengthening of binocular vision in such cases and eventual combination of binocular information under natural viewing conditions (stimuli of the same contrast in each eye). Concomitant improvement in monocular acuity of the amblyopic eye occurs with this reduction in suppression and strengthening of binocular fusion. Furthermore, in each of the three cases, stereoscopic function is established. This provides the basis for a new treatment of amblyopia, one that is purely binocular and aimed at reducing suppression as a first step.

  4. [Binocular functions in amblyopia and strabismus].

    Science.gov (United States)

    Awaya, S; Sato, M; Tsuzuki, K; Takara, T; Hiraiwa, S; Ota, K; Arai, M; Yoshida, M; Miyake, Y; Terasaki, H; Horiguchi, M; Hirano, K; Hirose, H; Uno, Y; Suzuki, Y; Iwata, M; Takai, Y; Maeda, M; Hisano, S; Kawakita, T; Omura, T; Ota, Y; Kondo, N; Takashi, A; Kawakami, O

    1997-12-01

    Regarding the changing trends in the concept, definition, etiological classification, and criteria for diagnosis of amblyopia, we reviewed a total of 4,693 cases of amblyopia seen during the past 37 years. The amblyopia was divided into four types: strabismic, anisometropic, ametropic, and form vision deprivative. There was a definite trend for the incidence to decrease and for the diagnosis to be made during earlier age in recent years. Although favorable recovery of visual acuity is obtained after treatment of amblyopia and strabismus, there are difficulties in obtaining good binocular functions in early-onset amblyopia and strabismus. This feature was evaluated in regard to motion perception asymmetry (MPA) and binocular depth from motion (DFM). Many cases of early-onset amblyopia and strabismus showed no disparity stereopsis, or position stereopsis, in spite of the presence of DFM. The MPA appeared to be closely related to early-onset esotropia regardless of age, while it disappeared and motion perception became symmetric 4 to 5 months after birth in normal infants. The DFM seemed to play an important role in maintaining good motor alignment for several years after surgery. I developed a checkerboard pattern stimulator in 1978. This method proved to be useful in developing binocular functions and motor alignment by applying simultaneous bifoveolar stimulation and anti-suppression. Extensive exposure to the stimulation was essential for therapeutic success.

  5. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera.

    Science.gov (United States)

    Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa

    2016-08-08

    We developed multiocular 1/3-inch 2.75-μm-pixel-size 2.1M-pixel image sensors by co-design of both an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field: the on-chip beam-splitter horizontally divides rays according to incident angle, and the inner meta-micro-lens collects the divided rays into pixels with small optical loss. Crosstalk between adjacent light field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. By selecting two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.

  6. A binocular iPad treatment for amblyopic children.

    Science.gov (United States)

    Li, S L; Jost, R M; Morale, S E; Stager, D R; Dao, L; Stager, D; Birch, E E

    2014-10-01

    Monocular amblyopia treatment (patching or penalization) does not always result in 6/6 vision and amblyopia often recurs. As amblyopia arises from abnormal binocular visual experience, we evaluated the effectiveness of a novel home-based binocular amblyopia treatment. Children (4-12 y) wore anaglyphic glasses to play binocular games on an iPad platform for 4 h/w for 4 weeks. The first 25 children were assigned to sham games and then 50 children to binocular games. Children in the binocular group had the option of participating for an additional 4 weeks. Compliance was monitored with calendars and tracking fellow eye contrast settings. About half of the children in each group were also treated with patching at a different time of day. Best-corrected visual acuity, suppression, and stereoacuity were measured at baseline, at the 4- and 8-week outcome visits, and 3 months after cessation of treatment. Mean (±SE) visual acuity improved in the binocular group from 0.47±0.03 logMAR at baseline to 0.39±0.03 logMAR at 4 weeks (P<0.001); there was no significant change for the sham group. The effect of binocular games on visual acuity did not differ for children who were patched vs those who were not. The median stereoacuity remained unchanged in both groups. An additional 4 weeks of treatment did not yield additional visual acuity improvement. Visual acuity improvements were maintained for 3 months after the cessation of treatment. Binocular iPad treatment rapidly improved visual acuity, and visual acuity was stable for at least 3 months following the cessation of treatment.

  7. Stereo pair design for cameras with a fovea

    Science.gov (United States)

    Chettri, Samir R.; Keefe, Michael; Zimmerman, John R.

    1992-01-01

    We describe the methodology for the design and selection of a stereo pair when the cameras have a greater concentration of sensing elements in the center of the image plane (fovea). Binocular vision is important for the purpose of depth estimation, which in turn is important in a variety of applications such as gaging and autonomous vehicle guidance. We assume that one camera has square pixels of size dv and the other has pixels of size rdv, where r is between 0 and 1. We then derive results for the average error, the maximum error, and the error distribution in the depth determination of a point. These results can be shown to be a general form of the results for the case when the cameras have equal sized pixels. We discuss the behavior of the depth estimation error as we vary r and the tradeoffs between the extra processing time and increased accuracy. Knowing these results makes it possible to study the case when we have a pair of cameras with a fovea.
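
    The depth-error analysis in this record builds on the standard pinhole stereo relation. As a hedged numerical illustration (not the paper's derivation), the snippet below propagates a pixel-quantization disparity error through Z = f*B/d for one camera with pixel size dv and one with r*dv; all numerical values are made-up examples.

```python
# Hedged illustration: depth error from disparity quantization in a stereo pair
# whose cameras have pixel sizes dv and r*dv. Values are examples only.
f = 8e-3       # focal length [m]
B = 0.12       # baseline [m]
dv = 10e-6     # pixel size of the uniform camera [m]
r = 0.5        # pixel-size ratio of the foveal camera (0 < r < 1)

# From Z = f*B/d, a disparity error delta_d gives |dZ| ~ Z**2 * delta_d / (f*B).
delta_d = dv / 2 + r * dv / 2      # worst-case half-pixel error in each image
for Z in (0.5, 1.0, 2.0):          # object depths [m]
    dZ = Z ** 2 * delta_d / (f * B)
    print(f"Z = {Z:.1f} m  ->  depth error ~ {dZ * 1e3:.2f} mm")
```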

  8. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

    Full Text Available This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision community. However, little or no stereo calibration has been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, it is impossible to use a general stereo calibration algorithm directly. In this paper, we develop a hybrid-type stereo system which is equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured from the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visual light and gamma sources. The experimental results show that the measurement error is about 3%.
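
    A minimal sketch of the planar-homography step described above, assuming point correspondences between a vision-camera image and the paired radiation-camera image of the same calibration plane are already available; how those correspondences are obtained here is an assumption, not the paper's procedure.

```python
# Hedged sketch: estimate a homography between the vision and radiation views of
# a planar calibration pattern, then transfer chessboard corners into the
# radiation-camera view. All coordinates below are made-up examples.
import cv2
import numpy as np

pts_vision = np.array([[102, 88], [412, 95], [405, 310], [110, 302]], np.float32)
pts_gamma  = np.array([[ 20, 15], [ 95, 17], [ 93,  70], [ 22,  68]], np.float32)

H, _ = cv2.findHomography(pts_vision, pts_gamma, method=0)

corners_vision = np.array([[[150.0, 120.0]], [[300.0, 125.0]]], np.float32)
corners_gamma = cv2.perspectiveTransform(corners_vision, H)
print(corners_gamma.reshape(-1, 2))   # calibration targets in the radiation view
```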

  9. Virtual-stereo fringe reflection technique for specular free-form surface testing

    Science.gov (United States)

    Ma, Suodong; Li, Bo

    2016-11-01

    Due to their excellent ability to improve the performance of optical systems, free-form optics have attracted extensive interest in many fields, e.g. the optical design of astronomical telescopes, laser beam expanders, spectral imagers, etc. However, compared with traditional simple surfaces, testing such optics is usually more complex and difficult, which has long been a major barrier to the manufacture and application of these optics. Fortunately, owing to the rapid development of electronic devices and computer vision technology, the fringe reflection technique (FRT), with its advantages of simple system structure, high measurement accuracy and large dynamic range, is becoming a powerful tool for specular free-form surface testing. In order to obtain absolute surface shape distributions of test objects, two or more cameras are often required in the conventional FRT, which makes the system structure more complex and the measurement cost much higher. Furthermore, high-precision synchronization between the cameras is also a troublesome issue. To overcome the aforementioned drawbacks, a virtual-stereo FRT for specular free-form surface testing is put forward in this paper. It is able to obtain absolute profiles with the help of only a single biprism and one camera, while avoiding the problems of stereo FRT based on binocular or multi-ocular cameras. Preliminary experimental results demonstrate the feasibility of the proposed technique.

  10. New insights into amblyopia: binocular therapy and noninvasive brain stimulation.

    Science.gov (United States)

    Hess, Robert F; Thompson, Benjamin

    2013-02-01

    The current approach to the treatment of amblyopia is problematic for a number of reasons. First, it promotes recovery of monocular vision but because it is not designed to promote binocularity, its binocular outcomes often are disappointing. Second, compliance is poor and variable. Third, the effectiveness of the treatment is thought to decrease with increasing age. We discuss 2 new approaches aimed at recovering visual function in adults with amblyopia. The first is a binocular approach to amblyopia treatment that is showing promise in initial clinical studies. The second is still in development and involves the use of well-established noninvasive brain stimulation techniques to temporarily alter the balance of excitation and inhibition in the visual cortex. Copyright © 2013 American Association for Pediatric Ophthalmology and Strabismus. Published by Mosby, Inc. All rights reserved.

  11. Robot Vision Library

    Science.gov (United States)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  12. Stereoscopic vision in the absence of the lateral occipital cortex.

    Directory of Open Access Journals (Sweden)

    Jenny C A Read

    2010-09-01

    Full Text Available Both dorsal and ventral cortical visual streams contain neurons sensitive to binocular disparities, but the two streams may underlie different aspects of stereoscopic vision. Here we investigate stereopsis in the neurological patient D.F., whose ventral stream, specifically lateral occipital cortex, has been damaged bilaterally, causing profound visual form agnosia. Despite her severe damage to cortical visual areas, we report that DF's stereo vision is strikingly unimpaired. She is better than many control observers at using binocular disparity to judge whether an isolated object appears near or far, and to resolve ambiguous structure-from-motion. DF is, however, poor at using relative disparity between features at different locations across the visual field. This may stem from a difficulty in identifying the surface boundaries where relative disparity is available. We suggest that the ventral processing stream may play a critical role in enabling healthy observers to extract fine depth information from relative disparities within one surface or between surfaces located in different parts of the visual field.

  13. Stereo Matching Based On Election Campaign Algorithm

    Directory of Open Access Journals (Sweden)

    Xie Qing Hua

    2016-01-01

    Full Text Available Stereo matching is one of the significant problems in the study of computer vision. By obtaining distance information through pixel correspondences, it is possible to reconstruct a three-dimensional scene. In this paper, edges are the primitives for matching, and the grey values of the edges together with the magnitude and direction of the edge gradient are taken as the properties of the edge feature points. According to the constraints for stereo matching, an energy function was built, and the election campaign optimization algorithm was applied to find the route that minimizes this energy function during the stereo matching process. Experimental results show that this algorithm is more stable and obtains matching results with better accuracy.
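
    As a rough illustration of the edge-feature properties named in this record (grey value, gradient magnitude and gradient direction), the sketch below extracts those attributes at edge points and combines them into a simple weighted dissimilarity. The Canny/Sobel operators, the weights and the file names are assumptions for illustration; the election campaign optimization itself is not reproduced here.

```python
# Hedged sketch: edge-point attributes and a weighted matching dissimilarity.
import numpy as np
import cv2

def edge_points_with_features(gray):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)
    edge_mask = cv2.Canny(gray, 50, 150) > 0
    ys, xs = np.nonzero(edge_mask)
    feats = np.stack([gray[ys, xs].astype(np.float32),
                      magnitude[ys, xs],
                      direction[ys, xs]], axis=1)
    return np.stack([ys, xs], axis=1), feats

def edge_dissimilarity(f_a, f_b, weights=(1.0, 0.5, 20.0)):
    diff = np.abs(f_a - f_b)
    diff[2] = min(diff[2], 2.0 * np.pi - diff[2])   # angular wrap-around
    return float(np.dot(weights, diff))

gray_left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # placeholder files
gray_right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
_, feats_l = edge_points_with_features(gray_left)
_, feats_r = edge_points_with_features(gray_right)
print(edge_dissimilarity(feats_l[0], feats_r[0]))
```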

  14. Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors.

    Science.gov (United States)

    Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin

    2018-04-03

    Disparity calculation is crucial for binocular sensor ranging. The disparity estimation based on edges is an important branch in the research of sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on the semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which can improve the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, some dense stereo matching methods, and the advanced edge-based method respectively. Experiments show that our method can provide superior performance on the above comparison.
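
    To make the dynamic programming idea in this record concrete, the sketch below runs a small Viterbi-style pass over disparity states along one scanline with a constant smoothness penalty, followed by backtracking. The dense (all-pixel) formulation and the fixed penalty are simplifications that stand in for, and are not, the adaptive, semantic-edge-based scheme proposed in the paper.

```python
# Hedged sketch of per-scanline dynamic programming for disparity estimation.
import numpy as np

def scanline_dp(left_row, right_row, max_disp=16, penalty=4.0):
    w = len(left_row)
    # unary matching cost: absolute intensity difference, large where invalid
    cost = np.full((w, max_disp), 1e6)
    for d in range(max_disp):
        cost[d:, d] = np.abs(left_row[d:] - right_row[:w - d])

    acc = cost.copy()
    back = np.zeros((w, max_disp), dtype=int)
    for x in range(1, w):
        # transition cost: 0 for keeping the disparity, `penalty` for any change
        stay = acc[x - 1]
        jump = acc[x - 1].min() + penalty
        choose_stay = stay <= jump
        back[x] = np.where(choose_stay, np.arange(max_disp), acc[x - 1].argmin())
        acc[x] += np.where(choose_stay, stay, jump)

    disp = np.zeros(w, dtype=int)               # backtrack the optimal path
    disp[-1] = acc[-1].argmin()
    for x in range(w - 2, -1, -1):
        disp[x] = back[x + 1][disp[x + 1]]
    return disp

left = np.random.rand(64).astype(np.float32)
right = np.roll(left, -3)                       # synthetic 3-pixel disparity
print(scanline_dp(left, right)[5:15])           # expect mostly 3s
```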

  15. Binocular treatment of amblyopia using videogames (BRAVO): study protocol for a randomised controlled trial

    OpenAIRE

    Guo, Cindy X.; Babu, Raiju J.; Black, Joanna M.; Bobier, William R.; Lam, Carly S. Y.; Dai, Shuan; Gao, Tina Y.; Hess, Robert F.; Jenkins, Michelle; Jiang, Yannan; Kowal, Lionel; Parag, Varsha; South, Jayshree; Staffieri, Sandra Elfride; Walker, Natalie

    2016-01-01

    Background Amblyopia is a common neurodevelopmental disorder of vision that is characterised by visual impairment in one eye and compromised binocular visual function. Existing evidence-based treatments for children include patching the nonamblyopic eye to encourage use of the amblyopic eye. Currently there are no widely accepted treatments available for adults with amblyopia. The aim of this trial is to assess the efficacy of a new binocular, videogame-based treatment for amblyopia in older ...

  16. The large binocular telescope.

    Science.gov (United States)

    Hill, John M

    2010-06-01

    The Large Binocular Telescope (LBT) Observatory is a collaboration among institutions in Arizona, Germany, Italy, Indiana, Minnesota, Ohio, and Virginia. The telescope on Mount Graham in Southeastern Arizona uses two 8.4 m diameter primary mirrors mounted side by side. A unique feature of the LBT is that the light from the two Gregorian telescope sides can be combined to produce phased-array imaging of an extended field. This cophased imaging along with adaptive optics gives the telescope the diffraction-limited resolution of a 22.65 m aperture and a collecting area equivalent to an 11.8 m circular aperture. This paper describes the design, construction, and commissioning of this unique telescope. We report some sample astronomical results with the prime focus cameras. We comment on some of the technical challenges and solutions. The telescope uses two F/15 adaptive secondaries to correct atmospheric turbulence. The first of these adaptive mirrors has completed final system testing in Firenze, Italy, and is planned to be at the telescope by Spring 2010.

  17. Association between fine motor skills and binocular visual function in children with reading difficulties.

    Science.gov (United States)

    Niechwiej-Szwedo, Ewa; Alramis, Fatimah; Christian, Lisa W

    2017-12-01

    Performance of fine motor skills (FMS) assessed by a clinical test battery has been associated with reading achievement in school-age children. However, the nature of this association remains to be established. The aim of this study was to assess FMS in children with reading difficulties using two experimental tasks, and to determine if performance is associated with reduced binocular function. We hypothesized that in comparison to an age- and sex-matched control group, children identified with reading difficulties will perform worse only on a motor task that has been shown to rely on binocular input. To test this hypothesis, motor performance was assessed using two tasks: bead-threading and peg-board in 19 children who were reading below expected grade and age-level. Binocular vision assessment included tests for stereoacuity, fusional vergence, amplitude of accommodation, and accommodative facility. In comparison to the control group, children with reading difficulties performed significantly worse on the bead-threading task. In contrast, performance on the peg-board task was similar in both groups. Accommodative facility was the only measure of binocular function significantly associated with motor performance. Findings from our exploratory study suggest that normal binocular vision may provide an important sensory input for the optimal development of FMS and reading. Given the small sample size tested in the current study, further investigation to assess the contribution of binocular vision to the development and performance of FMS and reading is warranted. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Latent binocular function in amblyopia.

    Science.gov (United States)

    Chadnova, Eva; Reynaud, Alexandre; Clavagnier, Simon; Hess, Robert F

    2017-11-01

    Recently, psychophysical studies have shown that humans with amblyopia do have binocular function that is not normally revealed due to dominant suppressive interactions under normal viewing conditions. Here we use magnetoencephalography (MEG) combined with dichoptic visual stimulation to investigate the underlying binocular function in humans with amblyopia for stimuli that, because of their temporal properties, would be expected to bypass suppressive effects and to reveal any underlying binocular function. We recorded contrast response functions in visual cortical area V1 of amblyopes and normal observers using a steady state visually evoked responses (SSVER) protocol. We used stimuli that were frequency-tagged at 4Hz and 6Hz that allowed identification of the responses from each eye and were of a sufficiently high temporal frequency (>3Hz) to bypass suppression. To characterize binocular function, we compared dichoptic masking between the two eyes in normal and amblyopic participants as well as interocular phase differences in the two groups. We observed that the primary visual cortex responds less to the stimulation of the amblyopic eye compared to the fellow eye. The pattern of interaction in the amblyopic visual system however was not significantly different between the amblyopic and fellow eyes. However, the amblyopic suppressive interactions were lower than those observed in the binocular system of our normal observers. Furthermore, we identified an interocular processing delay of approximately 20ms in our amblyopic group. To conclude, when suppression is greatly reduced, such as the case with our stimulation above 3Hz, the amblyopic visual system exhibits a lack of binocular interactions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. DLP™-based dichoptic vision test system

    Science.gov (United States)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state-of-the-art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3%; the remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  20. Perceptual relearning of binocular fusion after hypoxic brain damage: four controlled single-case treatment studies.

    Science.gov (United States)

    Schaadt, Anna-Katharina; Schmidt, Lena; Kuhn, Caroline; Summ, Miriam; Adams, Michaela; Garbacenkaite, Ruta; Leonhardt, Eva; Reinhart, Stefan; Kerkhoff, Georg

    2014-05-01

    Hypoxic brain damage is characterized by widespread, diffuse-disseminated brain lesions, which may cause severe disturbances in binocular vision, leading to diplopia and loss of stereopsis, for which no evaluated treatment is currently available. The study evaluated the effects of a novel binocular vision treatment designed to improve binocular fusion and stereopsis as well as to reduce diplopia in patients with cerebral hypoxia. Four patients with severely reduced convergent fusion, stereopsis, and reading duration due to hypoxic brain damage were treated in a single-subject baseline design, with three baseline assessments before treatment to control for spontaneous recovery (pretherapy), an assessment immediately after a treatment period of 6 weeks (posttherapy), and two follow-up tests 3 and 6 months after treatment to assess stability of improvements. Patients received a novel fusion and dichoptic training using 3 different devices designed to slowly increase fusional and disparity angle. After the treatment, all 4 patients improved significantly in binocular fusion, subjective reading duration until diplopia emerged, and 2 of 4 patients improved significantly in local stereopsis. No significant changes were observed during the pretherapy baseline period and the follow-up period, thus ruling out spontaneous recovery and demonstrating long-term stability of treatment effects. This proof-of-principle study indicates a substantial treatment-induced plasticity after hypoxia in the relearning of binocular vision and offers a viable treatment option. Moreover, it provides new hope and direction for the development of effective rehabilitation strategies to treat neurovisual deficits resulting from hypoxic brain damage.

  1. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The imaging capability of a surveillance vision system from harsh low-visibility environments such as in fire and detonation areas is a key function to monitor the safety of the facilities. 2D and range image data acquired from low-visibility environment are important data to assess the safety and prepare appropriate countermeasures. Passive vision systems, such as conventional camera and binocular stereo vision systems usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the image resolution captured through low-density airborne particles is decreased because the image is blurred and dimmed by the scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision are usually more robust for harsh environment than passive vision systems. However, the performance is considerably decreased in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and it moreover provides clear images from low-visibility fog and smoke environment by using the sum of time-sliced images. Nowadays, the Range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environment. Although RGI viewing was discovered in the 1960's, this technology is, nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. Especially, this system can be adopted in robot-vision system by virtue of its compact portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been applied in target recognition and in harsh environments, such as fog, underwater vision. Also, this technology has been

  2. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    International Nuclear Information System (INIS)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    The imaging capability of a surveillance vision system from harsh low-visibility environments such as in fire and detonation areas is a key function to monitor the safety of the facilities. 2D and range image data acquired from low-visibility environment are important data to assess the safety and prepare appropriate countermeasures. Passive vision systems, such as conventional camera and binocular stereo vision systems usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the image resolution captured through low-density airborne particles is decreased because the image is blurred and dimmed by the scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision are usually more robust for harsh environment than passive vision systems. However, the performance is considerably decreased in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and it moreover provides clear images from low-visibility fog and smoke environment by using the sum of time-sliced images. Nowadays, the Range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environment. Although RGI viewing was discovered in the 1960's, this technology is, nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. Especially, this system can be adopted in robot-vision system by virtue of its compact portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been applied in target recognition and in harsh environments, such as fog, underwater vision. Also, this technology has been

  3. A new binocular approach to the treatment of amblyopia in adults well beyond the critical period of visual development.

    Science.gov (United States)

    Hess, R F; Mansouri, B; Thompson, B

    2010-01-01

    The present treatments for amblyopia are predominantly monocular, aiming to improve the vision in the amblyopic eye through either patching of the fellow fixing eye or visual training of the amblyopic eye. This approach is problematic, not least because it rarely results in the establishment of binocular function. Recently it has been shown that amblyopes possess binocular cortical mechanisms for both threshold and suprathreshold stimuli. We outline a novel procedure for measuring the extent to which the fixing eye suppresses the fellow amblyopic eye, rendering what is a structurally binocular system functionally monocular. Here we show that prolonged periods of viewing (under the artificial conditions of stimuli of different contrast in each eye) during which information from the two eyes is combined leads to a strengthening of binocular vision in strabismic amblyopes and eventual combination of binocular information under natural viewing conditions (stimuli of the same contrast in each eye). Concomitant improvement in monocular acuity of the amblyopic eye occurs with this reduction in suppression and strengthening of binocular fusion. Furthermore, in a majority of patients tested, stereoscopic function is established. This provides the basis for a new treatment of amblyopia, one that is purely binocular and aimed at reducing suppression as a first step.

  4. JAVA Stereo Display Toolkit

    Science.gov (United States)

    Edmonds, Karina

    2008-01-01

    This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It also supports anaglyph and special stereo hardware using the same API (application-program interface), and has the ability to simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that simply accomplishes the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, 3D cursor, or overlays, all of which can be built using this toolkit.
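
    The anaglyph mode described above combines the red band of the left image with the green/blue bands of the right image. The toolkit itself is Java/Swing/OpenGL; the snippet below is only a language-neutral sketch of that colour composition, with placeholder file names.

```python
# Hedged sketch of colour-anaglyph composition: red channel from the left-eye
# image, green and blue channels from the right-eye image. The images are
# assumed to be the same size; file names are placeholders.
import numpy as np
from PIL import Image

left = np.asarray(Image.open("left.png").convert("RGB"))
right = np.asarray(Image.open("right.png").convert("RGB"))

anaglyph = np.empty_like(left)
anaglyph[..., 0] = left[..., 0]        # red band from the left image
anaglyph[..., 1:] = right[..., 1:]     # green and blue bands from the right image

Image.fromarray(anaglyph).save("anaglyph.png")
```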

  5. Development and matching of binocular orientation preference in mouse V1.

    Science.gov (United States)

    Bhaumik, Basabi; Shah, Nishal P

    2014-01-01

    Eye-specific thalamic inputs converge in the primary visual cortex (V1) and form the basis of binocular vision. For normal binocular perceptions, such as depth and stereopsis, binocularly matched orientation preference between the two eyes is required. A critical period of binocular matching of orientation preference in mice during normal development is reported in the literature. Using a reaction-diffusion model we present the development of receptive fields and orientation selectivity in mouse V1 and investigate the binocular matching of orientation preference during the critical period. At the onset of the critical period the preferred orientations of the modeled cells are mostly mismatched between the two eyes; the mismatch decreases and, by the end of the critical period, reaches the levels reported in juvenile mice. At the end of the critical period, 39% of the cells in the binocular zone of our model cortex are orientation selective; in the literature, around 40% of cortical cells are reported as orientation selective in mouse V1. The starting and closing times of the critical period determine the orientation preference alignment between the two eyes and the orientation tuning of cortical cells. The absence of near-neighbor interaction among cortical cells during the development of thalamo-cortical wiring causes a salt-and-pepper organization of the orientation preference map in mice. It also results in a much lower percentage of orientation-selective cells in mice compared with ferrets and cats, which have organized orientation maps with pinwheels.

  6. Development and Matching of Binocular Orientation Preference in Mouse V1

    Directory of Open Access Journals (Sweden)

    Basabi eBhaumik

    2014-07-01

    Full Text Available Eye-specific thalamic inputs converge in the primary visual cortex (V1) and form the basis of binocular vision. For normal binocular perceptions, such as depth and stereopsis, binocularly matched orientation preference between the two eyes is required. A critical period of binocular matching of orientation preference in mice during normal development is reported in the literature. Using a reaction-diffusion model we present the development of receptive fields and orientation selectivity in mouse V1 and investigate the binocular matching of orientation preference during the critical period. At the onset of the critical period the preferred orientations of the modeled cells are mostly mismatched between the two eyes; the mismatch decreases and, by the end of the critical period, reaches the levels reported in juvenile mice. At the end of the critical period, 39% of the cells in the binocular zone of our model cortex are orientation selective; in the literature, around 40% of cortical cells are reported as orientation selective in mouse V1. The starting and closing times of the critical period determine the orientation preference alignment between the two eyes and the orientation tuning of cortical cells. The absence of near-neighbor interaction among cortical cells during the development of thalamo-cortical wiring causes a salt-and-pepper organization of the orientation preference map in mice. It also results in a much lower percentage of orientation-selective cells in mice compared with ferrets and cats, which have organized orientation maps with pinwheels.

  7. A special role for binocular visual input during development and as a component of occlusion therapy for treatment of amblyopia.

    Science.gov (United States)

    Mitchell, Donald E

    2008-01-01

    To review work on animal models of deprivation amblyopia that points to a special role for binocular visual input in the development of spatial vision and as a component of occlusion (patching) therapy for amblyopia. The studies reviewed employ behavioural methods to measure the effects of various early experiential manipulations on the development of the visual acuity of the two eyes. Short periods of concordant binocular input, if continuous, can offset much longer daily periods of monocular deprivation to allow the development of normal visual acuity in both eyes. It appears that the visual system does not weigh all visual input equally in terms of its ability to impact on the development of vision but instead places greater weight on concordant binocular exposure. Experimental models of patching therapy for amblyopia imposed on animals in which amblyopia had been induced by a prior period of early monocular deprivation, indicate that the benefits of patching therapy may be only temporary and decline rapidly after patching is discontinued. However, when combined with critical amounts of binocular visual input each day, the benefits of patching can be both heightened and made permanent. Taken together with demonstrations of retained binocular connections in the visual cortex of monocularly deprived animals, a strong argument is made for inclusion of specific training of stereoscopic vision for part of the daily periods of binocular exposure that should be incorporated as part of any patching protocol for amblyopia.

  8. Augmented reality glass-free three-dimensional display with the stereo camera

    Science.gov (United States)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    An improved method for augmented reality (AR) glass-free three-dimensional (3D) display is proposed, in which a stereo camera is used with a lenticular lens array to present parallax content from different angles. Compared with previous AR implementations based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers obtain rich 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved stereo-camera-based method realizes AR glass-free 3D display, with both the virtual objects and the real scene showing realistic and pronounced stereo performance.

  9. On the contribution of binocular disparity to the long-term memory for natural scenes.

    Directory of Open Access Journals (Sweden)

    Matteo Valsecchi

    Full Text Available Binocular disparity is a fundamental dimension defining the input we receive from the visual world, along with luminance and chromaticity. In a memory task involving images of natural scenes we investigate whether binocular disparity enhances long-term visual memory. We found that forest images studied in the presence of disparity for relatively long times (7 s) were remembered better as compared to 2D presentation. This enhancement was not evident for other categories of pictures, such as images containing cars and houses, which are mostly identified by the presence of distinctive artifacts rather than by their spatial layout. Evidence from a further experiment indicates that observers do not retain a trace of stereo presentation in long-term memory.

  10. The Role of Binocular Disparity in Rapid Scene and Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Matteo Valsecchi

    2013-04-01

    Full Text Available We investigated the contribution of binocular disparity to the rapid recognition of scenes and simpler spatial patterns using a paradigm combining backward masked stimulus presentation and short-term match-to-sample recognition. First, we showed that binocular disparity did not contribute significantly to the recognition of briefly presented natural and artificial scenes, even when the availability of monocular cues was reduced. Subsequently, using dense random dot stereograms as stimuli, we showed that observers were in principle able to extract spatial patterns defined only by disparity under brief, masked presentations. Comparing our results with the predictions from a cue-summation model, we showed that combining disparity with luminance did not per se disrupt the processing of disparity. Our results suggest that the rapid recognition of scenes is mediated mostly by a monocular comparison of the images, although we can rely on stereo in fast pattern recognition.

  11. Hybrid-Based Dense Stereo Matching

    Science.gov (United States)

    Chuang, T. Y.; Ting, H. W.; Jaw, J. J.

    2016-06-01

    Stereo matching that generates accurate and dense disparity maps is an indispensable technique for 3D exploitation of imagery in the fields of computer vision and photogrammetry. Although numerous solutions and advances have been proposed in the literature, occlusions, disparity discontinuities, sparse texture, image distortion, and illumination changes still lead to problematic issues and await better treatment. In this paper, a hybrid method based on semi-global matching (SGM) is presented to tackle the challenges of dense stereo matching. To ease the sensitivity of SGM cost aggregation to its penalty parameters, a formal way to provide proper penalty estimates is proposed. To this end, the study employs shape-adaptive cross-based matching with an edge constraint to generate an initial disparity map for penalty estimation. Image edges, indicating the potential locations of occlusions as well as disparity discontinuities, are confirmed by the edge drawing algorithm to ensure that the local support regions do not cover significant disparity changes. Moreover, an additional penalty parameter P_e is imposed on the energy function of the SGM cost aggregation to specifically handle edge pixels. Furthermore, the final disparities of edge pixels are found by weighting the values derived from the SGM cost aggregation and from U-SURF matching, providing more reliable estimates in disparity discontinuity areas. Evaluations on the Middlebury stereo benchmarks demonstrate satisfactory performance and reveal the potency of the hybrid dense stereo matching method.
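
    For readers unfamiliar with the SGM cost aggregation this hybrid method builds on, the sketch below shows a single aggregation direction with the standard small/large penalties P1 and P2. The extra edge penalty P_e and the penalty estimation from the initial disparity map described in the record are not reproduced, and the P1/P2 values are arbitrary examples.

```python
# Hedged sketch of SGM cost aggregation along one direction (left to right).
import numpy as np

def aggregate_left_to_right(cost, P1=10.0, P2=120.0):
    """cost: (H, W, D) pixel-wise matching cost volume."""
    H, W, D = cost.shape
    L = cost.copy()
    for x in range(1, W):
        prev = L[:, x - 1, :]                                                   # (H, D)
        plus = np.pad(prev, ((0, 0), (1, 0)), constant_values=np.inf)[:, :D]    # d-1 neighbour
        minus = np.pad(prev, ((0, 0), (0, 1)), constant_values=np.inf)[:, 1:]   # d+1 neighbour
        far = prev.min(axis=1, keepdims=True) + P2                              # large jumps
        best = np.minimum(np.minimum(prev, np.minimum(plus, minus) + P1), far)
        # subtract the path minimum so values stay bounded (standard SGM trick)
        L[:, x, :] = cost[:, x, :] + best - prev.min(axis=1, keepdims=True)
    return L

cost_volume = np.random.rand(4, 8, 16).astype(np.float32)   # toy example volume
print(aggregate_left_to_right(cost_volume).shape)
```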

  12. A study on the effect of different image centres on stereo triangulation accuracy

    CSIR Research Space (South Africa)

    De Villiers, J

    2015-11-01

    Full Text Available This paper evaluates the effect of mixing the distortion centre, principal point and arithmetic image centre on the distortion correction, focal length determination and resulting real-world stereo vision triangulation. A robotic arm is used...

  13. Digital stereoscopic photography using StereoData Maker

    Science.gov (United States)

    Toeppen, John; Sykes, David

    2009-02-01

    Stereoscopic digital photography has become much more practical with the use of USB wired connections between a pair of Canon cameras using StereoData Maker software for precise synchronization. StereoPhoto Maker software is now used to automatically combine and align right and left image files to produce a stereo pair. Side by side images are saved as pairs and may be viewed using software that converts the images into the preferred viewing format at the time of display. Stereo images may be shared on the internet, displayed on computer monitors, autostereo displays, viewed on high definition 3D TVs, or projected for a group. Stereo photographers are now free to control composition using point and shoot settings, or are able to control shutter speed, aperture, focus, ISO, and zoom. The quality of the output depends on the developed skills of the photographer as well as their understanding of the software, human vision and the geometry they choose for their cameras and subjects. Observers of digital stereo images can zoom in for greater detail and scroll across large panoramic fields with a few keystrokes. The art, science, and methods of taking, creating and viewing digital stereo photos are presented in a historic and developmental context in this paper.

  14. Acquisition of stereo panoramas for display in VR environments

    KAUST Repository

    Ainsworth, Richard A.

    2011-01-23

    Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.

  15. Stereo and IMU-Assisted Visual Odometry for Small Robots

    Science.gov (United States)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e. 320×240), or 8 fps at VGA (Video Graphics Array, 640×480) resolutions, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating point ARM processors. This is a substantial advancement over previous work as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
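
    The flight code itself is not reproduced here; as a rough sketch of the second function (frame-to-frame camera motion from tracked features), the following Python/OpenCV fragment estimates relative rotation and unit-scale translation between two frames. The function name and parameter values are illustrative, and the IMU assistance and DSP-optimised stereo correlation are deliberately left out.

        import cv2
        import numpy as np

        def relative_pose(prev_gray, cur_gray, K):
            """Estimate rotation R and unit-scale translation t between two frames."""
            pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                               qualityLevel=0.01, minDistance=7)
            pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                          pts_prev, None)
            good = status.ravel() == 1
            p0, p1 = pts_prev[good], pts_cur[good]
            E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                              prob=0.999, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
            return R, t  # metric scale would come from the stereo depth or the IMU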

  16. Analysis on detection accuracy of binocular photoelectric instrument optical axis parallelism digital calibration instrument

    Science.gov (United States)

    Ying, Jia-ju; Yin, Jian-ling; Wu, Dong-sheng; Liu, Jie; Chen, Yu-dan

    2017-11-01

    Low-light-level night vision devices and thermal infrared imaging binocular photoelectric instruments are widely used. Misalignment of the parallelism of a binocular instrument's optical axes causes symptoms such as dizziness and nausea when the instrument is used for a long time. A digital calibration instrument for binocular photoelectric equipment was developed to detect optical axis parallelism, so that the optical axis deviation can be measured quantitatively. As a testing instrument, its precision must be much higher than that of the instrument under test. This paper analyses the factors that influence detection accuracy: each link in the testing process contributes factors that affect the precision of the detecting instrument, and these can be divided into two categories, those that directly affect the position of the reticle image and those that affect the calculation of the reticle image centre. The synthesised (combined) error is then calculated, and the individual errors are further distributed reasonably to ensure the accuracy of the calibration instrument.
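
    The abstract does not give the form of the synthesised error; a common assumption, if the individual error sources are treated as independent, is a root-sum-square combination,

        \sigma_{\mathrm{total}} = \sqrt{\sum_{i=1}^{n} \sigma_i^{2}},

    where each \sigma_i is the standard uncertainty contributed by one factor (a reticle-image position error or a centre-calculation error). This formula is offered only as an illustration, not as the paper's actual error budget.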

  17. SAD-Based Stereo Matching Using FPGAs

    Science.gov (United States)

    Ambrosch, Kristian; Humenberger, Martin; Kubinger, Wilfried; Steininger, Andreas

    In this chapter we present a field-programmable gate array (FPGA) based stereo matching architecture. This architecture uses the sum of absolute differences (SAD) algorithm and is targeted at automotive and robotics applications. The disparity maps are calculated using 450×375 input images and a disparity range of up to 150 pixels. We discuss two different implementation approaches for the SAD and analyze their resource usage. Furthermore, block sizes ranging from 3×3 up to 11×11 and their impact on the consumed logic elements as well as on the disparity map quality are discussed. The stereo matching architecture enables a frame rate of up to 600 fps by calculating the data in a highly parallel and pipelined fashion. This way, a software solution optimized by using Intel's Open Source Computer Vision Library running on an Intel Pentium 4 with 3 GHz clock frequency is outperformed by a factor of 400.
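
    For readers who want to see the algorithm itself (as opposed to the FPGA architecture), a plain software sketch of SAD block matching in Python/NumPy might look like the following; the block size and disparity range are the parameters discussed above, and the function name is invented for this example.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def sad_disparity(left, right, max_disp=64, block=9):
            """Winner-takes-all SAD block matching on rectified grayscale images."""
            left = left.astype(np.float32)
            right = right.astype(np.float32)
            h, w = left.shape
            best_cost = np.full((h, w), np.inf, dtype=np.float32)
            disparity = np.zeros((h, w), dtype=np.int32)
            for d in range(max_disp):
                shifted = np.zeros_like(right)
                shifted[:, d:] = right[:, :w - d]        # shift right image by d pixels
                ad = np.abs(left - shifted)
                sad = uniform_filter(ad, size=block)     # block mean == scaled block SAD
                better = sad < best_cost
                best_cost[better] = sad[better]
                disparity[better] = d
            return disparity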

  18. Binocular contrast discrimination needs monocular multiplicative noise

    Science.gov (United States)

    Ding, Jian; Levi, Dennis M.

    2016-01-01

    The effects of signal and noise on contrast discrimination are difficult to separate because of a singularity in the signal-detection-theory model of two-alternative forced-choice contrast discrimination (Katkov, Tsodyks, & Sagi, 2006). In this article, we show that it is possible to eliminate the singularity by combining that model with a binocular combination model to fit monocular, dichoptic, and binocular contrast discrimination. We performed three experiments using identical stimuli to measure the perceived phase, perceived contrast, and contrast discrimination of a cyclopean sine wave. In the absence of a fixation point, we found a binocular advantage in contrast discrimination at both low and high contrasts. To account for these results, we considered two putative discrimination mechanisms: a nonlinear contrast transducer and multiplicative noise (MN). A binocular combination model (the DSKL model; Ding, Klein, & Levi, 2013b) was first fitted to both the perceived-phase and the perceived-contrast data sets, then combined with either the nonlinear contrast transducer or the MN mechanism to fit the contrast-discrimination data. We found that the best model combined the DSKL model with early MN. Model simulations showed that, after going through interocular suppression, the uncorrelated noise in the two eyes became anticorrelated, resulting in less binocular noise and therefore a binocular advantage in the discrimination task. Combining a nonlinear contrast transducer or MN with a binocular combination model (DSKL) provides a powerful method for evaluating the two putative contrast-discrimination mechanisms. PMID:26982370

  19. Binocular Combination of Second-Order Stimuli

    Science.gov (United States)

    Zhou, Jiawei; Liu, Rong; Zhou, Yifeng; Hess, Robert F.

    2014-01-01

    Phase information is a fundamental aspect of visual stimuli. However, the nature of the binocular combination of stimuli defined by modulations in contrast, so-called second-order stimuli, is presently not clear. To address this issue, we measured binocular combination for first-order (luminance-modulated) and second-order (contrast-modulated) stimuli using a binocular phase combination paradigm in seven normal adults. We found that the binocular perceived phase of second-order gratings depends on the interocular signal ratio, as has been previously shown for their first-order counterparts; the interocular signal ratio at which the two eyes were balanced was close to 1 for both first- and second-order phase combination. However, second-order combination is more linear than previously found for first-order combination. Furthermore, binocular combination of second-order stimuli was similar regardless of whether the carriers in the two eyes were correlated, anti-correlated, or uncorrelated. This suggests that, in normal adults, the binocular phase combination of second-order stimuli occurs after monocular extraction of the second-order modulations. The sensory balance associated with this second-order combination can be obtained from binocular phase combination measurements. PMID:24404180

  20. A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig

    Science.gov (United States)

    Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming

    2018-06-01

    This paper introduces a simple method to achieve full-field, real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one, the tracking camera, is used for tracking the positions of the MBVS, and the other, the working camera, is used for the 3D reconstruction task. The MBVS has several advantages over a single moving camera or a multi-camera network. First, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Second, the mobility of the system guarantees appropriate baselines that supply more robust point correspondences. Additionally, using a single working camera avoids a drawback of multi-camera networks, namely that variability in camera parameters and performance can significantly affect the accuracy and robustness of feature extraction and stereo matching. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of the multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.

  1. Contrast masking in strabismic amblyopia: attenuation, noise, interocular suppression and binocular summation.

    Science.gov (United States)

    Baker, Daniel H; Meese, Tim S; Hess, Robert F

    2008-07-01

    To investigate amblyopic contrast vision at threshold and above we performed pedestal-masking (contrast discrimination) experiments with a group of eight strabismic amblyopes using horizontal sinusoidal gratings (mainly 3c/deg) in monocular, binocular and dichoptic configurations balanced across eye (i.e. five conditions). With some exceptions in some observers, the four main results were as follows. (1) For the monocular and dichoptic conditions, sensitivity was less in the amblyopic eye than in the good eye at all mask contrasts. (2) Binocular and monocular dipper functions superimposed in the good eye. (3) Monocular masking functions had a normal dipper shape in the good eye, but facilitation was diminished in the amblyopic eye. (4) A less consistent result was normal facilitation in dichoptic masking when testing the good eye, but a loss of this when testing the amblyopic eye. This pattern of amblyopic results was replicated in a normal observer by placing a neutral density filter in front of one eye. The two-stage model of binocular contrast gain control [Meese, T.S., Georgeson, M.A. & Baker, D.H. (2006). Binocular contrast vision at and above threshold. Journal of Vision 6, 1224-1243.] was 'lesioned' in several ways to assess the form of the amblyopic deficit. The most successful model involves attenuation of signal and an increase in noise in the amblyopic eye, and intact stages of interocular suppression and binocular summation. This implies a behavioural influence from monocular noise in the amblyopic visual system as well as in normal observers with an ND filter over one eye.

  2. Multiview photometric stereo.

    Science.gov (United States)

    Hernández Esteban, Carlos; Vogiatzis, George; Cipolla, Roberto

    2008-03-01

    This paper addresses the problem of obtaining complete, detailed reconstructions of textureless shiny objects. We present an algorithm which uses silhouettes of the object, as well as images obtained under changing illumination conditions. In contrast with previous photometric stereo techniques, ours is not limited to a single viewpoint but produces accurate reconstructions in full 3D. A number of images of the object are obtained from multiple viewpoints, under varying lighting conditions. Starting from the silhouettes, the algorithm recovers camera motion and constructs the object's visual hull. This is then used to recover the illumination and initialise a multi-view photometric stereo scheme to obtain a closed surface reconstruction. There are two main contributions in this paper: Firstly we describe a robust technique to estimate light directions and intensities and secondly, we introduce a novel formulation of photometric stereo which combines multiple viewpoints and hence allows closed surface reconstructions. The algorithm has been implemented as a practical model acquisition system. Here, a quantitative evaluation of the algorithm on synthetic data is presented together with complete reconstructions of challenging real objects. Finally, we show experimentally how even in the case of highly textured objects, this technique can greatly improve on correspondence-based multi-view stereo results.
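
    The multi-view formulation in this paper is considerably more involved; for orientation, the classical single-viewpoint Lambertian photometric stereo step it builds on can be written in a few lines of Python/NumPy (function name and inputs here are illustrative):

        import numpy as np

        def photometric_stereo(images, light_dirs):
            """images: (m, h, w) grayscale shots; light_dirs: (m, 3) known unit vectors."""
            m, h, w = images.shape
            I = images.reshape(m, -1)                            # (m, h*w)
            # least-squares solve of I = L @ (albedo * normal) for every pixel
            G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
            albedo = np.linalg.norm(G, axis=0)
            normals = G / np.maximum(albedo, 1e-8)
            return normals.reshape(3, h, w), albedo.reshape(h, w)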

  3. Combining Motion-Induced Blindness with Binocular Rivalry

    Directory of Open Access Journals (Sweden)

    K Jaworska

    2011-04-01

    Full Text Available Motion-induced blindness (MIB) and binocular rivalry (BR) are examples of multistable phenomena in which our perception varies despite constant retinal input. It has been suggested that both phenomena are related and share a common underlying mechanism. We tried to determine whether experimental manipulations of the target dot and the mask systematically affect MIB and BR in an experimental paradigm that can elicit both phenomena. Eighteen observers fixated the center of a split-screen stereo display that consisted of a distracter mask and a superimposed target dot with different colour (isoluminant red/green) in corresponding peripheral areas of the left and right eye. Observers reported perceived colour and disappearance of the target dot by pressing and releasing corresponding keys. In a within-subjects design the mask was presented in rivalry or not—with orthogonal drift in the left and right eye or with the same drift in both eyes. In control conditions the mask remained stationary. In addition, the size of the target dot was varied (small, medium, and large). Our results suggest that MIB, measured by normalized frequency and duration of target disappearance, and BR, measured by normalized frequency and duration of colour reversals of the target, were both affected by motion in the mask. Surprisingly, binocular rivalry in the mask had only a small effect on BR of the target and virtually no effect on MIB. The overall pattern of normalized MIB and BR measures, however, differed across experimental conditions. In conclusion, the results show some degree of dissociation between MIB and BR. Further analyses will inform whether or not the two phenomena occur independently of each other.

  4. Binocular rivalry produced by temporal frequency differences

    Directory of Open Access Journals (Sweden)

    David eAlais

    2012-07-01

    Full Text Available Binocular rivalry occurs when each eye views images that are markedly different. Rather than seeing a binocular fusion of the two, each image is seen exclusively in a stochastic alternation of the monocular images. Here we examine whether temporal frequency differences will trigger binocular rivalry by presenting two random dot arrays that are spatially matched but which modulate temporally at two different rates and contain no net translation. We found that a perceptual alternation between the two temporal frequencies did indeed occur, provided the frequencies were sufficiently different, indicating that temporal information can produce binocular rivalry in the absence of spatial conflict. This finding is discussed with regard to the dependence of rivalry on conflict between spatial and temporal channels.

  5. Measurement of suprathreshold binocular interactions in amblyopia.

    Science.gov (United States)

    Mansouri, B; Thompson, B; Hess, R F

    2008-12-01

    It has been established that in amblyopia, information from the amblyopic eye (AME) is not combined with that from the fellow fixing eye (FFE) under conditions of binocular viewing. However, recent evidence suggests that mechanisms that combine information between the eyes are intact in amblyopia. The lack of binocular function is most likely due to the imbalanced inputs from the two eyes under binocular conditions [Baker, D. H., Meese, T. S., Mansouri, B., & Hess, R. F. (2007b). Binocular summation of contrast remains intact in strabismic amblyopia. Investigative Ophthalmology & Visual Science, 48(11), 5332-5338]. We have measured the extent to which the information presented to each eye needs to differ for binocular combination to occur and in doing so we quantify the influence of interocular suppression. We quantify these suppressive effects for suprathreshold processing of global stimuli for both motion and spatial tasks. The results confirm the general importance of these suppressive effects in rendering the structurally binocular visual system of a strabismic amblyope, functionally monocular.

  6. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Full Text Available Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations about the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support weight based) are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
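
    To make the edge-directed idea concrete, the following Python/OpenCV sketch runs a simple SAD search only at Canny edge pixels; it is an illustration of the search-space reduction, not the paper's hardware design, and all names and thresholds are assumptions.

        import cv2
        import numpy as np

        def edge_directed_sad(left, right, max_disp=64, block=7):
            """left/right: rectified 8-bit grayscale images."""
            edges = cv2.Canny(left, 50, 150)
            ys, xs = np.nonzero(edges)
            r = block // 2
            disp = np.zeros(left.shape, dtype=np.int32)
            for y, x in zip(ys, xs):
                # skip pixels whose support window or disparity range leaves the image
                if y < r or y >= left.shape[0] - r or x - max_disp < r or x >= left.shape[1] - r:
                    continue
                patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
                costs = [np.abs(patch - right[y - r:y + r + 1,
                                              x - d - r:x - d + r + 1].astype(np.int32)).sum()
                         for d in range(max_disp)]
                disp[y, x] = int(np.argmin(costs))
            return disp  # a dense map would interpolate between the edge estimates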

  7. Uniocular and binocular fields of rotation measures: Octopus versus Goldmann.

    Science.gov (United States)

    Rowe, Fiona J; Hanif, Sahira

    2011-06-01

    To compare the range of ocular rotations measured by Octopus versus Goldmann perimetry. Forty subjects (20 controls and 20 patients with impaired ocular movements) were prospectively recruited, age range 21-83 years. Range of uniocular rotations was measured in six vectors corresponding to extraocular muscle actions: 0°, 67°, 141°, 180°, 216°, 293°. Fields of binocular single vision were assessed at 30° intervals. Vector measurements were utilised to calculate an area score for the field of uniocular rotations or binocular field of single vision. Two test speeds were used for Octopus testing: 3°/second and 10°/second. Test duration was two thirds quicker for Octopus 10°/second than for 3°/second stimulus speed, and slightly quicker for Goldmann. Mean area for control subjects for uniocular field was 7910.45 degrees² for Goldmann, 7032.14 for Octopus 3°/second and 7840.66 for Octopus 10°/second. Mean area for patient subjects for the right uniocular field was 8567.21 degrees² for Goldmann, 5906.72 for Octopus 3°/second and 8806.44 for Octopus 10°/second. Mean area for the left uniocular field was 8137.49 degrees² for Goldmann, 8127.9 for Octopus 3°/second and 8950.54 for Octopus 10°/second. Range of measured rotation was significantly larger for Octopus 10°/second speed. Our results suggest that the Octopus perimeter is an acceptable alternative method of assessment for uniocular ductions and binocular field of single vision. Speed of stimulus significantly alters test duration for Octopus perimetry. Comparisons of results from both perimeters show that quantitative measurements differ, although qualitatively the results are similar. Differences per mean vectors were less than 5° (within clinically accepted variances) for both controls and patients when comparing Goldmann to Octopus 10°/second speed. However, differences were almost 10° for the patient group when comparing Goldmann to Octopus 3°/second speed. Thus, speed of stimulus must be considered

  8. Interactive stereo electron microscopy enhanced with virtual reality

    International Nuclear Information System (INIS)

    Bethel, E.Wes; Bastacky, S.Jacob; Schwartz, Kenneth S.

    2001-01-01

    An analytical system is presented that is used to take measurements of objects perceived in stereo image pairs obtained from a scanning electron microscope (SEM). Our system operates by presenting a single stereo view that contains stereo image data obtained from the SEM, along with geometric representations of two types of virtual measurement instruments, a "protractor" and a "caliper". The measurements obtained from this system are an integral part of a medical study evaluating surfactant, a liquid coating the inner surface of the lung which makes possible the process of breathing. Measurements of the curvature and contact angle of submicron diameter droplets of a fluorocarbon deposited on the surface of airways are performed in order to determine surface tension of the air/liquid interface. This approach has been extended to a microscopic level from the techniques of traditional surface science by measuring submicrometer rather than millimeter diameter droplets, as well as the lengths and curvature of cilia responsible for movement of the surfactant, the airway's protective liquid blanket. An earlier implementation of this approach for taking angle measurements from objects perceived in stereo image pairs using a virtual protractor is extended in this paper to include distance measurements and to use a unified view model. The system is built around a unified view model that is derived from microscope-specific parameters, such as focal length, visible area and magnification. The unified view model ensures that the underlying view models and resultant binocular parallax cues are consistent between synthetic and acquired imagery. When the view models are consistent, it is possible to take measurements of features that are not constrained to lie within the projection plane. The system is first calibrated using non-clinical data of known size and resolution. Using the SEM, stereo image pairs of grids and spheres of known resolution are created to calibrate the

  9. Wide-Baseline Stereo-Based Obstacle Mapping for Unmanned Surface Vehicles

    Science.gov (United States)

    Mou, Xiaozheng; Wang, Han

    2018-01-01

    This paper proposes a wide-baseline stereo-based static obstacle mapping approach for unmanned surface vehicles (USVs). The proposed approach eliminates the complicated calibration work and the bulky rig in our previous binocular stereo system, and raises the ranging ability from 500 to 1000 m with an even larger baseline obtained from the motion of USVs. Integrating a monocular camera with GPS and compass information in this proposed system, the world locations of the detected static obstacles are reconstructed while the USV is traveling, and an obstacle map is then built. To achieve more accurate and robust performance, multiple pairs of frames are leveraged to synthesize the final reconstruction results in a weighting model. Experimental results based on our own dataset demonstrate the high efficiency of our system. To the best of our knowledge, we are the first to address the task of wide-baseline stereo-based obstacle mapping in a maritime environment. PMID:29617293
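
    The geometric core of such a system is triangulation from two widely separated camera poses. A hedged Python/OpenCV sketch is shown below; the poses are assumed to come from GPS/compass, the matched pixel pair from elsewhere in the pipeline, and the function name is invented for this example (the paper's multi-frame weighting model is not reproduced).

        import cv2
        import numpy as np

        def triangulate_obstacle(K, pose1, pose2, px1, px2):
            """pose1/pose2: 3x4 [R|t] world-to-camera matrices; px1/px2: (x, y) pixels."""
            P1 = K @ pose1
            P2 = K @ pose2
            pts1 = np.asarray(px1, dtype=np.float64).reshape(2, 1)
            pts2 = np.asarray(px2, dtype=np.float64).reshape(2, 1)
            X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1 point
            return (X_h[:3] / X_h[3]).ravel()                 # world coordinates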

  10. A quantitative measurement of binocular color fusion limit for different disparities

    Science.gov (United States)

    Chen, Zaiqing; Shi, Junsheng; Tai, Yonghan; Huang, Xiaoqiao; Yun, Lijun; Zhang, Chao

    2018-01-01

    Color asymmetry is a common phenomenon in stereoscopic display systems and can cause visual fatigue or discomfort. When the color difference between the left and right eyes exceeds a threshold value, termed the binocular color fusion limit, color rivalry occurs. The most important information provided by stereoscopic displays is the depth perception produced by disparity. Because the stereo-pair stimuli are presented separately to the two eyes with a disparity, and the two monocular stimuli differ in color while sharing an iso-luminance polarity, stereopsis and color rivalry can coexist. In this paper, we conducted an experiment to measure the color fusion limit at different disparity levels; in particular, we examined how the magnitude and sign of disparity affect the binocular color fusion limit that yields a fused, stable stereoscopic percept. The binocular color fusion limit was measured at five levels of disparity (0, ±60, and ±120 arcmin) for a sample color point selected from the 1976 CIE u'v' chromaticity diagram. The experimental results showed that the fusion limit for the sample point varied with the level and sign of disparity. Interestingly, the fusion limit increased as the disparity decreased in the crossed-disparity direction (negative sign), whereas there was almost no change in the uncrossed-disparity direction (positive sign). We found that color fusion was more difficult to achieve in the crossed-disparity direction than in the uncrossed-disparity direction.

  11. The STEREO Mission

    CERN Document Server

    2008-01-01

    The STEREO mission uses twin heliospheric orbiters to track solar disturbances from their initiation to 1 AU. This book documents the mission, its objectives, the spacecraft that execute it and the instruments that provide the measurements, both remote sensing and in situ. This mission promises to unlock many of the mysteries of how the Sun produces what has come to be known as space weather.

  12. The effect of Bangerter filters on binocular function in observers with amblyopia.

    Science.gov (United States)

    Chen, Zidong; Li, Jinrong; Thompson, Benjamin; Deng, Daming; Yuan, Junpeng; Chan, Lily; Hess, Robert F; Yu, Minbin

    2014-10-28

    We assessed whether partial occlusion of the nonamblyopic eye with Bangerter filters can immediately reduce suppression and promote binocular summation of contrast in observers with amblyopia. In Experiment 1, suppression was measured for 22 observers (mean age, 20 years; range, 14-32 years; 10 females) with strabismic or anisometropic amblyopia and 10 controls using our previously established "balance point" protocol. Measurements were made at baseline and with 0.6-, 0.4-, and 0.2-strength Bangerter filters placed over the nonamblyopic/dominant eye. In Experiment 2, psychophysical measurements of contrast sensitivity were made under binocular and monocular viewing conditions for 25 observers with anisometropic amblyopia (mean age, 17 years; range, 11-28 years; 14 females) and 22 controls (mean age, 24 years; range, 22-27; 12 female). Measurements were made at baseline, and with 0.4- and 0.2-strength Bangerter filters placed over the nonamblyopic/dominant eye. Binocular summation ratios (BSRs) were calculated at baseline and with Bangerter filters in place. Experiment 1: Bangerter filters reduced suppression in observers with amblyopia and induced suppression in controls (P = 0.025). The 0.2-strength filter eliminated suppression in observers with amblyopia and this was not a visual acuity effect. Experiment 2: Bangerter filters were able to induce normal levels of binocular contrast summation in the group of observers with anisometropic amblyopia for a stimulus with a spatial frequency of 3 cycles per degree (cpd, P = 0.006). The filters reduced binocular summation in controls. Bangerter filters can immediately reduce suppression and promote binocular summation for mid/low spatial frequencies in observers with amblyopia. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  13. Temporal Integration of Auditory Stimulation and Binocular Disparity Signals

    Directory of Open Access Journals (Sweden)

    Marina Zannoli

    2011-10-01

    Full Text Available Several studies using visual objects defined by luminance have reported that the auditory event must be presented 30 to 40 ms after the visual stimulus to perceive audiovisual synchrony. In the present study, we used visual objects defined only by their binocular disparity. We measured the optimal latency between visual and auditory stimuli for the perception of synchrony using a method introduced by Moutoussis & Zeki (1997). Visual stimuli were defined either by luminance and disparity or by disparity only. They moved either back and forth between 6 and 12 arcmin or from left to right at a constant disparity of 9 arcmin. This visual modulation was presented together with an amplitude-modulated 500 Hz tone. Both modulations were sinusoidal (frequency: 0.7 Hz). We found no difference between 2D and 3D motion for luminance stimuli: a 40 ms auditory lag was necessary for perceived synchrony. Surprisingly, even though stereopsis is often thought to be slow, we found a similar optimal latency in the disparity 3D motion condition (55 ms). However, when participants had to judge simultaneity for disparity 2D motion stimuli, it led to larger latencies (170 ms), suggesting that stereo motion detectors are poorly suited to track 2D motion.

  14. [Binocular fusion method for prevention of myopia].

    Science.gov (United States)

    Xu, G D

    1989-03-01

    When a distant object is viewed with both eyes, convergence and accommodation relax, accompanied by binocular fusion. Using this phenomenon, a binocular fusion method was designed in which the distance between two targets equals the distance between the two visual lines when looking at a far object. While the images of the targets are fused, accommodation and convergence relax concomitantly; in this way pseudomyopia can be corrected and myopia prevented. Eye-muscle exercises based on binocular fusion not only moved the far point farther away but also brought the near point closer. Skiascopic (retinoscopic) examination carried out during binocular fusion showed that the degree of accommodative relaxation was 97.9% of that obtained when looking at a distant object. These results indicate that the binocular fusion method is highly effective for the prevention of myopia. The method is simple and feasible, conforms to visual physiology, and thus can be widely adopted.

  15. Importância da visão binocular no desempenho da leitura em crianças

    OpenAIRE

    Lança, Carla Costa

    2012-01-01

    Study objectives: (1) to identify the eye movements involved in reading and (2) to describe the influence of binocular vision anomalies on reading performance in children. Methodology: descriptive study based on a literature review. A search was carried out for references published up to 2011, accessible through PubMed, Science Direct and additional sources. The following terms were used in the search: binocular vision AND reading; ocular mo...

  16. Visual and binocular status in elementary school children with a reading problem.

    Science.gov (United States)

    Christian, Lisa W; Nandakumar, Krithika; Hrynchak, Patricia K; Irving, Elizabeth L

    2017-11-21

    This descriptive study provides a summary of the binocular anomalies seen in elementary school children identified with reading problems. A retrospective chart review was conducted of all children identified with reading problems and seen at the University of Waterloo Optometry Clinic from September 2012 to June 2013. Files of 121 children (mean age 8.6 years, range 6-14 years) were reviewed. No significant refractive error was found in 81% of children. Five and eight children were identified as strabismic at distance and near, respectively. Phoria testing revealed that 90% and 65% of patients had normal distance and near phorias, respectively. Near point of convergence (NPC) was <5 cm in 68% of children, and 77% had stereoacuity of ≤40 seconds of arc. More than 50% of the children had normal fusional vergence ranges, except for the near positive fusional vergence (base-out) break (46%). Tests of accommodation showed that 91% of children were normal for binocular facility, and approximately 70% of children had the expected accuracy of accommodation. The findings indicate that some children with an identified reading problem also present with abnormal binocular test results compared with published normal values. Further investigation should be performed to examine the relationship between binocular vision function and reading performance. Crown Copyright © 2017. Published by Elsevier España, S.L.U. All rights reserved.

  17. Binocular contrast-gain control for natural scenes: Image structure and phase alignment.

    Science.gov (United States)

    Huang, Pi-Chun; Dai, Yu-Ming

    2018-05-01

    In the context of natural scenes, we applied the pattern-masking paradigm to investigate how image structure and phase alignment affect contrast-gain control in binocular vision. We measured the discrimination thresholds of bandpass-filtered natural-scene images (targets) under various types of pedestals. Our first experiment had four pedestal types: bandpass-filtered pedestals, unfiltered pedestals, notch-filtered pedestals (which enabled removal of the spatial frequency), and misaligned pedestals (which involved rotation of unfiltered pedestals). Our second experiment featured six types of pedestals: bandpass-filtered, unfiltered, and notch-filtered pedestals, and the corresponding phase-scrambled pedestals. The thresholds were compared for monocular, binocular, and dichoptic viewing configurations. The bandpass-filtered pedestal and unfiltered pedestals showed classic dipper shapes; the dipper shapes of the notch-filtered, misaligned, and phase-scrambled pedestals were weak. We adopted a two-stage binocular contrast-gain control model to describe our results. We deduced that the phase-alignment information influenced the contrast-gain control mechanism before the binocular summation stage and that the phase-alignment information and structural misalignment information caused relatively strong divisive inhibition in the monocular and interocular suppression stages. When the pedestals were phase-scrambled, the elimination of the interocular suppression processing was the most convincing explanation of the results. Thus, our results indicated that both phase-alignment information and similar image structures cause strong interocular suppression. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. The effect of image position on the Independent Components of natural binocular images.

    Science.gov (United States)

    Hunter, David W; Hibbard, Paul B

    2018-01-11

    Human visual performance degrades substantially as the angular distance from the fovea increases. This decrease in performance is found for both binocular and monocular vision. Although analysis of the statistics of natural images has provided significant insights into human visual processing, little research has focused on the statistical content of binocular images at eccentric angles. We applied Independent Component Analysis to rectangular image patches cut from locations within binocular images corresponding to different degrees of eccentricity. The distribution of components learned from the varying locations was examined to determine how these distributions varied across eccentricity. We found a general trend towards a broader spread of horizontal and vertical position disparity tunings in eccentric regions compared to the fovea, with the horizontal spread more pronounced than the vertical spread. Eccentric locations above the centroid show a strong bias towards far-tuned components, whereas eccentric locations below the centroid show a strong bias towards near-tuned components. These distributions exhibit substantial similarities with physiological measurements in V1; however, in common with previous research, we also observe important differences, in particular distributions of binocular phase disparity that do not match physiology.
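
    As a rough illustration of the general approach (not the authors' exact pipeline), the sketch below cuts corresponding left/right patches, stacks each pair into a single vector and learns independent components with scikit-learn's FastICA; the patch size, patch count and component count are arbitrary choices for the example.

        import numpy as np
        from sklearn.decomposition import FastICA

        def binocular_ica(left_img, right_img, patch=16, n_patches=5000,
                          n_components=100, seed=0):
            rng = np.random.default_rng(seed)
            h, w = left_img.shape
            X = np.empty((n_patches, 2 * patch * patch))
            for i in range(n_patches):
                y = rng.integers(0, h - patch)
                x = rng.integers(0, w - patch)
                pair = np.concatenate([left_img[y:y + patch, x:x + patch].ravel(),
                                       right_img[y:y + patch, x:x + patch].ravel()])
                X[i] = pair - pair.mean()
            ica = FastICA(n_components=n_components, max_iter=500)
            ica.fit(X)
            # each row is a binocular basis function: first half left eye, second half right
            return ica.components_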

  19. Hearing symptoms personal stereos.

    Science.gov (United States)

    da Luz, Tiara Santos; Borja, Ana Lúcia Vieira de Freitas

    2012-04-01

Introduction: Practical and portable, personal stereos have become almost indispensable everyday accessories. Studies show that portable music players can cause long-term hearing damage in those who listen to music at high volume for prolonged periods. Objective: to determine the prevalence of auditory symptoms in users of personal stereos and to characterise their habits of use. Method: prospective, cross-sectional observational study carried out in three educational institutions in the city of Salvador, BA, two from the public system and one from the private system. The questionnaire was answered by 400 students of both sexes, aged between 14 and 30 years, who reported the habit of using personal stereos. Results: the most prevalent symptoms were hyperacusis (43.5%), aural fullness (30.5%) and tinnitus (27.5%), with tinnitus being the symptom most frequently present in the youngest participants. Regarding daily habits: 62.3% reported frequent use, 57% listened at high intensities and 34% for prolonged periods. An inverse relationship was found between exposure time and age group (p = 0.000), and a direct relationship with the prevalence of tinnitus. Conclusion: although they admit to knowing the damage that exposure to high-intensity sound can cause to hearing, the daily habits of these young people show inappropriate use of portable stereos, characterised by long exposure times, high intensities, frequent use and a preference for insert earphones. The high prevalence of symptoms after use suggests an increased risk to the hearing of these young people.

  20. Hearing symptoms personal stereos

    Directory of Open Access Journals (Sweden)

    Tiara Santos da Luz

    2012-01-01

    Full Text Available Introduction: Practical and portable, personal stereos have become almost indispensable everyday accessories. Studies show that portable music players can cause long-term hearing damage in those who listen to music at high volume for prolonged periods. Objective: to determine the prevalence of auditory symptoms in users of personal stereos and to characterise their habits of use. Method: prospective, cross-sectional observational study carried out in three educational institutions in the city of Salvador, BA, two from the public system and one from the private system. The questionnaire was answered by 400 students of both sexes, aged between 14 and 30 years, who reported the habit of using personal stereos. Results: the most prevalent symptoms were hyperacusis (43.5%), aural fullness (30.5%) and tinnitus (27.5%), with tinnitus being the symptom most frequently present in the youngest participants. Regarding daily habits: 62.3% reported frequent use, 57% listened at high intensities and 34% for prolonged periods. An inverse relationship was found between exposure time and age group (p = 0.000), and a direct relationship with the prevalence of tinnitus. Conclusion: although they admit to knowing the damage that exposure to high-intensity sound can cause to hearing, the daily habits of these young people show inappropriate use of portable stereos, characterised by long exposure times, high intensities, frequent use and a preference for insert earphones. The high prevalence of symptoms after use suggests an increased risk to the hearing of these young people.

  1. Stereo-panoramic Data

    KAUST Repository

    Cutchin, Steve

    2013-01-01

    Systems and methods for automatically generating three-dimensional panoramic images for use in various virtual reality settings are disclosed. One embodiment of the system includes a stereo camera capture device (SCD), a programmable camera controller (PCC) that rotates, orients, and controls the SCD, a robotic maneuvering platform (RMP), and a path and adaptation controller (PAC). In that embodiment, the PAC determines the movement of the system based on an original desired path and input gathered from the SCD during an image capture process.

  2. Stereo-panoramic Data

    KAUST Repository

    Cutchin, Steve

    2013-03-07

    Systems and methods for automatically generating three-dimensional panoramic images for use in various virtual reality settings are disclosed. One embodiment of the system includes a stereo camera capture device (SCD), a programmable camera controller (PCC) that rotates, orients, and controls the SCD, a robotic maneuvering platform (RMP), and a path and adaptation controller (PAC). In that embodiment, the PAC determines the movement of the system based on an original desired path and input gathered from the SCD during an image capture process.

  3. Binocular iPad Game vs Patching for Treatment of Amblyopia in Children: A Randomized Clinical Trial.

    Science.gov (United States)

    Kelly, Krista R; Jost, Reed M; Dao, Lori; Beauchamp, Cynthia L; Leffler, Joel N; Birch, Eileen E

    2016-12-01

    Fellow eye patching has long been the standard treatment for amblyopia, but it does not always restore 20/20 vision or teach the eyes to work together. Amblyopia can be treated with binocular games that rebalance contrast between the eyes so that a child may overcome suppression. However, it is unclear whether binocular treatment is comparable to patching in treating amblyopia. To assess the effectiveness of a binocular iPad (Apple Inc) adventure game as amblyopia treatment and compare this binocular treatment with patching, the current standard of care. This investigation was a randomized clinical trial with a crossover design at a nonprofit eye research institute. Between February 20, 2015, and January 4, 2016, a total of 28 patients were enrolled in the study, with 14 randomized to binocular game treatment and 14 to patching treatment. Binocular game and patching as amblyopia treatments. The primary outcome was change in amblyopic eye best-corrected visual acuity (BCVA) at the 2-week visit. Secondary outcomes were change in stereoacuity and suppression at the 2-week visit and change in BCVA at the 4-week visit. Among 28 children, the mean (SD) age at baseline was 6.7 (1.4) years (age range, 4.6-9.5 years), and 7 (25%) were female. At baseline, the mean (SD) amblyopic eye BCVA was 0.48 (0.14) logMAR (approximately 20/63; range, 0.3-0.8 logMAR [20/40 to 20/125]), with 14 children randomized to the binocular game and 14 to patching for 2 weeks. At the 2-week visit, improvement in amblyopic eye BCVA was greater with the binocular game compared with patching, with a mean (SD) improvement of 0.15 (0.08) logMAR (mean [SD], 1.5 [0.8] lines) vs 0.07 (0.08) logMAR (mean [SD], 0.7 [0.8] line; P = .02) after 2 weeks of treatment. These improvements from baseline were significant for the binocular game (mean [SD] improvement, 1.5 [0.8] lines; P suppression improved from baseline at the 2-week visit for the binocular game (mean [SD], 4.82 [2.82] vs 3.24 [2.87]; P

  4. Stereo Hysteresis Revisited

    Directory of Open Access Journals (Sweden)

    Christopher Tyler

    2012-05-01

    Full Text Available One of the most fascinating phenomena in stereopsis is the profound hysteresis effect reported by Fender and Julesz (1967), in which the depth percept persisted with increasing disparity long past the point at which depth was recovered with decreasing disparity. To control retinal disparity without vergence eye movements, they stabilized the stimuli on the retinas with an eye tracker. I now report that stereo hysteresis can be observed directly in printed stereograms simply by rotating the image. As the stereo image rotates, the horizontal disparities rotate to become vertical, then horizontal with inverted sign, and then vertical again before returning to the original orientation. The depth shows an interesting popout effect, almost as though the depth was turning on and off rapidly, despite the inherently sinusoidal change in the horizontal disparity vector. This stimulus was generated electronically in a circular format so that the random-dot field could be dynamically replaced, eliminating any cue to cyclorotation. Noise density was scaled with eccentricity to fade out the stimulus near fixation. For both the invariant and the dynamic noise, profound hysteresis of several seconds delay was found in six observers. This was far longer than the reaction time to respond to changes in disparity, which was less than a second. Purely horizontal modulation of disparity to match the horizontal vector component of the disparity rotation did not show the popout effect, which thus seems to be a function of the interaction between horizontal and vertical disparities and may be attributable to depth interpolation processes.

  5. Person and gesture tracking with smart stereo cameras

    Science.gov (United States)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control including: video gaming, location based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo cameras platform, describe the person tracking and gesture tracking systems

  6. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Full Text Available Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most or all of them are time consuming and labor intensive. This research seeks to automate the most labor intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses embedded LEDs in the checkerboard pattern to act as active fiducials. Images are captured of the checkerboard with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the location of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation to camera calibration procedures may finally penetrate the barriers to the use of calibration in practice.
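
    A hedged sketch of the automated corner-finding step described above is given below in Python/OpenCV: subtracting the LED-off frame from the LED-on frame leaves only the active fiducials, whose centroids replace the manual corner clicks. The threshold value and function name are assumptions, and the subsequent calibration (e.g. with cv2.calibrateCamera and cv2.stereoCalibrate) proceeds as usual.

        import cv2
        import numpy as np

        def locate_led_corners(img_on, img_off, thresh=40):
            """img_on/img_off: single-channel 8-bit frames with LEDs on and off."""
            diff = cv2.absdiff(img_on, img_off)              # only the LEDs change
            _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
            n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
            # skip label 0 (background); keep the four largest blobs as fiducials
            order = np.argsort(stats[1:, cv2.CC_STAT_AREA])[::-1][:4] + 1
            return centroids[order]                          # (4, 2) corner estimates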

  7. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo

    Directory of Open Access Journals (Sweden)

    Liang Lu

    2018-03-01

    Full Text Available Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.

  8. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo

    Science.gov (United States)

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-01-01

    Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods. PMID:29498703
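
    For orientation, the per-pixel model the two records above refer to can be sketched as follows in Python/NumPy: each RGB triplet is treated as I = M·n, where the 3x3 matrix M tangles light directions, spectra, albedo and camera response; once an estimate of M is available (this is where the CNN-predicted coarse normals help), the normal follows from a 3x3 solve. The function name and the assumption of a single known M are illustrative simplifications, not the papers' method.

        import numpy as np

        def normals_from_rgb(rgb, M):
            """rgb: (h, w, 3) image; M: (3, 3) effective mixing matrix (assumed known)."""
            h, w, _ = rgb.shape
            n = np.linalg.solve(M, rgb.reshape(-1, 3).T)        # (3, h*w)
            n /= np.maximum(np.linalg.norm(n, axis=0), 1e-8)    # normalise to unit length
            return n.T.reshape(h, w, 3)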

  9. Range and variability in gesture-based interactions with medical images : do non-stereo versus stereo visualizations elicit different types of gestures?

    NARCIS (Netherlands)

    Beurden, van M.H.P.H.; IJsselsteijn, W.A.

    2010-01-01

    The current paper presents a study into the range and variability of natural gestures when interacting with medical images, using traditional non-stereo and stereoscopic modes of presentation. The results have implications for the design of computer-vision algorithms developed to support natural

  10. Photometric stereo endoscopy.

    Science.gov (United States)

    Parot, Vicente; Lim, Daryl; González, Germán; Traverso, Giovanni; Nishioka, Norman S; Vakoc, Benjamin J; Durr, Nicholas J

    2013-07-01

    While color video endoscopy has enabled wide-field examination of the gastrointestinal tract, it often misses or incorrectly classifies lesions. Many of these missed lesions exhibit characteristic three-dimensional surface topographies. An endoscopic system that adds topographical measurements to conventional color imagery could therefore increase lesion detection and improve classification accuracy. We introduce photometric stereo endoscopy (PSE), a technique which allows high spatial frequency components of surface topography to be acquired simultaneously with conventional two-dimensional color imagery. We implement this technique in an endoscopic form factor and demonstrate that it can acquire the topography of small features with complex geometries and heterogeneous optical properties. PSE imaging of ex vivo human gastrointestinal tissue shows that surface topography measurements enable differentiation of abnormal shapes from surrounding normal tissue. Together, these results confirm that the topographical measurements can be obtained with relatively simple hardware in an endoscopic form factor, and suggest the potential of PSE to improve lesion detection and classification in gastrointestinal imaging.

  11. Development of radiation hardened robot for nuclear facility - Development of real-time stereo object tracking system using the optical correlator

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eun Soo; Lee, S. H.; Lee, J. S. [Kwangwoon University, Seoul (Korea)

    2000-03-01

    Object tracking based on the centroid method used in the KAERI-M1 stereo robot vision system, developed at the Atomic Research Center, is highly sensitive to variations in the target's illumination and, because it cannot take the surrounding background into account, its application under real conditions is very limited. The correlation method can provide a relatively stable object tracker in the presence of noise, but the computational load of digital image correlation is too large for real-time operation. Developing an optical-correlation-based stereo object tracking system that uses high-speed optical information processing will therefore make a stable real-time stereo object tracking system, and a practical stereo robot vision system for the nuclear industry, feasible. This research develops a real-time stereo object tracking algorithm using an optical correlation system, applicable to the Atomic Research Center's KAERI-M1 stereo vision robot intended for remote operations in nuclear facilities; it refines the stereo disparity using a real-time optical correlation technique and applies the stereo object tracking algorithm to the KAERI-M1 stereo robot. 19 refs., 45 figs., 2 tabs. (Author)

  12. Binocular treatment of amblyopia using videogames (BRAVO): study protocol for a randomised controlled trial.

    Science.gov (United States)

    Guo, Cindy X; Babu, Raiju J; Black, Joanna M; Bobier, William R; Lam, Carly S Y; Dai, Shuan; Gao, Tina Y; Hess, Robert F; Jenkins, Michelle; Jiang, Yannan; Kowal, Lionel; Parag, Varsha; South, Jayshree; Staffieri, Sandra Elfride; Walker, Natalie; Wadham, Angela; Thompson, Benjamin

    2016-10-18

    Amblyopia is a common neurodevelopmental disorder of vision that is characterised by visual impairment in one eye and compromised binocular visual function. Existing evidence-based treatments for children include patching the nonamblyopic eye to encourage use of the amblyopic eye. Currently there are no widely accepted treatments available for adults with amblyopia. The aim of this trial is to assess the efficacy of a new binocular, videogame-based treatment for amblyopia in older children and adults. We hypothesise that binocular treatment will significantly improve amblyopic eye visual acuity relative to placebo treatment. The BRAVO study is a double-blind, randomised, placebo-controlled multicentre trial to assess the effectiveness of a novel videogame-based binocular treatment for amblyopia. One hundred and eight participants aged 7 years or older with anisometropic and/or strabismic amblyopia (defined as ≥0.2 LogMAR interocular visual acuity difference, ≥0.3 LogMAR amblyopic eye visual acuity and no ocular disease) will be recruited via ophthalmologists, optometrists, clinical record searches and public advertisements at five sites in New Zealand, Canada, Hong Kong and Australia. Eligible participants will be randomised by computer in a 1:1 ratio, with stratification by age group: 7-12, 13-17 and 18 years and older. Participants will be randomised to receive 6 weeks of active or placebo home-based binocular treatment. Treatment will be in the form of a modified interactive falling-blocks game, implemented on a 5th generation iPod touch device viewed through red/green anaglyphic glasses. Participants and those assessing outcomes will be blinded to group assignment. The primary outcome is the change in best-corrected distance visual acuity in the amblyopic eye from baseline to 6 weeks post randomisation. Secondary outcomes include distance and near visual acuity, stereopsis, interocular suppression, angle of strabismus (where applicable) measured at

  13. What is stereoscopic vision good for?

    Science.gov (United States)

    Read, Jenny C. A.

    2015-03-01

    Stereo vision is a resource-intensive process. Nevertheless, it has evolved in many animals including mammals, birds, amphibians and insects. It must therefore convey significant fitness benefits. It is often assumed that the main benefit is improved accuracy of depth judgments, but camouflage breaking may be as important, particularly in predatory animals. In humans, for the last 150 years, stereo vision has been turned to a new use: helping us reproduce visual reality for artistic purposes. By recreating the different views of a scene seen by the two eyes, stereo achieves unprecedented levels of realism. However, it also has some unexpected effects on viewer experience. The disruption of established mechanisms for interpreting pictures may be one reason why some viewers find stereoscopic content disturbing. Stereo vision also has uses in ophthalmology. Clinical stereoacuity tests are used in the management of conditions such as strabismus and amblyopia as well as vision screening. Stereoacuity can reveal the effectiveness of therapy and even predict long-term outcomes post surgery. Yet current clinical stereo tests fall far short of the accuracy and precision achievable in the lab. At Newcastle University, we are exploiting the recent availability of autostereo 3D tablet computers to design a clinical stereotest app in the form of a game suitable for young children. Our goal is to enable quick, accurate and precise stereoacuity measures which will enable clinicians to obtain better outcomes for children with visual disorders.

  14. STEREO-IMPACT Education and Public Outreach: Sharing STEREO Science

    Science.gov (United States)

    Craig, N.; Peticolas, L. M.; Mendez, B. J.

    2005-12-01

    The Solar TErrestrial RElations Observatory (STEREO) is scheduled for launch in Spring 2006. STEREO will study the Sun with two spacecraft in orbit around it and on either side of Earth. The primary science goal is to understand the nature and consequences of Coronal Mass Ejections (CMEs). Despite their importance, scientists don't fully understand the origin and evolution of CMEs, nor their structure or extent in interplanetary space. STEREO's unique 3-D images of the structure of CMEs will enable scientists to determine their fundamental nature and origin. We will discuss the Education and Public Outreach (E/PO) program for the In-situ Measurement of Particles And CME Transients (IMPACT) suite of instruments aboard the two spacecraft and give examples of upcoming activities, including NASA's Sun-Earth Day events, which are scheduled to coincide with a total solar eclipse in March. This event offers a good opportunity to engage the public in STEREO science, because an eclipse allows one to see the solar corona from where CMEs erupt. STEREO's connection to space weather lends itself to close partnerships with the Sun-Earth Connection Education Forum (SECEF), The Exploratorium, and UC Berkeley's Center for New Music and Audio Technologies to develop informal science programs for science centers, museum visitors, and the public in general. We will also discuss our teacher workshops locally in California and also at annual conferences such as those of the National Science Teachers Association. Such workshops often focus on magnetism and its connection to CMEs and Earth's magnetic field, leading to the questions STEREO scientists hope to answer. The importance of partnerships and coordination in working in an instrument E/PO program that is part of a bigger NASA mission with many instrument suites and many PIs will be emphasized. The Education and Outreach Program is funded by NASA's SMD.

  15. Binocular depth processing in the ventral visual pathway.

    Science.gov (United States)

    Verhoef, Bram-Ernst; Vogels, Rufin; Janssen, Peter

    2016-06-19

    One of the most powerful forms of depth perception capitalizes on the small relative displacements, or binocular disparities, in the images projected onto each eye. The brain employs these disparities to facilitate various computations, including sensori-motor transformations (reaching, grasping), scene segmentation and object recognition. In accordance with these different functions, disparity activates a large number of regions in the brain of both humans and monkeys. Here, we review how disparity processing evolves along different regions of the ventral visual pathway of macaques, emphasizing research based on both correlational and causal techniques. We will discuss the progression in the ventral pathway from a basic absolute disparity representation to a more complex three-dimensional shape code. We will show that, in the course of this evolution, the underlying neuronal activity becomes progressively more bound to the global perceptual experience. We argue that these observations most probably extend beyond disparity processing per se, and pertain to object processing in the ventral pathway in general. We conclude by posing some important unresolved questions whose answers may significantly advance the field, and broaden its scope.This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Author(s).

  16. Synchronization of binocular motion parameters optoelectronic measurement system

    Science.gov (United States)

    Zhang, Lingfei; Ye, Dong; Che, Rensheng; Chen, Gang

    2008-10-01

    The synchronization between high-speed digital cameras and computers is very important for a binocular vision system based on light-weighted passive IR reflective markers and an IR LED array PCB board, which is often used to measure the 3-D motion parameters of a rocket motor. In order to solve this problem, a comparison of the existing approaches to camera synchronization in the published literature was conducted. The advantages and disadvantages of the currently used methods were illustrated and their suitable applications were discussed. A new method, which uses self-made hardware to reset the cameras and software to trigger the image acquisition boards, is provided. The self-made hardware sends a TTL signal to the two image acquisition boards once per second. The TTL signal is used as the PRIN signal to reset the two cameras and the two image acquisition boards, and then the two image acquisition boards send the same EXSYNC signal to the two cameras. In this way, the two cameras are synchronized to expose and capture images at the same time. The test results indicate that the new approach designed in this paper can meet the demand of image acquisition at a speed of 200 f/s, with synchronization accuracy at the microsecond level.

  17. More superimposition for contrast-modulated than luminance-modulated stimuli during binocular rivalry.

    Science.gov (United States)

    Skerswetat, Jan; Formankiewicz, Monika A; Waugh, Sarah J

    2018-01-01

    Luminance-modulated noise (LM) and contrast-modulated noise (CM) gratings were presented with interocularly correlated, uncorrelated and anti-correlated binary noise to investigate their contributions to mixed percepts, specifically piecemeal and superimposition, during binocular rivalry. Stimuli were sine-wave gratings of 2 c/deg presented within 2 deg circular apertures. The LM stimulus contrast was 0.1 and the CM stimulus modulation depth was 1.0, equating to approximately 5 and 7 times detection threshold, respectively. Twelve 45 s trials, per noise configuration, were carried out. Fifteen participants with normal vision indicated via button presses whether an exclusive, piecemeal or superimposed percept was seen. For all noise conditions LM stimuli generated more exclusive visibility, and lower proportions of superimposition. CM stimuli led to greater proportions and longer periods of superimposition. For both stimulus types, correlated interocular noise generated more superimposition than did anti- or uncorrelated interocular noise. No significant effect of stimulus type (LM vs CM) or noise configuration (correlated, uncorrelated, anti-correlated) on piecemeal perception was found. Exclusive visibility was greater in proportion, and perceptual changes more numerous, during binocular rivalry for CM stimuli when interocular noise was not correlated. This suggests that mutual inhibition, initiated by non-correlated noise CM gratings, occurs between neurons processing luminance noise (first-order component), as well as those processing gratings (second-order component). Therefore, first- and second-order components can contribute to overall binocular rivalry responses. We suggest the addition of a new well to the current energy landscape model for binocular rivalry that takes superimposition into account. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Stereo Disparity through Cost Aggregation with Guided Filter

    Directory of Open Access Journals (Sweden)

    Pauline Tan

    2014-10-01

    Full Text Available Estimating the depth, or equivalently the disparity, of a stereo scene is a challenging problem in computer vision. The method proposed by Rhemann et al. in 2011 is based on a filtering of the cost volume, which gives for each pixel and for each hypothesized disparity a cost derived from pixel-by-pixel comparison. The filtering is performed by the guided filter proposed by He et al. in 2010. It computes a weighted local average of the costs. The weights are such that similar pixels tend to have similar costs. Eventually, a winner-take-all strategy selects the disparity with the minimal cost for each pixel. Non-consistent labels according to left-right consistency are rejected; a densification step can then be launched to fill the disparity map. The method can be used to solve other labeling problems (optical flow, segmentation) but this article focuses on the stereo matching problem.
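    The cost-volume filtering idea can be sketched in a few lines of Python; here a simple box filter stands in for the guided filter of He et al., and the left-right consistency and densification steps are omitted. The inputs are hypothetical rectified grayscale images:

        import numpy as np
        import cv2

        def disparity_by_cost_filtering(left_gray, right_gray, max_disp=64, radius=9):
            """Simplified cost-volume filtering: absolute-difference costs,
            box-filter aggregation standing in for the guided filter,
            then winner-take-all disparity selection."""
            h, w = left_gray.shape
            left = left_gray.astype(np.float32)
            right = right_gray.astype(np.float32)
            cost = np.full((max_disp, h, w), 255.0, dtype=np.float32)
            for d in range(max_disp):
                diff = np.abs(left[:, d:] - right[:, :w - d])
                # Aggregate costs over a local window (a guided filter in the paper).
                cost[d, :, d:] = cv2.blur(diff, (radius, radius))
            return np.argmin(cost, axis=0).astype(np.float32)  # winner-take-all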

  19. Computer vision as an alternative for collision detection

    OpenAIRE

    Drangsholt, Marius Aarvik

    2015-01-01

    The goal of this thesis was to implement a computer vision system on a low power platform, to see if that could be an alternative for a collision detection system. To achieve this, research into the fundamentals of computer vision was performed, and both hardware and software implementations were carried out. To create the computer vision system, a stereo rig was constructed using low-cost Logitech web cameras and connected to a Raspberry Pi 2 development board. The computer vision library Op...

  20. Hearing damage by personal stereo

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez, Rodrigo Pizarro; Reuter, Karen

    2006-01-01

    The technological development within personal stereo systems, such as MP3 players, iPods etc., has changed music listening habits from home entertainment to everyday and everywhere use. The technology has developed considerably since the introduction of CD walkmen, and high-level low-distortion music is produced by minimal devices. In this paper, the existing literature on effects of personal stereo systems is reviewed, including studies of exposure levels and effects on hearing. Generally, it is found that the levels being used are of concern, which in one study [Acustica/Acta Acustica, 82 (1996) 885-894] is demonstrated to relate to the specific use in situations with high levels of background noise. Another study [Med. J. Austr., 1998; 169: 588-592] demonstrates that the effect of personal stereo is comparable to that of being exposed to noise in industry. The results are discussed in view

  1. How Simultaneous is the Perception of Binocular Depth and Rivalry in Plaid Stimuli?

    Directory of Open Access Journals (Sweden)

    Athena Buckthought

    2012-06-01

    Full Text Available Psychophysical experiments have demonstrated that it is possible to perceive both binocular depth and rivalry in plaids (Buckthought and Wilson 2007, Vision Research 47 2543–2556). In a recent study, we investigated the neural substrates for depth and rivalry processing with these plaid patterns, when either a depth or rivalry task was performed (Buckthought and Mendola 2011, Journal of Vision 11 1–15). However, the extent to which perception of the two stimulus aspects was truly simultaneous remained somewhat unclear. In the present study, we introduced a new task in which subjects were instructed to perform both depth and rivalry tasks concurrently. Subjects were clearly able to perform both tasks at the same time, but with a modest, symmetric drop in performance when compared to either task carried out alone. Subjects were also able to raise performance levels for either task by performing it with a higher priority, with a decline in performance for the other task. The symmetric declines in performance are consistent with the interpretation that the two tasks are equally demanding of attention (Braun and Julesz 1998, Perception & Psychophysics 60 1–23). The results demonstrate the impressive combination of binocular features that supports coincident depth and rivalry in surface perception, within the constraints of presumed orientation and spatial frequency channels.

  2. Optimization on shape curves with application to specular stereo

    KAUST Repository

    Balzer, Jonathan

    2010-01-01

    We state that a one-dimensional manifold of shapes in 3-space can be modeled by a level set function. Finding a minimizer of an independent functional among all points on such a shape curve has interesting applications in computer vision. It is shown how to replace the commonly encountered practice of gradient projection by a projection onto the curve itself. The outcome is an algorithm for constrained optimization, which, as we demonstrate theoretically and numerically, provides some important benefits in stereo reconstruction of specular surfaces. © 2010 Springer-Verlag.

  3. Vision-based mapping with cooperative robots

    Science.gov (United States)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.

  4. Assessing Attention Deficit by Binocular Rivalry.

    Science.gov (United States)

    Amador-Campos, Juan Antonio; Aznar-Casanova, J Antonio; Ortiz-Guerra, Juan Jairo; Moreno-Sánchez, Manuel; Medina-Peña, Antonio

    2015-12-01

    To determine whether the frequency and duration of the periods of suppression of a percept in a binocular rivalry (BR) task can be used to distinguish between participants with ADHD and controls. A total of 122 participants (6-15 years) were assigned to three groups: ADHD-Combined (ADHD-C), ADHD-Predominantly Inattentive (ADHD-I), and controls. They each performed a BR task and two measures were recorded: alternation rate and duration of exclusive dominance periods. ADHD-C group presented fewer alternations and showed greater variability than did the control group; results for the ADHD-I group being intermediate between the two. The duration of dominance periods showed a differential profile: In control group, it remained stable over time, whereas in the clinical groups, it decreased logarithmically as the task progressed. The differences between groups in relation to the BR indicators can be attributed to the activity of involuntary inhibition. © The Author(s) 2013.

  5. Prevalence of remediable disability due to low vision among institutionalised elderly people.

    NARCIS (Netherlands)

    Winter, L.J. de; Hoyng, C.B.; Froeling, P.G.A.M.; Meulendijks, C.F.M.; Wilt, G.J. van der

    2004-01-01

    BACKGROUND: Prevalence of remediable visual disability among institutionalised elderly people, resulting from inappropriate use or non-use of low-vision aids, is reported to be high, but largely rests on anecdotal evidence. OBJECTIVE: To estimate the prevalence of binocular low vision and underlying

  6. Advances in embedded computer vision

    CERN Document Server

    Kisacanin, Branislav

    2014-01-01

    This illuminating collection offers a fresh look at the very latest advances in the field of embedded computer vision. Emerging areas covered by this comprehensive text/reference include the embedded realization of 3D vision technologies for a variety of applications, such as stereo cameras on mobile devices. Recent trends towards the development of small unmanned aerial vehicles (UAVs) with embedded image and video processing algorithms are also examined. The authoritative insights range from historical perspectives to future developments, reviewing embedded implementation, tools, technolog

  7. Matching Cost Filtering for Dense Stereo Correspondence

    Directory of Open Access Journals (Sweden)

    Yimin Lin

    2013-01-01

    Full Text Available Dense stereo correspondence enabling reconstruction of depth information in a scene is of great importance in the field of computer vision. Recently, some local solutions based on matching cost filtering with an edge-preserving filter have been proved to be capable of achieving more accuracy than global approaches. Unfortunately, the computational complexity of these algorithms is quadratically related to the window size used to aggregate the matching costs. The recent trend has been to pursue higher accuracy with greater efficiency in execution. Therefore, this paper proposes a new cost-aggregation module to compute the matching responses for all the image pixels at a set of sampling points generated by a hierarchical clustering algorithm. The complexity of this implementation is linear both in the number of image pixels and the number of clusters. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art local methods in terms of both accuracy and speed. Moreover, performance tests indicate that parameters such as the height of the hierarchical binary tree and the spatial and range standard deviations have a significant influence on time consumption and the accuracy of disparity maps.

  8. Low-cost, portable, robust and high-resolution single-camera stereo-DIC system and its application in high-temperature deformation measurements

    Science.gov (United States)

    Chi, Yuxi; Yu, Liping; Pan, Bing

    2018-05-01

    A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.

  9. Stereo reconstruction from multiperspective panoramas.

    Science.gov (United States)

    Li, Yin; Shum, Heung-Yeung; Tang, Chi-Keung; Szeliski, Richard

    2004-01-01

    A new approach to computing a panoramic (360 degrees) depth map is presented in this paper. Our approach uses a large collection of images taken by a camera whose motion has been constrained to planar concentric circles. We resample regular perspective images to produce a set of multiperspective panoramas and then compute depth maps directly from these resampled panoramas. Our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. The use of multiperspective panoramas eliminates the limited overlap present in the original input images and, thus, problems as in conventional multibaseline stereo can be avoided. Our approach differs from stereo matching of single-perspective panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to the first order approximation, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas with little modification. In this paper, we describe two reconstruction algorithms. The first is a cylinder sweep algorithm that uses a small number of resampled multiperspective panoramas to obtain dense 3D reconstruction. The second algorithm, in contrast, uses a large number of multiperspective panoramas and takes advantage of the approximate horizontal epipolar geometry inherent in multiperspective panoramas. It comprises a novel and efficient 1D multibaseline matching technique, followed by tensor voting to extract the depth surface. Experiments show that our algorithms are capable of producing comparable high quality depth maps which can be used for applications such as view interpolation.

  10. A buyer's and user's guide to astronomical telescopes & binoculars

    CERN Document Server

    Mullaney, James

    2007-01-01

    This exciting, upbeat new guide provides an extensive overview of binoculars and telescopes. It includes detailed up-to-date information on sources, selection and use of virtually every major type, brand and model of such instruments on today's market.

  11. Binocular Rivalry in a Competitive Neural Network with Synaptic Depression

    KAUST Repository

    Kilpatrick, Zachary P.; Bressloff, Paul C.

    2010-01-01

    We study binocular rivalry in a competitive neural network with synaptic depression. In particular, we consider two coupled hypercolumns within primary visual cortex (V1), representing orientation selective cells responding to either left or right

  12. Objective Evaluation of Visual Fatigue Using Binocular Fusion Maintenance.

    Science.gov (United States)

    Hirota, Masakazu; Morimoto, Takeshi; Kanda, Hiroyuki; Endo, Takao; Miyoshi, Tomomitsu; Miyagawa, Suguru; Hirohara, Yoko; Yamaguchi, Tatsuo; Saika, Makoto; Fujikado, Takashi

    2018-03-01

    In this study, we investigated whether an individual's visual fatigue can be evaluated objectively and quantitatively from their ability to maintain binocular fusion. Binocular fusion maintenance (BFM) was measured using a custom-made binocular open-view Shack-Hartmann wavefront aberrometer equipped with liquid crystal shutters, wherein eye movements and wavefront aberrations were measured simultaneously. Transmittance in the liquid crystal shutter in front of the subject's nondominant eye was reduced linearly, and BFM was determined from the transmittance at the point when binocular fusion was broken and vergence eye movement was induced. In total, 40 healthy subjects underwent the BFM test and completed a questionnaire regarding subjective symptoms before and after a visual task lasting 30 minutes. BFM was significantly reduced after the visual task and was significantly correlated with the eye symptom score (adjusted R² = 0.752). These results suggest that BFM can be used to evaluate the visual fatigue induced by visual display devices, such as head-mounted displays, objectively.

  13. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pair, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
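    Reconstruction by triangulation from one calibrated stereo pair can be sketched with OpenCV as follows; the projection matrices and matched pixel coordinates are assumed to come from a calibration and matching step like the one the handbook describes, and are hypothetical inputs here:

        import numpy as np
        import cv2

        def triangulate(P_left, P_right, pts_left, pts_right):
            """Triangulate matched image points from one stereo camera pair.

            P_left, P_right : 3x4 projection matrices from the stereo calibration.
            pts_left/right  : N x 2 arrays of matched pixel coordinates.
            Returns N x 3 points in the calibration's world frame."""
            pts_l = np.asarray(pts_left, dtype=np.float64).T   # 2 x N
            pts_r = np.asarray(pts_right, dtype=np.float64).T  # 2 x N
            X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # 4 x N
            return (X_h[:3] / X_h[3]).T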

  14. Stereo chromatic contrast sensitivity model to blue-yellow gratings.

    Science.gov (United States)

    Yang, Jiachen; Lin, Yancong; Liu, Yun

    2016-03-07

    As a fundamental metric of the human visual system (HVS), the contrast sensitivity function (CSF) is typically measured by sinusoidal gratings at detection thresholds for psychophysically defined cardinal channels: luminance, red-green, and blue-yellow. Chromatic CSF, which is a quick and valid index to measure human visual performance and various retinal diseases in two-dimensional (2D) space, cannot be directly applied to the measurement of human stereo visual performance. Moreover, no existing perception model considers the influence of the chromatic CSF of inclined planes on depth perception in three-dimensional (3D) space. The main aim of this research is to extend traditional chromatic contrast sensitivity characteristics to 3D space and build a model applicable in 3D space, for example, strengthening the stereo quality of 3D images. This research also attempts to build a vision model or method to check human visual characteristics of stereo blindness. In this paper, the CRT screen was rotated clockwise and anti-clockwise, respectively, to form the inclined planes. Four inclined planes were selected to investigate human chromatic vision in 3D space, and the contrast threshold of each inclined plane was measured with 18 observers. Stimuli were isoluminant blue-yellow sinusoidal gratings. Horizontal spatial frequencies ranged from 0.05 to 5 c/d. Contrast sensitivity was calculated as the inverse function of the pooled cone contrast threshold. According to the relationship between the spatial frequency of the inclined plane and the horizontal spatial frequency, the chromatic contrast sensitivity characteristics in 3D space have been modeled based on the experimental data. The results show that the proposed model can well predict human chromatic contrast sensitivity characteristics in 3D space.

  15. Wide baseline stereo matching based on double topological relationship consistency

    Science.gov (United States)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

    Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo vision matching. A novel scheme is presented called double topological relationship consistency (DCTR). The combination of double topological configurations includes the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only sets up a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, and it overcomes many problems of traditional methods owing to its strong invariance to changes in scale, rotation or illumination across large view changes and even occlusions. Experimental examples are shown where the two cameras have been located in very different orientations. Also, the epipolar geometry can be recovered using RANSAC, by far the most widely adopted method. With this method, we can obtain correspondences with high precision on wide-baseline matching problems. Finally, the effectiveness and reliability of this method are demonstrated in wide-baseline experiments on the image pairs.
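    Recovering the epipolar geometry of a wide-baseline pair with RANSAC, as mentioned in the abstract, follows a standard pattern; the sketch below uses generic ORB features and OpenCV's fundamental matrix estimator and is not the DCTR method itself (image inputs are hypothetical grayscale arrays):

        import cv2
        import numpy as np

        def epipolar_geometry_ransac(img_left, img_right):
            """Estimate the fundamental matrix between a wide-baseline image
            pair from ORB feature matches, with RANSAC rejecting mismatches."""
            orb = cv2.ORB_create(4000)
            k1, d1 = orb.detectAndCompute(img_left, None)
            k2, d2 = orb.detectAndCompute(img_right, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
            pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
            # Positional arguments: method, reprojection threshold, confidence.
            F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
            return F, inlier_mask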

  16. Stereo using monocular cues within the tensor voting framework.

    Science.gov (United States)

    Mordohai, Philippos; Medioni, Gérard

    2006-06-01

    We address the fundamental problem of matching in two static images. The remaining challenges are related to occlusion and lack of texture. Our approach addresses these difficulties within a perceptual organization framework, considering both binocular and monocular cues. Initially, matching candidates for all pixels are generated by a combination of matching techniques. The matching candidates are then embedded in disparity space, where perceptual organization takes place in 3D neighborhoods and, thus, does not suffer from problems associated with scanline or image neighborhoods. The assumption is that correct matches produce salient, coherent surfaces, while wrong ones do not. Matching candidates that are consistent with the surfaces are kept and grouped into smooth layers. Thus, we achieve surface segmentation based on geometric and not photometric properties. Surface overextensions, which are due to occlusion, can be corrected by removing matches whose projections are not consistent in color with their neighbors of the same surface in both images. Finally, the projections of the refined surfaces on both images are used to obtain disparity hypotheses for unmatched pixels. The final disparities are selected after a second tensor voting stage, during which information is propagated from more reliable pixels to less reliable ones. We present results on widely used benchmark stereo pairs.

  17. MRI and Stereo Vision Surface Reconstruction and Fusion

    OpenAIRE

    El Chemaly, Trishia; Siepel, Françoise Jeanette; Rihana, Sandy; Groenhuis, Vincent; van der Heijden, Ferdinand; Stramigioli, Stefano

    2017-01-01

    Breast cancer, the most commonly diagnosed cancer in women worldwide, is mostly detected through a biopsy where tissue is extracted and chemically examined or pathologist assessed. Medical imaging plays a valuable role in targeting malignant tissue accurately and guiding the radiologist during needle insertion in a biopsy. This paper proposes a computer software that can process and combine 3D reconstructed surfaces from different imaging modalities, particularly Magnetic Resonance Imaging (M...

  18. MRI and Stereo Vision Surface Reconstruction and Fusion

    NARCIS (Netherlands)

    El Chemaly, Trishia; Siepel, Françoise Jeanette; Rihana, Sandy; Groenhuis, Vincent; van der Heijden, Ferdinand; Stramigioli, Stefano

    2017-01-01

    Breast cancer, the most commonly diagnosed cancer in women worldwide, is mostly detected through a biopsy where tissue is extracted and chemically examined or pathologist assessed. Medical imaging plays a valuable role in targeting malignant tissue accurately and guiding the radiologist during

  19. Stereo Vision and 3D Reconstruction on a Processor Network

    NARCIS (Netherlands)

    Paar, G.; Kuijpers, N.H.L.; Gasser, C.

    1996-01-01

    Surface measurements during outdoor construction processes are very costly whenever the measurement process interferes with the construction activities, since machine and manpower resources are idle during the data acquisition procedure. Using frame cameras as sensors to provide measurement data

  20. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Full Text Available Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and controlling of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in the hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real-time in a processor with help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.
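    The tracking stabilization mentioned at the end of the abstract (a Kalman filter applied to tracked object positions) can be sketched in software as a minimal constant-velocity filter; the noise parameters below are placeholders, and this is only an illustration of the filtering step, not the FPGA system itself:

        import numpy as np

        class ConstantVelocityKalman:
            """Minimal constant-velocity Kalman filter for smoothing a tracked
            2D object position (illustrative; parameters are placeholders)."""
            def __init__(self, dt=1.0, process_var=1e-2, meas_var=1.0):
                self.x = np.zeros(4)                      # [px, py, vx, vy]
                self.P = np.eye(4)
                self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
                self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
                self.Q = process_var * np.eye(4)
                self.R = meas_var * np.eye(2)

            def update(self, z):
                # Predict with the constant-velocity motion model.
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                # Correct with the measured position z = [px, py].
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ (np.asarray(z, dtype=float) - self.H @ self.x)
                self.P = (np.eye(4) - K @ self.H) @ self.P
                return self.x[:2]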

  1. The iPod binocular home-based treatment for amblyopia in adults: efficacy and compliance.

    Science.gov (United States)

    Hess, Robert F; Babu, Raiju Jacob; Clavagnier, Simon; Black, Joanna; Bobier, William; Thompson, Benjamin

    2014-09-01

    Occlusion therapy for amblyopia is predicated on the idea that amblyopia is primarily a disorder of monocular vision; however, there is growing evidence that patients with amblyopia have a structurally intact binocular visual system that is rendered functionally monocular due to suppression. Furthermore, we have found that a dichoptic treatment intervention designed to directly target suppression can result in clinically significant improvement in both binocular and monocular visual function in adult patients with amblyopia. The fact that monocular improvement occurs in the absence of any fellow eye occlusion suggests that amblyopia is, in part, due to chronic suppression. Previously the treatment has been administered as a psychophysical task and more recently as a video game that can be played on video goggles or an iPod device equipped with a lenticular screen. The aim of this case-series study of 14 amblyopes (six strabismics, six anisometropes and two mixed) ages 13 to 50 years was to investigate: 1. whether the portable video game treatment is suitable for at-home use and 2. whether an anaglyphic version of the iPod-based video game, which is more convenient for at-home use, has comparable effects to the lenticular version. The dichoptic video game treatment was conducted at home and visual functions assessed before and after treatment. We found that at-home use for 10 to 30 hours restored simultaneous binocular perception in 13 of 14 cases along with significant improvements in acuity (0.11 ± 0.08 logMAR) and stereopsis (0.6 ± 0.5 log units). Furthermore, the anaglyph and lenticular platforms were equally effective. In addition, the iPod devices were able to record a complete and accurate picture of treatment compliance. The home-based dichoptic iPod approach represents a viable treatment for adults with amblyopia. © 2014 The Authors. Clinical and Experimental Optometry © 2014 Optometrists Association Australia.

  2. Low Vision

    Science.gov (United States)

    Statistics and data on low vision, including 2010 U.S. age-specific prevalence rates for low vision by age and race/ethnicity.

  3. Reward modulates perception in binocular rivalry.

    Science.gov (United States)

    Marx, Svenja; Einhäuser, Wolfgang

    2015-01-14

    Our perception does not provide us with an exact imprint of the outside world, but is continuously adapted to our internal expectations, task sets, and behavioral goals. Although effects of reward (or value in general) on perception therefore seem likely, how valuation modulates perception and how such modulation relates to attention is largely unknown. We probed effects of reward on perception by using a binocular-rivalry paradigm. Distinct gratings drifting in opposite directions were presented to each observer's eyes. To objectify their subjective perceptual experience, the optokinetic nystagmus was used as a measure of current perceptual dominance. In a first experiment, one of the percepts was either rewarded or attended. We found that reward and attention similarly biased perception. In a second experiment, observers performed an attentionally demanding task either on the rewarded stimulus, the other stimulus, or both. We found that, on top of an attentional effect on perception, at each level of attentional load reward still modulated perception by increasing the dominance of the rewarded percept. Similarly, penalizing one percept increased dominance of the other at each level of attentional load. In turn, rewarding (and similarly not punishing) a percept yielded performance benefits that are typically associated with selective attention. In conclusion, our data show that value modulates perception in a similar way as the volitional deployment of attention, even though the relative effect of value is largely unaffected by an attention task. © 2015 ARVO.

  4. Improvements in clinical and functional vision and perceived visual disability after first and second eye cataract surgery

    OpenAIRE

    Elliott, D.; Patla, A.; Bullimore, M.

    1997-01-01

    AIMS—To determine the improvements in clinical and functional vision and perceived visual disability after first and second eye cataract surgery.
METHODS—Clinical vision (monocular and binocular high and low contrast visual acuity, contrast sensitivity, and disability glare), functional vision (face identity and expression recognition, reading speed, word acuity, and mobility orientation), and perceived visual disability (Activities of Daily Vision Scale) were measured in 25 subjects before a...

  5. Stereo and Solar Cycle 24

    Science.gov (United States)

    Kaiser, Michael L.

    2008-01-01

    The twin STEREO spacecraft, launched in October 2006, are in heliocentric orbits near 1 AU with one spacecraft (Ahead) leading Earth in its orbit around the Sun and the other (Behind) trailing Earth. As viewed from the Sun, the STEREO spacecraft are continually separating from one another at about 45 degrees per year with Earth bisecting the angle. At present, the spacecraft are a bit more than 45 degrees apart, thus they are each able to view around the limbs of the Sun by about 23 degrees, corresponding to about 1.75 days of solar rotation. Both spacecraft contain an identical set of instruments including an extreme ultraviolet imager, two white light coronagraphs, two all-sky imagers, a wide selection of energetic particle detectors, a magnetometer and a radio burst tracker. A snapshot of the real time data is continually broadcast to NOAA-managed ground stations and this small stream of data is immediately sent to the STEREO Science Center and converted into useful space weather data within 5 minutes of ground receipt. The resulting images, particle, magnetometer and radio astronomy plots are made available online. As time continues into solar cycle 24, the separation angle becomes 90 degrees in early 2009 and 180 degrees in early 2011 as the activity heads toward maximum. By the time of solar maximum, STEREO will provide for the first time a view of the entire Sun with the coronagraphs and extreme ultraviolet instruments. This view will allow us to follow the evolution of active regions continuously and also detect new active regions long before they pose a space weather threat to Earth. The in situ instruments will be able to provide about 7 days advance notice of co-rotating structures in the solar wind. During this same interval near solar maximum, the wide-angle imagers on STEREO will both be able to view Earth-directed CMEs in their plane of sky. When combined with Earth-orbiting assets available at that time, it seems solar cycle 24 will mark a

  6. Vision Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Vision Lab personnel perform research, development, testing and evaluation of eye protection and vision performance. The lab maintains and continues to develop...

  7. Stereo visualization in the ground segment tasks of the science space missions

    Science.gov (United States)

    Korneva, Natalia; Nazarov, Vladimir; Mogilevsky, Mikhail; Nazirov, Ravil

    The ground segment is one of the key components of any science space mission. Its functionality substantially defines the scientific effectiveness of the experiment as a whole. Its distinctive feature (in contrast to the other information systems of scientific space projects) is the interaction between the researcher and the project information system in order to interpret the data obtained during experiments. Therefore the ability to visualize the data being processed is an essential prerequisite for the ground segment's software, and the usage of modern technological solutions and approaches in this area will increase the science return in general and provide a framework for the creation of new experiments. Mostly, 2D and 3D graphics are used for the visualization of the data being processed, which reflects the capabilities of traditional visualization tools. Stereo visualization methods are also actively used for some tasks; however, their usage is usually limited to tasks such as visualization of virtual and augmented reality, remote sensing data processing and the like. The low prevalence of stereo visualization methods in science ground segment tasks is primarily explained by the extremely high cost of the necessary hardware. However, low-cost hardware solutions for stereo visualization based on the page-flip method of view separation have recently appeared. In this situation it seems promising to use stereo visualization as an instrument for investigating a wide range of problems, mainly stereo visualization of complex physical processes as well as mathematical abstractions and models. The article is concerned with an attempt to use this approach. It describes the details and problems of using stereo visualization (page-flip method based on the NVIDIA 3D Vision Kit and a GeForce graphics processor) for the display of some datasets of onboard measurements from magnetospheric satellites, and also in the development of software for manual stereo matching.

  8. WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves

    Science.gov (United States)

    Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise

    2017-10-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances of both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master for a practitioner, so that the implementation of a sea-waves 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference to obtain valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an Open-Source stereo processing pipeline for sea waves 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques both on the disparity map and the produced point cloud to remove the vast majority of erroneous points that can naturally arise while analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step-by-step and demonstrated on real datasets acquired at sea.
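    The generic OpenCV route from a rectified stereo pair to a dense point cloud, analogous to the dense step that WASS builds on (but not WASS's actual code), can be sketched as follows; the rectified images and the 4x4 reprojection matrix Q are assumed to come from a prior calibration/rectification step:

        import cv2
        import numpy as np

        def dense_point_cloud(rect_left, rect_right, Q, num_disp=128, block=9):
            """Compute a disparity map with semi-global matching and reproject
            it to 3D using the rectification's reprojection matrix Q."""
            matcher = cv2.StereoSGBM_create(minDisparity=0,
                                            numDisparities=num_disp,
                                            blockSize=block)
            # SGBM returns fixed-point disparities scaled by 16.
            disp = matcher.compute(rect_left, rect_right).astype(np.float32) / 16.0
            points = cv2.reprojectImageTo3D(disp, Q)      # H x W x 3
            valid = disp > 0                              # drop unmatched pixels
            return points[valid]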

  9. Stereo Correspondence Using Moment Invariants

    Science.gov (United States)

    Premaratne, Prashan; Safaei, Farzad

    Autonomous navigation is seen as a vital tool in harnessing the enormous potential of Unmanned Aerial Vehicles (UAV) and small robotic vehicles for both military and civilian use. Even though laser-based scanning solutions for Simultaneous Localization And Mapping (SLAM) are considered the most reliable for depth estimation, they are not feasible for use in UAVs and land-based small vehicles due to their physical size and weight. Stereovision is considered the best approach for any autonomous navigation solution, as stereo rigs are lightweight and inexpensive. However, stereoscopy, which estimates the depth information through pairs of stereo images, can still be computationally expensive and unreliable. This is mainly because some of the algorithms used in successful stereovision solutions have computational requirements that cannot be met by small robotic vehicles. In our research, we implement a feature-based stereovision solution using moment invariants as a metric to find corresponding regions in image pairs, which reduces the computational complexity and improves the accuracy of the disparity measures; this will be significant for use in UAVs and in small robotic vehicles.
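    Using moment invariants as a region-matching metric can be illustrated with Hu moments in OpenCV; this is a generic sketch of the idea (the paper's specific invariants and matching strategy may differ), and the two patch inputs are hypothetical grayscale regions:

        import cv2
        import numpy as np

        def hu_moment_distance(patch_left, patch_right):
            """Compare two grayscale patches by their Hu moment invariants.
            Log-scaling keeps the seven invariants at comparable magnitudes;
            a smaller distance suggests a more likely correspondence."""
            def log_hu(patch):
                hu = cv2.HuMoments(cv2.moments(patch.astype(np.float32))).flatten()
                return np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
            return float(np.linalg.norm(log_hu(patch_left) - log_hu(patch_right)))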

  10. WASS: an open-source stereo processing pipeline for sea waves 3D reconstruction

    Science.gov (United States)

    Bergamasco, Filippo; Benetazzo, Alvise; Torsello, Andrea; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro

    2017-04-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community. In fact, recent advances of both computer vision algorithms and CPU processing power can now allow the study of the spatio-temporal wave fields with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master for a practitioner so that the implementation of a 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference to obtain valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the steps from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS, a completely Open-Source stereo processing pipeline for sea waves 3D reconstruction, available at http://www.dais.unive.it/wass/. Our tool completely automates the recovery of dense point clouds from stereo images by providing three main functionalities. First, WASS can automatically recover the extrinsic parameters of the stereo rig (up to scale) so that no delicate calibration has to be performed in the field. Second, WASS implements a fast 3D dense stereo reconstruction procedure so that an accurate 3D point cloud can be computed from each stereo pair. We rely on the well-consolidated OpenCV library both for the image stereo rectification and disparity map recovery. Lastly, a set of 2D and 3D filtering techniques both on the disparity map and the produced point cloud are implemented to remove the vast majority of erroneous points that can naturally arise while analyzing the optically complex nature of the water surface (examples are sun-glares, large white-capped areas, fog and water aerosol, etc). Developed to be as fast as possible, WASS
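    The mean sea-plane estimation and point-cloud filtering steps mentioned above can be illustrated with a least-squares plane fit followed by outlier rejection; this is only a sketch of the idea, and WASS's own implementation may use a different estimator and thresholds:

        import numpy as np

        def fit_mean_plane(points, keep_sigma=3.0):
            """Fit a plane to an N x 3 point cloud in a least-squares sense and
            drop points far from it (a stand-in for the mean sea-plane and
            point filtering steps)."""
            centroid = points.mean(axis=0)
            # The singular vector with smallest singular value is the plane normal.
            _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
            normal = vt[-1]
            dist = (points - centroid) @ normal            # signed distances
            keep = np.abs(dist) < keep_sigma * dist.std()
            return normal, centroid, points[keep]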

  11. An Active System for Visually-Guided Reaching in 3D across Binocular Fixations

    Directory of Open Access Journals (Sweden)

    Ester Martinez-Martin

    2014-01-01

    Full Text Available Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. So, the motor information is coded in egocentric coordinates obtained from the allocentric representation of the space (in terms of disparity) generated from the egocentric representation of the visual information (image coordinates). In that way, the different aspects of the visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for the 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data.

  12. Improved Stereo Matching With Boosting Method

    Directory of Open Access Journals (Sweden)

    Shiny B

    2015-06-01

    Full Text Available This paper presents an approach based on classification for improving the accuracy of stereo matching methods. We propose this method for occlusion handling. This work employs classification of pixels for finding the erroneous disparity values. Due to the wide applications of disparity maps in 3D television, medical imaging, etc., the accuracy of the disparity map has high significance. An initial disparity map is obtained using local or global stereo matching methods from the input stereo image pair. The various features for classification are computed from the input stereo image pair and the obtained disparity map. Then the computed feature vector is used for classification of pixels by using GentleBoost as the classification method. The erroneous disparity values in the disparity map found by classification are corrected through a completion or filling stage. A performance evaluation of stereo matching using AdaBoostM1, RUSBoost, neural networks and GentleBoost is performed.
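    The classification stage can be sketched with scikit-learn, where AdaBoost stands in for the GentleBoost variant used in the paper and the per-pixel feature vectors (e.g. matching cost, left-right consistency error, local disparity variance) and training labels are hypothetical arrays prepared elsewhere:

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        def train_error_classifier(features, is_erroneous):
            """Train a boosted classifier that flags pixels whose disparity is
            likely erroneous. features: N x D array, is_erroneous: N labels."""
            clf = AdaBoostClassifier(n_estimators=100)
            clf.fit(features, is_erroneous)
            return clf

        def flag_bad_pixels(clf, features):
            # True entries mark disparities to be corrected in the filling stage.
            return clf.predict(features).astype(bool)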

  13. BRDF invariant stereo using light transport constancy.

    Science.gov (United States)

    Wang, Liang; Yang, Ruigang; Davis, James E

    2007-09-01

    Nearly all existing methods for stereo reconstruction assume that scene reflectance is Lambertian and make use of brightness constancy as a matching invariant. We introduce a new invariant for stereo reconstruction called light transport constancy (LTC), which allows completely arbitrary scene reflectance (bidirectional reflectance distribution functions (BRDFs)). This invariant can be used to formulate a rank constraint on multiview stereo matching when the scene is observed by several lighting configurations in which only the lighting intensity varies. In addition, we show that this multiview constraint can be used with as few as two cameras and two lighting configurations. Unlike previous methods for BRDF invariant stereo, LTC does not require precisely configured or calibrated light sources or calibration objects in the scene. Importantly, the new constraint can be used to provide BRDF invariance to any existing stereo method whenever appropriate lighting variation is available.
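    A toy illustration of a light-transport-constancy style matching cost, for the simplest case of a single light source whose intensity varies across K lighting configurations, is given below; this is a simplified reading of the rank constraint, not the authors' full formulation:

        import numpy as np

        def ltc_rank_cost(intensities_left, intensities_right):
            """For a correct correspondence, the two K-vectors of intensities
            observed under K lighting intensities are proportional, so the
            stacked 2 x K matrix is (near) rank one. The ratio of singular
            values measures how strongly that constraint is violated."""
            M = np.stack([np.asarray(intensities_left, dtype=float),
                          np.asarray(intensities_right, dtype=float)])
            s = np.linalg.svd(M, compute_uv=False)
            return s[1] / max(s[0], 1e-12)    # close to 0 for consistent matches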

  14. Validity of the Worth 4 Dot Test in Patients with Red-Green Color Vision Defect.

    Science.gov (United States)

    Bak, Eunoo; Yang, Hee Kyung; Hwang, Jeong-Min

    2017-05-01

    The Worth four dot test uses red and green glasses for binocular dissociation, and although it has been believed that patients with red-green color vision defects cannot accurately perform the Worth four dot test, this has not been validated. Therefore, the purpose of this study was to demonstrate the validity of the Worth four dot test in patients with congenital red-green color vision defects who have normal or abnormal binocular vision. A retrospective review of medical records was performed on 30 consecutive congenital red-green color vision defect patients who underwent the Worth four dot test. The type of color vision anomaly was determined by the Hardy Rand and Rittler (HRR) pseudoisochromatic plate test, Ishihara color test, anomaloscope, and/or the 100 hue test. All patients underwent a complete ophthalmologic examination. Binocular sensory status was evaluated with the Worth four dot test and Randot stereotest. The results were interpreted according to the presence of strabismus or amblyopia. Among the 30 patients, 24 had normal visual acuity without strabismus nor amblyopia and 6 patients had strabismus and/or amblyopia. The 24 patients without strabismus nor amblyopia all showed binocular fusional responses by seeing four dots of the Worth four dot test. Meanwhile, the six patients with strabismus or amblyopia showed various results of fusion, suppression, and diplopia. Congenital red-green color vision defect patients of different types and variable degree of binocularity could successfully perform the Worth four dot test. They showed reliable results that were in accordance with their estimated binocular sensory status.

  15. Binocular function in patients with pseudophakic monovision.

    Science.gov (United States)

    Ito, Misae; Shimizu, Kimiya; Niida, Takahiro; Amano, Rie; Ishikawa, Hitoshi

    2014-08-01

    To evaluate the relationship between ocular deviation and stereopsis and fusion in patients who had pseudophakic monovision surgery. Department of Ophthalmology, Kitasato University Hospital, Kanagawa, Japan. Retrospective comparative case series. Patients had surgical monovision correction with monofocal intraocular lens placement followed by routine postoperative examinations. The alternate prism cover test was used to measure motor alignment. Sensory tests for binocularity included sensory fusion determinations using the Worth 4-dot test, near stereopsis test, and fusion amplitude measured with a prism bar. Patients with monovision were categorized as having small-angle exophoria (≤10.0 prism diopters [Δ]) or moderate-angle exophoria (>10.0 Δ). This study comprised 60 patients with a mean age of 70.2 years ± 7.7 (SD). The difference in the mean stereopsis values between patients with small-angle exophoria and patients with moderate-angle exophoria was statistically significant (P<.001). In the moderate-angle exophoria group, 10 patients (62.5%) developed intermittent exotropia after surgery; however, no serious ocular deviation problems were observed. The fusion amplitudes in patients with pseudophakic monovision were approximately similar to normal values. Patients with moderate-angle exophoria were more likely to fail the Worth 4-dot test than those with small-angle exophoria. In patients with pseudophakic monovision having a near exophoria angle of more than 10.0 Δ, the possibility of changes in ocular deviation and stereopsis after surgery is a concern. Moreover, the application of monovision in patients with a previous moderate-angle exophoria should be carefully considered. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  16. Three-dimensional stereo by photometric ratios

    International Nuclear Information System (INIS)

    Wolff, L.B.; Angelopoulou, E.

    1994-01-01

    We present a methodology for corresponding a dense set of points on an object surface from photometric values for three-dimensional stereo computation of depth. The methodology utilizes multiple stereo pairs of images, with each stereo pair being taken of the identical scene but under different illumination. With just two stereo pairs of images taken under two different illumination conditions, a stereo pair of ratio images can be produced, one for the ratio of left-hand images and one for the ratio of right-hand images. We demonstrate how the photometric ratios composing these images can be used for accurate correspondence of object points. Object points having the same photometric ratio with respect to two different illumination conditions constitute a well-defined equivalence class of physical constraints defined by local surface orientation relative to illumination conditions. We formally show that for diffuse reflection the photometric ratio is invariant to varying camera characteristics, surface albedo, and viewpoint and that therefore the same photometric ratio in both images of a stereo pair implies the same equivalence class of physical constraints. The correspondence of photometric ratios along epipolar lines in a stereo pair of images under different illumination conditions is a correspondence of equivalent physical constraints, and the determination of depth from stereo can be performed. Whereas illumination planning is required, our photometric-based stereo methodology does not require knowledge of illumination conditions in the actual computation of three-dimensional depth and is applicable to perspective views. This technique extends the stereo determination of three-dimensional depth to smooth featureless surfaces without the use of precisely calibrated lighting. We demonstrate experimental depth maps from a dense set of points on smooth objects of known ground-truth shape, determined to within 1% depth accuracy
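
    A minimal sketch of the ratio-matching idea, assuming a rectified pair, diffuse reflection, and two illumination conditions: the ratio of the two differently lit images cancels albedo and camera gain, so equal photometric ratios along epipolar lines mark corresponding points. The winner-take-all matcher and the window size below are illustrative simplifications of the authors' correspondence procedure.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ratio_image(img_a, img_b, eps=1e-6):
    return img_a.astype(float) / (img_b.astype(float) + eps)

def disparity_from_ratios(left_a, left_b, right_a, right_b, max_disp, win=5):
    """left_a/left_b and right_a/right_b are the two illumination conditions."""
    rl, rr = ratio_image(left_a, left_b), ratio_image(right_a, right_b)
    h, w = rl.shape
    best_cost = np.full((h, w), np.inf)
    disparity = np.zeros((h, w), dtype=int)
    for d in range(max_disp):
        diff = np.full((h, w), 1e6)                       # large penalty where undefined
        diff[:, d:] = np.abs(rl[:, d:] - rr[:, :w - d])   # photometric-ratio difference
        cost = uniform_filter(diff, size=win)             # small aggregation window
        update = cost < best_cost
        best_cost[update], disparity[update] = cost[update], d
    return disparity
```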

  17. A McCollough Effect Generated at Binocular Site

    Directory of Open Access Journals (Sweden)

    Qiujie Weng

    2011-05-01

    Full Text Available Following exposures to alternating gratings with unique combination of orientation and colors, an achromatic grating would appear tinted with its perceived color contingent on the grating's orientation. This orientation-contingent color after effect is called the McCollough effect. The lack of interocular transfer of the McCollough effect suggests that the McCollough effect is primarily established in monocular channels. Here we explored the possibility that the McCollough effect can be induced at a binocular site. During adaptation, a red vertical grating and a green horizontal grating are dichoptically presented to the two eyes. In the ‘binocular rivalry’ condition, these two gratings were constantly presented throughout the adaptation duration and subjects experienced the rivalry between the two gratings. In the ‘physical alternation’ condition, the two dichoptic gratings physically alternated during adaptation, perceptually similar to binocular rivalry. Interestingly, following dichoptic adaptation either in the rivalry condition or in the physical alternation condition, a binocularly viewed achromatic test grating appeared colored depending on its orientation: a vertical grating appeared greenish and a horizontal grating pinkish. In other words, we observed a McCollough effect following dichoptic adaptation, which can only be explained by a binocular site of orientation-contingent color adaptation.

  18. Binocular iPad treatment for amblyopia in preschool children.

    Science.gov (United States)

    Birch, Eileen E; Li, Simone L; Jost, Reed M; Morale, Sarah E; De La Cruz, Angie; Stager, David; Dao, Lori; Stager, David R

    2015-02-01

    Recent experimental evidence supports a role for binocular visual experience in the treatment of amblyopia. The purpose of this study was to determine whether repeated binocular visual experience with dichoptic iPad games could effectively treat amblyopia in preschool children. A total of 50 consecutive amblyopic preschool children 3-6.9 years of age were assigned to play sham iPad games (first 5 children) or binocular iPad games (n = 45) for at least 4 hours per week for 4 weeks. Thirty (67%) children in the binocular iPad group and 4 (80%) in the sham iPad group were also treated with patching at a different time of day. Visual acuity and stereoacuity were assessed at baseline, at 4 weeks, and at 3 months after the cessation of game play. The sham iPad group had no significant improvement in visual acuity (t4 = 0.34, P = 0.75). In the binocular iPad group, mean visual acuity (plus or minus standard error) improved from 0.43 ± 0.03 at baseline to 0.34 ± 0.03 logMAR at 4 weeks (n = 45; paired t44 = 4.93; P iPad games for ≥8 hours (≥50% compliance) had significantly more visual acuity improvement than children who played 0-4 hours (t43 = 4.21, P = 0.0001). Repeated binocular experience, provided by dichoptic iPad game play, was more effective than sham iPad game play as a treatment for amblyopia in preschool children. Copyright © 2015 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.

  19. Stereo Pinhole Camera: Assembly and experimental activities

    Directory of Open Access Journals (Sweden)

    Gilmário Barbosa Santos

    2015-05-01

    Full Text Available This work describes the assembly of a stereo pinhole camera for capturing stereo pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as you want, or so simple that it can be handcrafted from practically recyclable materials. This paper describes the practical use of the pinhole camera throughout history and at present. Aspects of optics and geometry involved in building the stereo pinhole camera are presented with illustrations. Furthermore, experiments are proposed that use the images obtained by the camera for 3D visualization through a pair of anaglyph glasses, and the estimation of relative depth by triangulation is discussed.
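
    For the triangulation activity, relative depth for a stereo pinhole camera with parallel axes follows the usual relation Z = f·B/d. The short sketch below uses made-up example values for the pinhole-to-film distance f, the baseline B, and the measured disparity d.

```python
# Simple triangulation for a stereo pinhole camera with parallel axes:
# depth Z = f * B / d, with focal distance f (pinhole-to-film distance),
# baseline B between the two pinholes, and disparity d measured between the
# left and right images.  Values below are made-up examples.
def depth_from_disparity(f_mm, baseline_mm, disparity_mm):
    return f_mm * baseline_mm / disparity_mm

# e.g. a 100 mm deep camera with a 60 mm baseline and a 2 mm disparity:
print(depth_from_disparity(100.0, 60.0, 2.0), "mm")   # 3000 mm = 3 m
```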

  20. A comparison of static near stereo acuity in youth baseball/softball players and non-ball players.

    Science.gov (United States)

    Boden, Lauren M; Rosengren, Kenneth J; Martin, Daniel F; Boden, Scott D

    2009-03-01

    Although many aspects of vision have been investigated in professional baseball players, few studies have been performed in developing athletes. The issue of whether youth baseball players have superior stereopsis to nonplayers has not been addressed specifically. The purpose of this study was to determine if youth baseball/softball players have better stereo acuity than non-ball players. Informed consent was obtained from 51 baseball/softball players and 52 non-ball players (ages 10 to 18 years). Subjects completed a questionnaire, and their static near stereo acuity was measured using the Randot Stereotest (Stereo Optical Company, Chicago, Illinois). Stereo acuity was measured as the seconds of arc between the last pair of images correctly distinguished by the subject. The mean stereo acuity score was 25.5 +/- 1.7 seconds of arc in the baseball/softball players and 56.2 +/- 8.4 seconds of arc in the non-ball players. This difference was statistically significant (P softball players had significantly better static stereo acuity than non-ball players, comparable to professional ball players.

  1. GPGPU Implementation of a Genetic Algorithm for Stereo Refinement

    Directory of Open Access Journals (Sweden)

    Álvaro Arranz

    2015-03-01

    Full Text Available During the last decade, general-purpose computing on graphics processing units (GPGPU) has turned out to be a useful tool for speeding up many scientific calculations. Computer vision is known to be one of the fields with greater penetration of these new techniques. This paper explores the advantages of using a GPGPU implementation to speed up a genetic algorithm used for stereo refinement. The main contribution of this paper is analyzing which genetic operators take advantage of a parallel approach and describing an efficient state-of-the-art implementation for each one. As a result, speed-ups close to x80 can be achieved, demonstrating this to be the only way of achieving close to real-time performance.
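
    The sketch below shows the genetic-algorithm ingredients mentioned above (a fitness combining a data term and a smoothness term, selection, crossover, and mutation) in a simplified CPU-only NumPy form; the paper's contribution is the efficient GPGPU implementation of these operators, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(disp, left, right, lam=0.1):
    """left, right: float gray-level rectified images; higher fitness is better."""
    h, w = disp.shape
    cols = np.clip(np.arange(w) - disp.astype(int), 0, w - 1)
    data = np.abs(left - right[np.arange(h)[:, None], cols]).mean()   # photometric error
    smooth = np.abs(np.diff(disp, axis=1)).mean() + np.abs(np.diff(disp, axis=0)).mean()
    return -(data + lam * smooth)

def refine(left, right, init_disp, pop_size=20, gens=50, max_disp=64):
    """Refine an initial disparity map with a toy genetic algorithm."""
    pop = [np.clip(init_disp + rng.integers(-2, 3, init_disp.shape), 0, max_disp)
           for _ in range(pop_size)]
    for _ in range(gens):
        scores = np.array([fitness(p, left, right) for p in pop])
        order = np.argsort(scores)[::-1]
        parents = [pop[i] for i in order[:pop_size // 2]]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            mask = rng.random(init_disp.shape) < 0.5             # uniform crossover
            child = np.where(mask, parents[a], parents[b])
            mutate = rng.random(init_disp.shape) < 0.01          # sparse mutation
            child = np.clip(child + mutate * rng.integers(-3, 4, child.shape), 0, max_disp)
            children.append(child)
        pop = parents + children
    return pop[int(np.argmax([fitness(p, left, right) for p in pop]))]
```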

  2. Stereo 3D spatial phase diagrams

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Jinwu, E-mail: kangjw@tsinghua.edu.cn; Liu, Baicheng, E-mail: liubc@tsinghua.edu.cn

    2016-07-15

    Phase diagrams serve as fundamental guidance in materials science and engineering. Binary P-T-X (pressure–temperature–composition) and multi-component phase diagrams have complex spatial geometry, which makes them difficult to understand. The authors constructed 3D stereo binary P-T-X, typical ternary, and some quaternary phase diagrams. A phase diagram construction algorithm based on the phase reaction data calculated in PandaT was developed, and the 3D stereo phase diagram of the Al-Cu-Mg ternary system is presented. These phase diagrams can be illustrated as wireframes, surfaces, solids, or their mixture, and isotherms and isopleths can be generated. All of these can be displayed in the three typical display modes: electronic shutter, polarization, and anaglyph (for example, red-cyan glasses). In particular, they can be printed on paper with a 3D stereo effect and viewed with the aid of anaglyph glasses, which makes a 3D stereo book of phase diagrams a reality. Compared with the traditional illustration, under 3D stereo display the front of the phase diagram protrudes from the screen and the back stretches far behind it, so the spatial structure can be clearly and immediately perceived. These 3D stereo phase diagrams are useful in teaching and research. - Highlights: • A stereo 3D phase diagram database was constructed, including binary P-T-X, ternary, some quaternary, and real ternary systems. • The phase diagrams can be viewed with active shutter, polarized, or anaglyph glasses. • The printed phase diagrams retain the 3D stereo effect, which can be observed with the aid of anaglyph glasses.
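
    The anaglyph display mode mentioned above can be sketched as follows: the left rendering of the diagram feeds the red channel and the right rendering feeds the green and blue channels. This is a generic red-cyan composition, not the authors' software.

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """left_rgb, right_rgb: float arrays of shape (H, W, 3) in [0, 1]."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., :3] @ np.array([0.299, 0.587, 0.114])  # red from left view
    anaglyph[..., 1] = right_rgb[..., 1]                                    # green from right view
    anaglyph[..., 2] = right_rgb[..., 2]                                    # blue from right view
    return anaglyph
```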

  3. VISION development

    International Nuclear Information System (INIS)

    Hernandez, J.E.; Sherwood, R.J.; Whitman, S.R.

    1994-01-01

    VISION is a flexible and extensible object-oriented programming environment for prototyping computer-vision and pattern-recognition algorithms. This year's effort focused on three major areas: documentation, graphics, and support for new applications

  4. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  5. STEREO interplanetary shocks and foreshocks

    International Nuclear Information System (INIS)

    Blanco-Cano, X.; Kajdič, P.; Aguilar-Rodríguez, E.; Russell, C. T.; Jian, L. K.; Luhmann, J. G.

    2013-01-01

    We use STEREO data to study shocks driven by stream interactions and the waves associated with them. During the years of the extended solar minimum 2007-2010, stream interaction shocks have Mach numbers between 1.1-3.8 and θ Bn ∼20-86°. We find a variety of waves, including whistlers and low frequency fluctuations. Upstream whistler waves may be generated at the shock and upstream ultra low frequency (ULF) waves can be driven locally by ion instabilities. The downstream wave spectra can be formed by both, locally generated perturbations, and shock transmitted waves. We find that many quasiperpendicular shocks can be accompanied by ULF wave and ion foreshocks, which is in contrast to Earth's bow shock. Fluctuations downstream of quasi-parallel shocks tend to have larger amplitudes than waves downstream of quasi-perpendicular shocks. Proton foreshocks of shocks driven by stream interactions have extensions dr ≤0.05 AU. This is smaller than foreshock extensions for ICME driven shocks. The difference in foreshock extensions is related to the fact that ICME driven shocks are formed closer to the Sun and therefore begin to accelerate particles very early in their existence, while stream interaction shocks form at ∼1 AU and have been producing suprathermal particles for a shorter time.

  6. STEREO interplanetary shocks and foreshocks

    Energy Technology Data Exchange (ETDEWEB)

    Blanco-Cano, X. [Instituto de Geofisica, UNAM, CU, Coyoacan 04510 DF (Mexico); Kajdic, P. [IRAP-University of Toulouse, CNRS, Toulouse (France); Aguilar-Rodriguez, E. [Instituto de Geofisica, UNAM, Morelia (Mexico); Russell, C. T. [ESS and IGPP, University of California, Los Angeles, 603 Charles Young Drive, Los Angeles, CA 90095 (United States); Jian, L. K. [NASA Goddard Space Flight Center, Greenbelt, MD and University of Maryland, College Park, MD (United States); Luhmann, J. G. [SSL, University of California Berkeley (United States)

    2013-06-13

    We use STEREO data to study shocks driven by stream interactions and the waves associated with them. During the years of the extended solar minimum 2007-2010, stream interaction shocks have Mach numbers between 1.1-3.8 and θ Bn ∼20-86°. We find a variety of waves, including whistlers and low frequency fluctuations. Upstream whistler waves may be generated at the shock and upstream ultra low frequency (ULF) waves can be driven locally by ion instabilities. The downstream wave spectra can be formed by both, locally generated perturbations, and shock transmitted waves. We find that many quasiperpendicular shocks can be accompanied by ULF wave and ion foreshocks, which is in contrast to Earth's bow shock. Fluctuations downstream of quasi-parallel shocks tend to have larger amplitudes than waves downstream of quasi-perpendicular shocks. Proton foreshocks of shocks driven by stream interactions have extensions dr ≤0.05 AU. This is smaller than foreshock extensions for ICME driven shocks. The difference in foreshock extensions is related to the fact that ICME driven shocks are formed closer to the Sun and therefore begin to accelerate particles very early in their existence, while stream interaction shocks form at ∼1 AU and have been producing suprathermal particles for a shorter time.

  7. Automated Vision Test Development and Validation

    Science.gov (United States)

    2016-11-01

    ... crystal display monitor (NEC Multisync, P232W) at 1920x1080 resolution. Proper calibration was confirmed using a spot photometer/colorimeter (X-Rite i1 ... visual input to the right and left eye was achieved using liquid crystal display shuttered glasses (NVIDIA 3D Vision 2). The stereo target (Figure 4 ... threshold on the automated tasks. Subjects had a lower (better) threshold on color testing for all cone types using the OCCT due to a ceiling ...

  8. 3D vision upgrade kit for TALON robot

    Science.gov (United States)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-04-01

    In this paper, we report on the development of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  9. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia.

    Science.gov (United States)

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-02-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. Anisometropic amblyopes (n = 13) were asked to complete two psychophysical supra-threshold binocular summation tasks: (1) binocular phase combination and (2) dichoptic global motion coherence before and after monocular training to investigate this question. We showed that these participants benefited from monocular training in terms of binocular combination. More importantly, the improvements observed with the area under log CSF (AULCSF) were found to be correlated with the improvements in binocular phase combination.

  10. Satellite markers: a simple method for ground truth car pose on stereo video

    Science.gov (United States)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-04-01

    Artificial prediction of future location of other cars in the context of advanced safety systems is a must. The remote estimation of car pose and particularly its heading angle is key to predict its future location. Stereo vision systems allow to get the 3D information of a scene. Ground truth in this specific context is associated with referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task associated with the combination of different kinds of sensors. The novelty of this paper is the method to generate ground truth car pose only from video data. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level which are key to quantify the performance of a stereo vision system when it is moving because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for the 3D ground truth generation. In our case study, we focus in accurate car heading angle estimation of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, and the instantaneous spatial orientation for each camera at frame level.
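
    A hedged sketch of the pose-recovery step: given camera intrinsics, the known 3D coordinates of markers rigidly attached to the car, and their detected image positions in one frame, a PnP solution yields the pose from which a heading angle can be read. The function names and the yaw convention below are illustrative assumptions, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def car_pose(object_pts, image_pts, K, dist):
    """object_pts: (N, 3) marker coordinates in the car frame [m];
       image_pts:  (N, 2) detected marker centres in pixels;
       K, dist:    camera intrinsic matrix and distortion coefficients."""
    ok, rvec, tvec = cv2.solvePnP(object_pts.astype(np.float32),
                                  image_pts.astype(np.float32), K, dist)
    R, _ = cv2.Rodrigues(rvec)                        # rotation car frame -> camera frame
    # one common ZYX Euler extraction; the exact axis convention depends on the setup
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return R, tvec, yaw
```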

  11. Vision in Children and Adolescents with Autistic Spectrum Disorder: Evidence for Reduced Convergence

    Science.gov (United States)

    Milne, Elizabeth; Griffiths, Helen; Buckley, David; Scope, Alison

    2009-01-01

    Evidence of atypical perception in individuals with ASD is mainly based on self report, parental questionnaires or psychophysical/cognitive paradigms. There have been relatively few attempts to establish whether binocular vision is enhanced, intact or abnormal in those with ASD. To address this, we screened visual function in 51 individuals with…

  12. The Relationship Between Fusion, Suppression, and Diplopia in Normal and Amblyopic Vision.

    Science.gov (United States)

    Spiegel, Daniel P; Baldwin, Alex S; Hess, Robert F

    2016-10-01

    Single vision occurs through a combination of fusion and suppression. When neither mechanism takes place, we experience diplopia. Under normal viewing conditions, the perceptual state depends on the spatial scale and interocular disparity. The purpose of this study was to examine the three perceptual states in human participants with normal and amblyopic vision. Participants viewed two dichoptically separated horizontal blurred edges with an opposite tilt (2.35°) and indicated their binocular percept: "one flat edge," "one tilted edge," or "two edges." The edges varied with scale (fine 4 min arc and coarse 32 min arc), disparity, and interocular contrast. We investigated how the binocular interactions vary in amblyopic (visual acuity [VA] > 0.2 logMAR, n = 4) and normal vision (VA ≤ 0 logMAR, n = 4) under interocular variations in stimulus contrast and luminance. In amblyopia, despite the established sensory dominance of the fellow eye, fusion prevails at the coarse scale and small disparities (75%). We also show that increasing the relative contrast to the amblyopic eye enhances the probability of fusion at the fine scale (from 18% to 38%), and leads to a reversal of the sensory dominance at coarse scale. In normal vision we found that interocular luminance imbalances disturbed binocular combination only at the fine scale in a way similar to that seen in amblyopia. Our results build upon the growing evidence that the amblyopic visual system is binocular and further show that the suppressive mechanisms rendering the amblyopic system functionally monocular are scale dependent.

  13. Robot vision for nuclear advanced robot

    International Nuclear Information System (INIS)

    Nakayama, Ryoichi; Okano, Hideharu; Kuno, Yoshinori; Miyazawa, Tatsuo; Shimada, Hideo; Okada, Satoshi; Kawamura, Astuo

    1991-01-01

    This paper describes the Robot Vision and Operation System for the Nuclear Advanced Robot. The Robot Vision consists of robot position detection, obstacle detection, and object recognition. With these vision techniques, a mobile robot can plan a path and move autonomously along the planned path. The authors implemented the above robot vision system on the 'Advanced Robot for Nuclear Power Plant' and tested it in an environment mocked up as nuclear power plant facilities. Since the operation system for this robot consists of an operator's console and a large stereo monitor, the system can be easily operated by one person. Experimental tests were made using the Advanced Robot (nuclear robot). Results indicate that the proposed operation system is very useful and can be operated by a single person. (author)

  14. Specifying colours for colour vision testing using computer graphics.

    Science.gov (United States)

    Toufeeq, A

    2004-10-01

    This paper describes a novel test of colour vision using a standard personal computer, which is simple and reliable to perform. Twenty healthy individuals with normal colour vision and 10 healthy individuals with a red/green colour defect were tested binocularly at 13 selected points in the CIE (Commission International d'Eclairage, 1931) chromaticity triangle, representing the gamut of a computer monitor, where the x, y coordinates of the primary colour phosphors were known. The mean results from individuals with normal colour vision were compared to those with defective colour vision. Of the 13 points tested, five demonstrated consistently high sensitivity in detecting colour defects. The test may provide a convenient method for classifying colour vision abnormalities.
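
    Specifying such test colours typically relies on the standard colorimetric conversion from chromaticity coordinates to linear monitor RGB, built from the measured chromaticities of the primaries and the white point. The sketch below uses example (sRGB-like) primaries, not the monitor characterized in the study.

```python
import numpy as np

def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_white):
    """Build the linear RGB -> XYZ matrix from primary and white chromaticities."""
    def xyz(xy):                       # chromaticity (x, y) -> XYZ with Y = 1
        x, y = xy
        return np.array([x / y, 1.0, (1 - x - y) / y])
    prim = np.column_stack([xyz(xy_r), xyz(xy_g), xyz(xy_b)])
    scale = np.linalg.solve(prim, xyz(xy_white))   # scale primaries so R=G=B=1 gives white
    return prim * scale

# Example primaries and white point (sRGB-like values, for illustration only):
M = rgb_to_xyz_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06), (0.3127, 0.3290))

def linear_rgb_from_xyY(x, y, Y=0.2):
    """Linear RGB for a target chromaticity; values outside [0, 1] are out of gamut."""
    XYZ = np.array([Y * x / y, Y, Y * (1 - x - y) / y])
    return np.linalg.solve(M, XYZ)
```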

  15. Composition of a Vision Screen for Servicemembers With Traumatic Brain Injury: Consensus Using a Modified Nominal Group Technique

    Science.gov (United States)

    Finkelstein, Marsha; Llanos, Imelda; Scheiman, Mitchell; Wagener, Sharon Gowdy

    2014-01-01

    Vision impairment is common in the first year after traumatic brain injury (TBI), including among service members whose brain injuries occurred during deployment in Iraq and Afghanistan. Occupational therapy practitioners provide routine vision screening to inform treatment planning and referral to vision specialists, but existing methods are lacking because many tests were developed for children and do not screen for vision dysfunction typical of TBI. An expert panel was charged with specifying the composition of a vision screening protocol for servicemembers with TBI. A modified nominal group technique fostered discussion and objective determinations of consensus. After considering 29 vision tests, the panel recommended a nine-test vision screening that examines functional performance, self-reported problems, far–near acuity, reading, accommodation, convergence, eye alignment and binocular vision, saccades, pursuits, and visual fields. Research is needed to develop reliable, valid, and clinically feasible vision screening protocols to identify TBI-related vision disorders in adults. PMID:25005505

  16. 3D panorama stereo visual perception centering on the observers

    International Nuclear Information System (INIS)

    Tang, YiPing; Zhou, Jingkai; Xu, Haitao; Xiang, Yun

    2015-01-01

    For existing three-dimensional (3D) laser scanners, acquiring geometry and color information of the objects simultaneously is difficult. Moreover, the current techniques cannot store, modify, and model the point clouds efficiently. In this work, we have developed a novel sensor system, which is called active stereo omni-directional vision sensor (ASODVS), to address those problems. ASODVS is an integrated system composed of a single-view omni-directional vision sensor and a mobile planar green laser generator platform. Driven by a stepper motor, the laser platform can move vertically along the axis of the ASODVS. During the scanning of the laser generators, the panoramic images of the environment are captured and the characteristics and space location information of the laser points are calculated accordingly. Based on the image information of the laser points, the 3D space can be reconstructed. Experimental results demonstrate that the proposed ASODVS system can measure and reconstruct the 3D space in real-time and with high quality. (paper)

  17. The potential risk of personal stereo players

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez, Rodrigo Pizarro; Reuter, Karen

    2010-01-01

    The technological development within personal stereo systems, such as MP3 players, e.g. iPods, has changed music listening habits from home entertainment to everyday and everywhere use. The technology has developed considerably, since the introduction of cassette players and CD walkmen. High-level low-distortion music is produced by minimal devices which can play for long periods. In this paper, the existing literature on effects of personal stereo systems is reviewed, incl. studies of exposure levels, and effects on hearing. Generally, it is found that the levels being used are of concern, which in one study is demonstrated to relate to the specific use in situations with high levels of background noise. Another study demonstrates that the effect of using personal stereo is comparable to that of being exposed to noise in industry. The results are discussed in view of the measurement

  18. Binocular depth processing in the ventral visual pathway

    OpenAIRE

    Verhoef, Bram-Ernst; Vogels, Rufin; Janssen, Peter

    2016-01-01

    One of the most powerful forms of depth perception capitalizes on the small relative displacements, or binocular disparities, in the images projected onto each eye. The brain employs these disparities to facilitate various computations, including sensori-motor transformations (reaching, grasping), scene segmentation and object recognition. In accordance with these different functions, disparity activates a large number of regions in the brain of both humans and monkeys. Here, we review how di...

  19. Stereo-tomography in triangulated models

    Science.gov (United States)

    Yang, Kai; Shao, Wei-Dong; Xing, Feng-yuan; Xiong, Kai

    2018-04-01

    Stereo-tomography is a distinctive tomographic method. It is capable of estimating the scatterer position, the local dip of scatterer and the background velocity simultaneously. Building a geologically consistent velocity model is always appealing for applied and earthquake seismologists. Differing from the previous work to incorporate various regularization techniques into the cost function of stereo-tomography, we think extending stereo-tomography to the triangulated model will be the most straightforward way to achieve this goal. In this paper, we provided all the Fréchet derivatives of stereo-tomographic data components with respect to model components for slowness-squared triangulated model (or sloth model) in 2D Cartesian coordinate based on the ray perturbation theory for interfaces. A sloth model representation means a sparser model representation when compared with conventional B-spline model representation. A sparser model representation leads to a smaller scale of stereo-tomographic (Fréchet) matrix, a higher-accuracy solution when solving linear equations, a faster convergence rate and a lower requirement for quantity of data space. Moreover, a quantitative representation of interface strengthens the relationships among different model components, which makes the cross regularizations among these model components, such as node coordinates, scatterer coordinates and scattering angles, etc., more straightforward and easier to be implemented. The sensitivity analysis, the model resolution matrix analysis and a series of synthetic data examples demonstrate the correctness of the Fréchet derivatives, the applicability of the regularization terms and the robustness of the stereo-tomography in triangulated model. It provides a solid theoretical foundation for the real applications in the future.

  20. Binocular Rivalry in a Competitive Neural Network with Synaptic Depression

    KAUST Repository

    Kilpatrick, Zachary P.

    2010-01-01

    We study binocular rivalry in a competitive neural network with synaptic depression. In particular, we consider two coupled hypercolumns within primary visual cortex (V1), representing orientation selective cells responding to either left or right eye inputs. Coupling between hypercolumns is dominated by inhibition, especially for neurons with dissimilar orientation preferences. Within hypercolumns, recurrent connectivity is excitatory for similar orientations and inhibitory for different orientations. All synaptic connections are modifiable by local synaptic depression. When the hypercolumns are driven by orthogonally oriented stimuli, it is possible to induce oscillations that are representative of binocular rivalry. We first analyze the occurrence of oscillations in a space-clamped version of the model using a fast-slow analysis, taking advantage of the fact that depression evolves much slower than population activity. We then analyze the onset of oscillations in the full spatially extended system by carrying out a piecewise smooth stability analysis of single (winner-take-all) and double (fusion) bumps within the network. Although our stability analysis takes into account only instabilities associated with real eigenvalues, it identifies points of instability that are consistent with what is found numerically. In particular, we show that, in regions of parameter space where double bumps are unstable and no single bumps exist, binocular rivalry can arise as a slow alternation between either population supporting a bump. © 2010 Society for Industrial and Applied Mathematics.
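
    A space-clamped sketch of the mechanism described above: two mutually inhibiting populations whose cross-inhibition is weakened by slow synaptic depression alternate in dominance. The rate equations and parameter values below are illustrative, not those of the paper's spatially extended model.

```python
import numpy as np

def simulate(T=20.0, dt=1e-3, I=1.0, w=1.5, beta=1.5, tau=0.01, tau_d=1.0):
    """Two-population rate model with depressing cross-inhibition (Euler integration)."""
    f = lambda x: 1.0 / (1.0 + np.exp(-10.0 * (x - 0.2)))   # firing-rate nonlinearity
    n = int(T / dt)
    u = np.zeros((n, 2))          # population activities (left eye, right eye)
    q = np.ones((n, 2))           # depression variables on each population's output synapses
    u[0] = [0.6, 0.1]             # slight initial bias toward one population
    for t in range(n - 1):
        inhibition = w * q[t, ::-1] * u[t, ::-1]             # depressing cross-inhibition
        u[t + 1] = u[t] + dt / tau * (-u[t] + f(I - inhibition))
        q[t + 1] = q[t] + dt / tau_d * (1.0 - q[t] - beta * q[t] * u[t])
    return u, q   # for suitably strong inhibition and depression, u[:, 0] and u[:, 1] alternate
```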

  1. Standard Test Method for Measuring Binocular Disparity in Transparent Parts

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers the amount of binocular disparity that is induced by transparent parts such as aircraft windscreens, canopies, HUD combining glasses, visors, or goggles. This test method may be applied to parts of any size, shape, or thickness, individually or in combination, so as to determine the contribution of each transparent part to the overall binocular disparity present in the total “viewing system” being used by a human operator. 1.2 This test method represents one of several techniques that are available for measuring binocular disparity, but is the only technique that yields a quantitative figure of merit that can be related to operator visual performance. 1.3 This test method employs apparatus currently being used in the measurement of optical angular deviation under Method F 801. 1.4 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not con...

  2. Living with vision loss

    Science.gov (United States)

    Diabetes - vision loss; Retinopathy - vision loss; Low vision; Blindness - vision loss ... of visual aids. Some options include: magnifiers; high-power reading glasses; devices that make it easier to ...

  3. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  4. Preliminary results from the use of the novel Interactive binocular treatment (I-BiT) system, in the treatment of strabismic and anisometropic amblyopia.

    Science.gov (United States)

    Waddingham, P E; Butler, T K H; Cobb, S V; Moody, A D R; Comaish, I F; Haworth, S M; Gregson, R M; Ash, I M; Brown, S M; Eastgate, R M; Griffiths, G D

    2006-03-01

    We have developed a novel application of adapted virtual reality (VR) technology, for the binocular treatment of amblyopia. We describe the use of the system in six children. Subjects consisted of three conventional treatment 'failures' and three conventional treatment 'refusers', with a mean age of 6.25 years (5.42-7.75 years). Treatment consisted of watching video clips and playing interactive games with specifically designed software to allow streamed binocular image presentation. Initial vision in the amblyopic eye ranged from 6/12 to 6/120 and post-treatment 6/7.5 to 6/24-1. Total treatment time was a mean of 4.4 h. Five out of six children have shown an improvement in their vision (average increase of 10 letters), including those who had previously failed to comply with conventional occlusion. Improvements in vision were demonstrable within a short period of time, in some children after 1 h of treatment. This system is an exciting and promising application of VR technology as a new treatment for amblyopia.

  5. Micro Vision

    OpenAIRE

    Ohba, Kohtaro; Ohara, Kenichi

    2007-01-01

    In the field of micro vision, there has been little research compared with the macro environment. However, by applying results from macro-scale computer vision techniques, it is possible to measure and observe the micro environment. Moreover, based on the effects particular to the micro environment, it is possible to discover new theories and new techniques.

  6. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia

    OpenAIRE

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-01-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. Anisometropic amblyopes (n = 13) were asked to complete two psychophysical supra-threshold binoc...

  7. Head Pose Estimation from Passive Stereo Images

    DEFF Research Database (Denmark)

    Breitenstein, Michael D.; Jensen, Jeppe; Høilund, Carsten

    2009-01-01

    function. Our algorithm incorporates 2D and 3D cues to make the system robust to low-quality range images acquired by passive stereo systems. It handles large pose variations (of ±90 ° yaw and ±45 ° pitch rotation) and facial variations due to expressions or accessories. For a maximally allowed error of 30...

  8. Multi-hypothesis distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Zamarin, Marco; Forchhammer, Søren

    2013-01-01

    for stereo sequences, exploiting an interpolated intra-view SI and two inter-view SIs. The quality of the SI has a major impact on the DVC Rate-Distortion (RD) performance. As the inter-view SIs individually present lower RD performance compared with the intra-view SI, we propose multi-hypothesis decoding...

  9. Local Stereo Matching Using Adaptive Local Segmentation

    NARCIS (Netherlands)

    Damjanovic, S.; van der Heijden, Ferdinand; Spreeuwers, Lieuwe Jan

    We propose a new dense local stereo matching framework for gray-level images based on an adaptive local segmentation using a dynamic threshold. We define a new validity domain of the fronto-parallel assumption based on the local intensity variations in the 4-neighborhood of the matching pixel. The

  10. Visual memory for objects following foveal vision loss.

    Science.gov (United States)

    Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B; Pollmann, Stefan

    2015-09-01

    Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual long-term memory for objects in patients suffering from foveal vision loss due to age-related macular degeneration. We measured patients' change detection sensitivity after a period of free scene exploration monocularly with their worse eye when possible, and under binocular vision, comparing sensitivity and eye movements to matched normal-sighted controls. A highly salient cue was used to capture attention to a nontarget location before a target change occurred in half of the trials, ensuring that change detection relied on memory. Patients' monocular and binocular sensitivity to object change was comparable to controls, even after more than 4 intervening fixations, and not significantly correlated with visual impairment. We conclude that extrafoveal vision suffices for efficient encoding into visual long-term memory. (c) 2015 APA, all rights reserved.

  11. Stereoselectivity in metallocene-catalyzed coordination polymerization of renewable methylene butyrolactones: From stereo-random to stereo-perfect polymers

    KAUST Repository

    Chen, Xia; Caporaso, Lucia; Cavallo, Luigi; Chen, Eugene You Xian

    2012-01-01

    Coordination polymerization of renewable α-methylene-γ-(methyl) butyrolactones by chiral C 2-symmetric zirconocene catalysts produces stereo-random, highly stereo-regular, or perfectly stereo-regular polymers, depending on the monomer and catalyst structures. Computational studies yield a fundamental understanding of the stereocontrol mechanism governing these new polymerization reactions mediated by chiral metallocenium catalysts. © 2012 American Chemical Society.

  12. Stereoselectivity in metallocene-catalyzed coordination polymerization of renewable methylene butyrolactones: From stereo-random to stereo-perfect polymers

    KAUST Repository

    Chen, Xia

    2012-05-02

    Coordination polymerization of renewable α-methylene-γ-(methyl) butyrolactones by chiral C 2-symmetric zirconocene catalysts produces stereo-random, highly stereo-regular, or perfectly stereo-regular polymers, depending on the monomer and catalyst structures. Computational studies yield a fundamental understanding of the stereocontrol mechanism governing these new polymerization reactions mediated by chiral metallocenium catalysts. © 2012 American Chemical Society.

  13. Binocular eye movement control and motion perception: what is being tracked?

    Science.gov (United States)

    van der Steen, Johannes; Dits, Joyce

    2012-10-19

    We investigated under what conditions humans can make independent slow phase eye movements. The ability to make independent movements of the two eyes generally is attributed to few specialized lateral eyed animal species, for example chameleons. In our study, we showed that humans also can move the eyes in different directions. To maintain binocular retinal correspondence independent slow phase movements of each eye are produced. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal direction. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The importance of binocular fusion on independency of the movements of the two eyes was investigated with anti-correlated stimuli. The perceived global motion pattern of anti-correlated dichoptic stimuli was perceived as an oblique oscillatory motion, as well as resulted in a conjugate oblique motion of the eyes. We propose that the ability to make independent slow phase eye movements in humans is used to maintain binocular retinal correspondence. Eye-of-origin and binocular information are used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information and independent slow phase eye movements of each eye are produced during binocular tracking.

  14. Dynamic Programming and Graph Algorithms in Computer Vision*

    Science.gov (United States)

    Felzenszwalb, Pedro F.; Zabih, Ramin

    2013-01-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting, since by carefully exploiting problem structure they often provide non-trivial guarantees concerning solution quality. In this paper we briefly review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo; the mid-level problem of interactive object segmentation; and the high-level problem of model-based recognition. PMID:20660950
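
    As an example of the low-level use of dynamic programming discussed above, the sketch below computes a disparity map by DP along each scanline of a rectified gray-level pair with a linear smoothness penalty; it is a simplified illustration, not any specific published algorithm.

```python
import numpy as np

def dp_scanline_stereo(left, right, max_disp=32, lam=4.0):
    """Rectified gray-level images in; per-row DP disparity map out."""
    left, right = left.astype(float), right.astype(float)
    H, W = left.shape
    disp = np.zeros((H, W), dtype=int)
    ds = np.arange(max_disp)
    for y in range(H):
        # matching cost C[x, d] = |L(y, x) - R(y, x - d)|, large penalty where undefined
        C = np.full((W, max_disp), 255.0)
        for d in range(max_disp):
            C[d:, d] = np.abs(left[y, d:] - right[y, :W - d])
        # forward pass: D[x, d] = C[x, d] + min_d' ( D[x-1, d'] + lam * |d - d'| )
        D = C.copy()
        back = np.zeros((W, max_disp), dtype=int)
        for x in range(1, W):
            trans = D[x - 1][:, None] + lam * np.abs(ds[:, None] - ds[None, :])
            back[x] = np.argmin(trans, axis=0)       # best previous disparity per current d
            D[x] += trans[back[x], ds]
        # backtrack the minimum-cost disparity path along the scanline
        disp[y, W - 1] = int(np.argmin(D[W - 1]))
        for x in range(W - 1, 0, -1):
            disp[y, x - 1] = back[x, disp[y, x]]
    return disp
```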

  15. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  16. Predicting Vision-Related Disability in Glaucoma.

    Science.gov (United States)

    Abe, Ricardo Y; Diniz-Filho, Alberto; Costa, Vital P; Wu, Zhichao; Medeiros, Felipe A

    2018-01-01

    To present a new methodology for investigating predictive factors associated with development of vision-related disability in glaucoma. Prospective, observational cohort study. Two hundred thirty-six patients with glaucoma followed up for an average of 4.3±1.5 years. Vision-related disability was assessed by the 25-item National Eye Institute Visual Function Questionnaire (NEI VFQ-25) at baseline and at the end of follow-up. A latent transition analysis model was used to categorize NEI VFQ-25 results and to estimate the probability of developing vision-related disability during follow-up. Patients were tested with standard automated perimetry (SAP) at 6-month intervals, and evaluation of rates of visual field change was performed using mean sensitivity (MS) of the integrated binocular visual field. Baseline disease severity, rate of visual field loss, and duration of follow-up were investigated as predictive factors for development of disability during follow-up. The relationship between baseline and rates of visual field deterioration and the probability of vision-related disability developing during follow-up. At baseline, 67 of 236 (28%) glaucoma patients were classified as disabled based on NEI VFQ-25 results, whereas 169 (72%) were classified as nondisabled. Patients classified as nondisabled at baseline had 14.2% probability of disability developing during follow-up. Rates of visual field loss as estimated by integrated binocular MS were almost 4 times faster for those in whom disability developed versus those in whom it did not (-0.78±1.00 dB/year vs. -0.20±0.47 dB/year, respectively; P disability developing over time (odds ratio [OR], 1.34; 95% confidence interval [CI], 1.06-1.70; P = 0.013). In addition, each 0.5-dB/year faster rate of loss of binocular MS during follow-up was associated with a more than 3.5 times increase in the risk of disability developing (OR, 3.58; 95% CI, 1.56-8.23; P = 0.003). A new methodology for classification and analysis

  17. Stereo-particle image velocimetry uncertainty quantification

    International Nuclear Information System (INIS)

    Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
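
    The propagation step can be illustrated with a simplified symmetric two-camera reconstruction, W = (u1 - u2)/(tan a1 + tan a2) (magnification and time-separation factors omitted), combining elemental uncertainties in quadrature through the analytic sensitivities. This is only an illustration under assumed geometry; the paper's framework additionally propagates the registration disparity into the calibration mapping-function coefficients.

```python
import numpy as np

def w_uncertainty(u1, u2, a1, a2, su1, su2, sa1, sa2):
    """First-order propagation of planar (su1, su2) and angle (sa1, sa2) uncertainties
    into the out-of-plane component W for a simplified symmetric stereo arrangement."""
    S = np.tan(a1) + np.tan(a2)
    dW_du1, dW_du2 = 1.0 / S, -1.0 / S
    dW_da1 = -(u1 - u2) / S**2 / np.cos(a1)**2
    dW_da2 = -(u1 - u2) / S**2 / np.cos(a2)**2
    return np.sqrt((dW_du1 * su1)**2 + (dW_du2 * su2)**2 +
                   (dW_da1 * sa1)**2 + (dW_da2 * sa2)**2)

# e.g. camera half-angles of 35 deg, planar uncertainties of 0.1 px, 0.2 deg angle uncertainty:
print(w_uncertainty(1.0, -1.0, np.radians(35), np.radians(35),
                    0.1, 0.1, np.radians(0.2), np.radians(0.2)))
```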

  18. Looking above the prairie: localized and upward acute vision in a native grassland bird.

    Science.gov (United States)

    Tyrrell, Luke P; Moore, Bret A; Loftis, Christopher; Fernández-Juricic, Esteban

    2013-12-02

    Visual systems of open habitat vertebrates are predicted to have a band of acute vision across the retina (visual streak) and wide visual coverage to gather information along the horizon. We tested whether the eastern meadowlark (Sturnella magna) had this visual configuration given that it inhabits open grasslands. Contrary to our expectations, the meadowlark retina has a localized spot of acute vision (fovea) and relatively narrow visual coverage. The fovea projects above rather than towards the horizon with the head at rest, and individuals modify their body posture in tall grass to maintain a similar foveal projection. Meadowlarks have relatively large binocular fields and can see their bill tips, which may help with their probe-foraging technique. Overall, meadowlark vision does not fit the profile of vertebrates living in open habitats. The binocular field may control foraging while the fovea may be used for detecting and tracking aerial stimuli (predators, conspecifics).

  19. Multi-UAV joint target recognizing based on binocular vision theory

    Directory of Open Access Journals (Sweden)

    Yuan Zhang

    2017-01-01

    Full Text Available Target recognition by an unmanned aerial vehicle (UAV) based on image processing takes advantage of the 2D information contained in the image to identify the target. Compared to a single UAV with an electro-optical tracking system (EOTS), multiple UAVs with EOTS can take a group of images focused on the suspected target from multiple viewpoints. By matching each pair of images in this group, the set of matched feature points implicitly encodes the depth of each point. The coordinates of the target feature points can then be computed from these depths. This depth information makes up a cloud of points from which an exclusive 3D model is reconstructed for the recognition system. Considering that target recognition does not require a precise target model, the cloud of feature points is regrouped into n subsets and reconstructed into a semi-3D model. Casting these subsets into a Cartesian coordinate system and applying the resulting projections to convolutional neural networks (CNNs) respectively, the integrated output of the networks is the improved recognition result.
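
    A minimal sketch of the depth-recovery step: matched feature points from two calibrated viewpoints are triangulated into a 3D point cloud. The projection matrices and the matched pixel arrays are assumed to come from the UAVs' calibrated cameras and a feature matcher; the function below is illustrative, not the paper's pipeline.

```python
import cv2
import numpy as np

def triangulate_cloud(P1, P2, pts1, pts2):
    """P1, P2: 3x4 projection matrices; pts1, pts2: (N, 2) matched pixel coordinates."""
    X = cv2.triangulatePoints(P1.astype(float), P2.astype(float),
                              pts1.T.astype(float), pts2.T.astype(float))
    X = (X[:3] / X[3]).T            # homogeneous -> Euclidean, shape (N, 3)
    return X                        # point cloud feeding the semi-3D model / CNN projections
```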

  20. Recent developments for the Large Binocular Telescope Guiding Control Subsystem

    Science.gov (United States)

    Golota, T.; De La Peña, M. D.; Biddick, C.; Lesser, M.; Leibold, T.; Miller, D.; Meeks, R.; Hahn, T.; Storm, J.; Sargent, T.; Summers, D.; Hill, J.; Kraus, J.; Hooper, S.; Fisher, D.

    2014-07-01

    The Large Binocular Telescope (LBT) has eight Acquisition, Guiding, and wavefront Sensing Units (AGw units). They provide guiding and wavefront sensing capability at eight different locations at both direct and bent Gregorian focal stations. Recent additions of focal stations for PEPSI and MODS instruments doubled the number of focal stations in use including respective motion, camera controller server computers, and software infrastructure communicating with Guiding Control Subsystem (GCS). This paper describes the improvements made to the LBT GCS and explains how these changes have led to better maintainability and contributed to increased reliability. This paper also discusses the current GCS status and reviews potential upgrades to further improve its performance.

  1. Magnitude, precision, and realism of depth perception in stereoscopic vision.

    Science.gov (United States)

    Hibbard, Paul B; Haines, Alice E; Hornsey, Rebecca L

    2017-01-01

    Our perception of depth is substantially enhanced by the fact that we have binocular vision. This provides us with more precise and accurate estimates of depth and an improved qualitative appreciation of the three-dimensional (3D) shapes and positions of objects. We assessed the link between these quantitative and qualitative aspects of 3D vision. Specifically, we wished to determine whether the realism of apparent depth from binocular cues is associated with the magnitude or precision of perceived depth and the degree of binocular fusion. We presented participants with stereograms containing randomly positioned circles and measured how the magnitude, realism, and precision of depth perception varied with the size of the disparities presented. We found that as the size of the disparity increased, the magnitude of perceived depth increased, while the precision with which observers could make depth discrimination judgments decreased. Beyond an initial increase, depth realism decreased with increasing disparity magnitude. This decrease occurred well below the disparity limit required to ensure comfortable viewing.

  2. A two-level real-time vision machine combining coarse and fine grained parallelism

    DEFF Research Database (Denmark)

    Jensen, Lars Baunegaard With; Kjær-Nielsen, Anders; Pauwels, Karl

    2010-01-01

    In this paper, we describe a real-time vision machine that has a stereo camera as input and generates visual information on two different levels of abstraction. The system provides visual low-level and mid-level information in terms of dense stereo and optical flow, egomotion, indicating areas ... a factor 90 and a reduction of latency of a factor 26 compared to processing on a single CPU core. Since the vision machine provides generic visual information, it can be used in many contexts. Currently it is used in a driver assistance context as well as in two robotic applications.

  3. Depth of Monocular Elements in a Binocular Scene: The Conditions for da Vinci Stereopsis

    Science.gov (United States)

    Cook, Michael; Gillam, Barbara

    2004-01-01

    Quantitative depth based on binocular resolution of visibility constraints is demonstrated in a novel stereogram representing an object, visible to 1 eye only, and seen through an aperture or camouflaged against a background. The monocular region in the display is attached to the binocular region, so that the stereogram represents an object which…

  4. The future of binocular rivalry research: reaching through a window on consciousness

    NARCIS (Netherlands)

    Klink, P. Christiaan; van Wezel, Richard Jack Anton; van Ee, Raymond; Miller, Steven M.

    2013-01-01

    Binocular rivalry is often considered an experimental window on the neural processes of consciousness. We propose three distinct approaches to exploit this window. First, one may look through the window, using binocular rivalry as a passive tool to dissociate unaltered sensory input from wavering

  5. Evaluation of binocular function among pre- and early-presbyopes with asthenopia

    Directory of Open Access Journals (Sweden)

    Reindel W

    2018-01-01

    Full Text Available William Reindel,1 Lening Zhang,1 Joseph Chinn,2 Marjorie Rah1 1Vision Care, Bausch & Lomb Inc, Rochester, NY, 2J Chinn LLC, Lafayette, CO, USA Purpose: Individuals approaching presbyopia may exhibit ocular symptoms as they contend with visual demands of near work, coupled with natural age-related changes in accommodation. Therefore, accommodation and vergence of 30- to 40-year-old, myopic, soft contact lens wearing subjects with symptoms of asthenopia and no history of using multifocal lenses were evaluated. Patients and methods: In this prospective, observational study, 253 subjects with asthenopia were evaluated by 25 qualified practitioners, each at a different clinical site. Subjects were 30–40 years in age, had symptoms of soreness, eyestrain, tired eyes, or headaches with near work, regularly performed 2–3 consecutive hours of near work, and were undiagnosed with presbyopia. Amplitude of accommodation (AC) and near point convergence (NPC) were measured with a Royal Air Force binocular gauge. Triplicate push up and push down AC and NPC measures were recorded, and average AC values were compared to those calculated using the Hofstetter formula (HF). Results: The average AC push up/push down value was significantly better than the HF prediction for this age range (8.04±3.09 vs 6.23±0.80 D), although 22.5% of subjects had mean AC below their HF value (5.36±0.99 D). The average NPC push up/push down value was 12.0±4.69 cm. The mean binocular AC value using the push up measure was significantly better than the push down measure (8.5±3.4 vs 7.6±3.0 D). The mean NPC value using the push up measure was significantly worse than the push down measure (13.0±5.0 vs 11.0±4.7 cm). The most frequent primary diagnosis was ill-sustained accommodation (54%), followed by accommodative insufficiency (18%), and accommodative infacility (12%). Conclusion: Based upon a standardized assessment of accommodation and vergence, ill-sustained accommodation was the

  6. Inexpensive driver for stereo videogame glasses

    Science.gov (United States)

    Pique, Michael; Coogan, Anthony

    1990-09-01

    We have adapted home videogame glasses from Sega as workstation stereo viewers. A small (4x7x9 cm.) box of electronics receives sync signals in parallel with the monitor (either separate RGB-Sync or composite video) and drives the glasses. The view is dimmer than with costlier shutters, there is more ghosting, and the user is tethered by the wires. But the glasses are so much cheaper than the full-screen shutters (250 instead of about 10 000) that it is practical to provide the benefits of stereo to many more workstation users. We are using them with Sun TAAC-1 workstations; the interlaced video can also be recorded on ordinary NTSC videotape and played on television monitors.

  7. Stereo Viewing System. Innovative Technology Summary Report

    International Nuclear Information System (INIS)

    None

    2000-01-01

    The Stereo Viewing System provides stereoscopic viewing of Light Duty Utility Arm activities. Stereoscopic viewing allows operators to see the depth of objects. This capability improves the control of the Light Duty Utility Arm performed in DOE's underground radioactive waste storage tanks and allows operators to evaluate the depth of pits, seams, and other anomalies. Potential applications include Light Duty Utility Arm deployment operations at the Oak Ridge Reservation, Hanford Site, and the Idaho National Engineering and Environmental Laboratory

  8. Crossmodal Semantic Constraints on Visual Perception of Binocular Rivalry

    Directory of Open Access Journals (Sweden)

    Yi-Chuan Chen

    2011-10-01

    Full Text Available Environments typically convey contextual information via several different sensory modalities. Here, we report a study designed to investigate the crossmodal semantic modulation of visual perception using the binocular rivalry paradigm. The participants viewed a dichoptic figure consisting of a bird and a car presented to each eye, while also listening to either a bird singing or car engine revving. Participants' dominant percepts were modulated by the presentation of a soundtrack associated with either bird or car, as compared to the presentation of a soundtrack irrelevant to both visual figures (tableware clattering together in a restaurant). No such crossmodal semantic effect was observed when the participants maintained an abstract semantic cue in memory. We then further demonstrate that crossmodal semantic modulation can be dissociated from the effects of high-level attentional control over the dichoptic figures and of low-level luminance contrast of the figures. In sum, we demonstrate a novel crossmodal effect in terms of crossmodal semantic congruency on binocular rivalry. This effect can be considered a perceptual grouping or contextual constraint on human visual awareness through mid-level crossmodal excitatory connections embedded in the multisensory semantic network.

  9. Separating monocular and binocular neural mechanisms mediating chromatic contextual interactions.

    Science.gov (United States)

    D'Antona, Anthony D; Christiansen, Jens H; Shevell, Steven K

    2014-04-17

    When seen in isolation, a light that varies in chromaticity over time is perceived to oscillate in color. Perception of that same time-varying light may be altered by a surrounding light that is also temporally varying in chromaticity. The neural mechanisms that mediate these contextual interactions are the focus of this article. Observers viewed a central test stimulus that varied in chromaticity over time within a larger surround that also varied in chromaticity at the same temporal frequency. Center and surround were presented either to the same eye (monocular condition) or to opposite eyes (dichoptic condition) at the same frequency (3.125, 6.25, or 9.375 Hz). Relative phase between center and surround modulation was varied. In both the monocular and dichoptic conditions, the perceived modulation depth of the central light depended on the relative phase of the surround. A simple model implementing a linear combination of center and surround modulation fit the measurements well. At the lowest temporal frequency (3.125 Hz), the surround's influence was virtually identical for monocular and dichoptic conditions, suggesting that at this frequency, the surround's influence is mediated primarily by a binocular neural mechanism. At higher frequencies, the surround's influence was greater for the monocular condition than for the dichoptic condition, and this difference increased with temporal frequency. Our findings show that two separate neural mechanisms mediate chromatic contextual interactions: one binocular and dominant at lower temporal frequencies and the other monocular and dominant at higher frequencies (6-10 Hz).
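
    As a rough illustration of such a linear-combination model (a sketch only; the functional form, the single surround weight, and the data values below are assumptions, not the authors' fit), the perceived modulation depth of the centre can be written as the magnitude of the centre modulation plus a weighted, phase-shifted copy of the surround modulation, with the weight estimated by least squares:

        # Minimal sketch, assuming the perceived centre modulation behaves like
        # |centre + w * surround * exp(i * relative_phase)| with a single free weight w.
        # The "perceived" values below are illustrative, not measured data.
        import numpy as np
        from scipy.optimize import least_squares

        phase = np.deg2rad(np.arange(0, 360, 45))                  # relative phases tested
        perceived = np.array([0.55, 0.68, 0.95, 1.22, 1.38, 1.25, 0.98, 0.70])

        def model(w, phi):
            return np.abs(1.0 + w * np.exp(1j * phi))              # linear centre + surround sum

        fit = least_squares(lambda w: model(w[0], phase) - perceived, x0=[0.0])
        print("fitted surround weight:", fit.x[0])                 # negative -> in-phase suppression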

  10. Moving toward queue operations at the Large Binocular Telescope Observatory

    Science.gov (United States)

    Edwards, Michelle L.; Summers, Doug; Astier, Joseph; Suarez Sola, Igor; Veillet, Christian; Power, Jennifer; Cardwell, Andrew; Walsh, Shane

    2016-07-01

    The Large Binocular Telescope Observatory (LBTO), a joint scientific venture between the Istituto Nazionale di Astrofisica (INAF), LBT Beteiligungsgesellschaft (LBTB), University of Arizona, Ohio State University (OSU), and the Research Corporation, is one of the newest additions to the world's collection of large optical/infrared ground-based telescopes. With its unique, twin 8.4m mirror design providing a 22.8 meter interferometric baseline and the collecting area of an 11.8m telescope, LBT has a window of opportunity to exploit its singular status as the "first" of the next generation of Extremely Large Telescopes (ELTs). Prompted by urgency to maximize scientific output during this favorable interval, LBTO recently re-evaluated its operations model and developed a new strategy that augments classical observing with queue. Aided by trained observatory staff, queue mode will allow for flexible, multi-instrument observing responsive to site conditions. Our plan is to implement a staged rollout that will provide many of the benefits of queue observing sooner rather than later - with more bells and whistles coming in future stages. In this paper, we outline LBTO's new scientific model, focusing specifically on our "lean" resourcing and development, reuse and adaptation of existing software, challenges presented by our one-of-a-kind binocular operations, and lessons learned. We also outline further stages of development and our ultimate goals for queue.

  11. Remote landslide mapping using a laser rangefinder binocular and GPS

    Directory of Open Access Journals (Sweden)

    M. Santangelo

    2010-12-01

    Full Text Available We tested a high-quality laser rangefinder binocular coupled with a GPS receiver connected to a Tablet PC running dedicated software to help recognize and map in the field recent rainfall-induced landslides. The system was tested in the period between March and April 2010, in the Monte Castello di Vibio area, Umbria, Central Italy. To test the equipment, we measured thirteen slope failures that were mapped previously during a visual reconnaissance field campaign conducted in February and March 2010. For reference, four slope failures were also mapped by walking the GPS receiver along the landslide perimeter. Comparison of the different mappings revealed that the geographical information obtained remotely for each landslide by the rangefinder binocular and GPS was comparable to the information obtained by walking the GPS around the landslide perimeter, and was superior to the information obtained through the visual reconnaissance mapping. Although our tests were not exhaustive, we maintain that the system is effective to map recent rainfall induced landslides in the field, and we foresee the possibility of using the same (or similar system to map landslides, and other geomorphological features, in other areas.

  12. Stereo-Based Visual Odometry for Autonomous Robot Navigation

    Directory of Open Access Journals (Sweden)

    Ioannis Kostavelis

    2016-02-01

    Full Text Available Mobile robots should possess accurate self-localization capabilities in order to be successfully deployed in their environment. A solution to this challenge may be derived from visual odometry (VO, which is responsible for estimating the robot's pose by analysing a sequence of images. The present paper proposes an accurate, computationally-efficient VO algorithm relying solely on stereo vision images as inputs. The contribution of this work is twofold. Firstly, it suggests a non-iterative outlier detection technique capable of efficiently discarding the outliers of matched features. Secondly, it introduces a hierarchical motion estimation approach that produces refinements to the global position and orientation for each successive step. Moreover, for each subordinate module of the proposed VO algorithm, custom non-iterative solutions have been adopted. The accuracy of the proposed system has been evaluated and compared with competent VO methods along DGPS-assessed benchmark routes. Experimental results of relevance to rough terrain routes, including both simulated and real outdoors data, exhibit remarkable accuracy, with positioning errors lower than 2%.
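
    The overall flavour of one such stereo-VO step can be sketched as follows. This is a generic sketch under stated assumptions, not the paper's algorithm: the distance-consistency outlier test and the closed-form Kabsch/SVD alignment below are standard, non-iterative stand-ins for the custom solutions mentioned in the abstract.

        # Sketch of one visual-odometry step: reject outliers among matched 3D landmarks
        # with a non-iterative distance-consistency test, then estimate the rigid motion
        # between frames in closed form (Kabsch/SVD).
        import numpy as np

        def reject_outliers(P, Q, tol=0.05):
            """Keep matches whose pairwise-distance change between frames is small."""
            dP = np.linalg.norm(P[:, None] - P[None, :], axis=2)
            dQ = np.linalg.norm(Q[:, None] - Q[None, :], axis=2)
            score = np.median(np.abs(dP - dQ), axis=1)     # rigid motion preserves distances
            keep = score < tol
            return P[keep], Q[keep]

        def rigid_motion(P, Q):
            """Closed-form least-squares rotation R and translation t mapping P onto Q."""
            cP, cQ = P.mean(0), Q.mean(0)
            H = (P - cP).T @ (Q - cQ)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            t = cQ - R @ cP
            return R, t

        # P, Q: (N, 3) landmarks triangulated from the stereo pair at times t and t+1
        rng = np.random.default_rng(0)
        P = rng.uniform(-5, 5, (50, 3))
        Q = P + np.array([0.1, 0.0, 0.3])                  # placeholder forward motion
        R, t = rigid_motion(*reject_outliers(P, Q))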

  13. Human machine interface by using stereo-based depth extraction

    Science.gov (United States)

    Liao, Chao-Kang; Wu, Chi-Hao; Lin, Hsueh-Yi; Chang, Ting-Ting; Lin, Tung-Yang; Huang, Po-Kuan

    2014-03-01

    The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibilities of a convincing 3D experience at home, such as three-dimensional television (3DTV), have generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight camera (ToF), can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.
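
    A minimal sketch of this kind of edge-weighted, temporally constrained upscaling is given below. It is illustrative only; the exact energy terms, weights, and solver of the method described above are not reproduced, and image borders simply wrap for brevity.

        # Jacobi iterations on a quadratic error energy: anchor to the sparse ToF samples,
        # smooth along weak image edges, and stay close to the previous frame's depth.
        import numpy as np

        def upscale_depth(depth_sparse, mask, image, depth_prev,
                          lam_s=1.0, lam_t=0.2, sigma=10.0, iters=200):
            """All inputs are HxW float arrays; mask is 1 where a ToF sample exists."""
            D = np.where(mask > 0, depth_sparse, depth_prev).astype(float)
            for _ in range(iters):
                num = mask * depth_sparse + lam_t * depth_prev
                den = mask + lam_t
                for dy, dx in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
                    Dn = np.roll(D, (dy, dx), axis=(0, 1))
                    In = np.roll(image, (dy, dx), axis=(0, 1))
                    w = lam_s * np.exp(-np.abs(image - In) / sigma)   # edge-aware weight
                    num += w * Dn
                    den += w
                D = num / den
            return D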

  14. Opportunity's Surroundings on Sol 1798 (Stereo)

    Science.gov (United States)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11850. NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo 180-degree view of the rover's surroundings during the 1,798th Martian day, or sol, of Opportunity's surface mission (Feb. 13, 2009). North is on top. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 111 meters (364 feet) southward on the preceding sol. Tracks from that drive recede northward in this view. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  15. Spirit Beside 'Home Plate,' Sol 1809 (Stereo)

    Science.gov (United States)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11803. NASA Mars Exploration Rover Spirit used its navigation camera to take the images assembled into this stereo, 120-degree view southward after a short drive during the 1,809th Martian day, or sol, of Spirit's mission on the surface of Mars (February 3, 2009). By combining images from the left-eye and right-eye sides of the navigation camera, the view appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Spirit had driven about 2.6 meters (8.5 feet) that sol, continuing a clockwise route around a low plateau called 'Home Plate.' In this image, the rocks visible above the rover's solar panels are on the slope at the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  16. Opportunity's Surroundings on Sol 1687 (Stereo)

    Science.gov (United States)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11739. NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, 360-degree view of the rover's surroundings on the 1,687th Martian day, or sol, of its surface mission (Oct. 22, 2008). The view appears three-dimensional when viewed through red-blue glasses. Opportunity had driven 133 meters (436 feet) that sol, crossing sand ripples up to about 10 centimeters (4 inches) tall. The tracks visible in the foreground are in the east-northeast direction. Opportunity's position on Sol 1687 was about 300 meters southwest of Victoria Crater. The rover was beginning a long trek toward a much larger crater, Endeavour, about 12 kilometers (7 miles) to the southeast. This panorama combines right-eye and left-eye views presented as cylindrical-perspective projections with geometric seam correction.

  17. Merged Shape from Shading and Shape from Stereo for Planetary Topographic Mapping

    Science.gov (United States)

    Tyler, Laurence; Cook, Tony; Barnes, Dave; Parr, Gerhard; Kirk, Randolph

    2014-05-01

    Digital Elevation Models (DEMs) of the Moon and Mars have traditionally been produced from stereo imagery from orbit, or from the surface landers or rovers. One core component of image-based DEM generation is stereo matching to find correspondences between images taken from different viewpoints. Stereo matchers that rely mostly on textural features in the images can fail to find enough matched points in areas lacking in contrast or surface texture. This can lead to blank or topographically noisy areas in resulting DEMs. Fine depth detail may also be lacking due to limited precision and quantisation of the pixel matching process. Shape from shading (SFS), a two dimensional version of photoclinometry, utilizes the properties of light reflecting off surfaces to build up localised slope maps, which can subsequently be combined to extract topography. This works especially well on homogeneous surfaces and can recover fine detail. However the cartographic accuracy can be affected by changes in brightness due to differences in surface material, albedo and light scattering properties, and also by the presence of shadows. We describe here experimental research for the Planetary Robotics Vision Data Exploitation EU FP7 project (PRoViDE) into using stereo generated depth maps in conjunction with SFS to recover both coarse and fine detail of planetary surface DEMs. Our Large Deformation Optimisation Shape From Shading (LDOSFS) algorithm uses image data, illumination, viewing geometry and camera parameters to produce a DEM. A stereo-derived depth map can be used as an initial seed if available. The software uses separate Bidirectional Reflectance Distribution Function (BRDF) and SFS modules for iterative processing and to make the code more portable for future development. Three BRDF models are currently implemented: Lambertian, Blinn-Phong, and Oren-Nayar. A version of the Hapke reflectance function, which is more appropriate for planetary surfaces, is under development
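
    For reference, the three reflectance models named above can be written compactly as below. This is a generic sketch with illustrative parameters; the exact formulations used in LDOSFS may differ in detail.

        # Lambertian, Blinn-Phong and (qualitative) Oren-Nayar reflectance for unit vectors
        # n (normal), l (light direction) and v (view direction).
        import numpy as np

        def _unit(x):
            return x / np.linalg.norm(x)

        def lambertian(n, l, albedo=1.0):
            return albedo * max(0.0, float(np.dot(n, l)))

        def blinn_phong(n, l, v, kd=0.8, ks=0.2, shininess=16):
            h = _unit(l + v)                                    # half vector
            return (kd * max(0.0, float(np.dot(n, l)))
                    + ks * max(0.0, float(np.dot(n, h))) ** shininess)

        def oren_nayar(n, l, v, albedo=1.0, sigma=0.3):
            cos_i = float(np.clip(np.dot(n, l), 0.0, 1.0))
            cos_r = float(np.clip(np.dot(n, v), 0.0, 1.0))
            theta_i, theta_r = np.arccos(cos_i), np.arccos(cos_r)
            l_t, v_t = l - cos_i * n, v - cos_r * n             # tangent-plane components
            denom = np.linalg.norm(l_t) * np.linalg.norm(v_t)
            cos_dphi = float(np.dot(l_t, v_t) / denom) if denom > 1e-9 else 0.0
            s2 = sigma ** 2
            A = 1.0 - 0.5 * s2 / (s2 + 0.33)
            B = 0.45 * s2 / (s2 + 0.09)
            alpha, beta = max(theta_i, theta_r), min(theta_i, theta_r)
            return albedo / np.pi * cos_i * (
                A + B * max(0.0, cos_dphi) * np.sin(alpha) * np.tan(beta))

        n = _unit(np.array([0.0, 0.0, 1.0]))
        l = _unit(np.array([0.3, 0.0, 1.0]))
        v = _unit(np.array([0.0, 0.2, 1.0]))
        print(lambertian(n, l), blinn_phong(n, l, v), oren_nayar(n, l, v))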

  18. Multi-focal Vision and Gaze Control Improve Navigation Performance

    Directory of Open Access Journals (Sweden)

    Kolja Kuehnlenz

    2008-11-01

    Full Text Available Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping of mobile robots with active vision. An implementation of the novel concept is done considering a humanoid robot navigation scenario where the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated and the impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy, and, thus, better task performance, i.e. path following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.

  19. A long baseline global stereo matching based upon short baseline estimation

    Science.gov (United States)

    Li, Jing; Zhao, Hong; Li, Zigang; Gu, Feifei; Zhao, Zixin; Ma, Yueyang; Fang, Meiqi

    2018-05-01

    In global stereo vision, balancing the matching efficiency and computing accuracy seems to be impossible because they contradict each other. In the case of a long baseline, this contradiction becomes more prominent. In order to solve this difficult problem, this paper proposes a novel idea to improve both the efficiency and accuracy in global stereo matching for a long baseline. In this way, the reference images located between the long baseline image pairs are firstly chosen to form the new image pairs with short baselines. The relationship between the disparities of pixels in the image pairs with different baselines is revealed by considering the quantized error so that the disparity search range under the long baseline can be reduced by guidance of the short baseline to gain matching efficiency. Then, the novel idea is integrated into the graph cuts (GCs) to form a multi-step GC algorithm based on the short baseline estimation, by which the disparity map under the long baseline can be calculated iteratively on the basis of the previous matching. Furthermore, the image information from the pixels that are non-occluded under the short baseline but are occluded for the long baseline can be employed to improve the matching accuracy. Although the time complexity of the proposed method depends on the locations of the chosen reference images, it is usually much lower for a long baseline stereo matching than when using the traditional GC algorithm. Finally, the validity of the proposed method is examined by experiments based on benchmark datasets. The results show that the proposed method is superior to the traditional GC method in terms of efficiency and accuracy, and thus it is suitable for long baseline stereo matching.
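
    The guidance relation itself is simple and can be sketched as follows, assuming rectified image pairs with a common focal length; the margin handling inside the actual multi-step graph-cuts algorithm is more involved than this illustration.

        # For rectified cameras, disparity d = f * B / Z, so a short-baseline disparity d_s
        # predicts the long-baseline disparity as d_s * (B_long / B_short); propagating the
        # +/- 0.5 px quantisation error of d_s gives a narrow per-pixel search window.
        import numpy as np

        def long_baseline_search_range(d_short, B_short, B_long, margin_px=1.0):
            ratio = B_long / B_short
            d_pred = d_short * ratio
            half = 0.5 * ratio + margin_px          # quantisation error scaled by the ratio
            return np.floor(d_pred - half), np.ceil(d_pred + half)

        d_short = np.array([[12.0, 13.0], [12.0, 14.0]])    # short-baseline disparity map
        lo, hi = long_baseline_search_range(d_short, B_short=0.1, B_long=0.4)
        # Graph cuts then only needs to consider labels in [lo, hi] at each pixel instead
        # of the full range [0, d_max], which is where the efficiency gain comes from.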

  20. Investigation of 1 : 1,000 Scale Map Generation by Stereo Plotting Using Uav Images

    Science.gov (United States)

    Rhee, S.; Kim, T.

    2017-08-01

    Large scale maps and image mosaics are representative geospatial data that can be extracted from UAV images. Map drawing using UAV images can be performed either by creating orthoimages and digitizing them, or by stereo plotting. While maps generated by digitization may serve the need for geospatial data, many institutions and organizations require map drawing using stereoscopic vision on stereo plotting systems. However, there are several aspects to be checked for UAV images to be utilized for stereo plotting. The first aspect is the accuracy of exterior orientation parameters (EOPs) generated through automated bundle adjustment processes. It is well known that GPS and IMU sensors mounted on a UAV are not very accurate, so it is necessary to adjust the initial EOPs accurately using tie points. For this purpose, we have developed a photogrammetric incremental bundle adjustment procedure. The second aspect is unstable shooting conditions compared to aerial photography. Unstable image acquisition may cause uneven stereo coverage, which will eventually result in accuracy loss. Oblique stereo pairs will also create eye fatigue. The third aspect is the small coverage of UAV images. This aspect raises an efficiency issue for stereo plotting of UAV images and, more importantly, makes contour generation from UAV images very difficult. This paper discusses effects related to these three aspects. In this study, we tried to generate a 1 : 1,000 scale map from the dataset using EOPs generated from software developed in-house. We evaluated the Y-disparity of the tie points extracted automatically through the photogrammetric incremental bundle adjustment process and could confirm that stereoscopic viewing is possible. Stereoscopic plotting work was carried out by a professional photogrammetrist. In order to analyse the accuracy of the map drawing using stereoscopic vision, we compared the horizontal and vertical position difference between adjacent models after drawing a specific
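
    The Y-disparity check mentioned here is straightforward; a minimal sketch with illustrative coordinates, assuming an epipolar-rectified stereo model, is:

        # After a correct bundle adjustment and rectification, corresponding tie points
        # should lie on the same image row, so the residual y-difference (y-parallax)
        # is a quick test that comfortable stereoscopic viewing is possible.
        import numpy as np

        def y_disparity(pts_left, pts_right):
            """pts_*: (N, 2) pixel coordinates of the same tie points in the rectified pair."""
            return pts_left[:, 1] - pts_right[:, 1]

        left = np.array([[512.3, 340.1], [1020.7, 812.4]])   # illustrative tie points
        right = np.array([[498.9, 340.4], [1001.2, 812.0]])
        print("max |y-parallax| (px):", np.abs(y_disparity(left, right)).max())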

  1. INVESTIGATION OF 1 : 1,000 SCALE MAP GENERATION BY STEREO PLOTTING USING UAV IMAGES

    Directory of Open Access Journals (Sweden)

    S. Rhee

    2017-08-01

    Full Text Available Large scale maps and image mosaics are representative geospatial data that can be extracted from UAV images. Map drawing using UAV images can be performed either by creating orthoimages and digitizing them, or by stereo plotting. While maps generated by digitization may serve the need for geospatial data, many institutions and organizations require map drawing using stereoscopic vision on stereo plotting systems. However, there are several aspects to be checked for UAV images to be utilized for stereo plotting. The first aspect is the accuracy of exterior orientation parameters (EOPs) generated through automated bundle adjustment processes. It is well known that GPS and IMU sensors mounted on a UAV are not very accurate, so it is necessary to adjust the initial EOPs accurately using tie points. For this purpose, we have developed a photogrammetric incremental bundle adjustment procedure. The second aspect is unstable shooting conditions compared to aerial photography. Unstable image acquisition may cause uneven stereo coverage, which will eventually result in accuracy loss. Oblique stereo pairs will also create eye fatigue. The third aspect is the small coverage of UAV images. This aspect raises an efficiency issue for stereo plotting of UAV images and, more importantly, makes contour generation from UAV images very difficult. This paper discusses effects related to these three aspects. In this study, we tried to generate a 1 : 1,000 scale map from the dataset using EOPs generated from software developed in-house. We evaluated the Y-disparity of the tie points extracted automatically through the photogrammetric incremental bundle adjustment process and could confirm that stereoscopic viewing is possible. Stereoscopic plotting work was carried out by a professional photogrammetrist. In order to analyse the accuracy of the map drawing using stereoscopic vision, we compared the horizontal and vertical position difference between adjacent models after

  2. Industrial vision

    DEFF Research Database (Denmark)

    Knudsen, Ole

    1998-01-01

    This dissertation is concerned with the introduction of vision-based applications in the ship building industry. The industrial research project is divided into a natural sequence of developments, from basic theoretical projective image generation via CAD and subpixel analysis to a description...... is presented, and the variability of the parameters is examined and described. The concept of using CAD together with vision information is based on the fact that all items processed at OSS have an associated complete 3D CAD model that is accessible at all production states. This concept gives numerous...... possibilities for using vision in applications which otherwise would be very difficult to automate. The requirement for low tolerances in production is, despite the huge dimensions of the items involved, extreme. This fact makes great demands on the ability to do robust subpixel estimation. A new method based...

  3. Delusion and bi-ocular vision.

    Science.gov (United States)

    De Masi, Franco

    2015-10-01

    The delusional experience is the result of a grave disjunction in the psyche whose outcome is not readily predictable. Examination of the specific mode of disjunction may help us understand the nature and radical character of delusion. I will present the therapy of a psychotic patient who after many years of analysis and progresses in his life continues to show delusional episodes although limited and contained. In his case, the two visions, one delusional and the other real, remain distinct and differentiated from each other because they both possess the same perceptual character, that of reality. He has a bi-ocular vision of reality and not a binocular one because his vision lacks integration, as would necessarily be the case if the two visions could be compared with each other. The principle of non-contradiction ceases to apply in delusion. A corollary of the failure of the principle of non-contradiction is that, if a statement and its negation are both true, then any statement is true. Logicians call this consequence the principle of explosion. For this reason, the distinction between truth, reality, improbability, probability, possibility and impossibility is lost in the delusional system, thus triggering an omnipotent, explosive mechanism with a potentially infinite progression. The paper presents some thoughts for a possible analytic transformation of the delusional experience. Copyright © 2015 Institute of Psychoanalysis.

  4. Contrast-balanced binocular treatment in children with deprivation amblyopia.

    Science.gov (United States)

    Hamm, Lisa M; Chen, Zidong; Li, Jinrong; Dai, Shuan; Black, Joanna; Yuan, Junpeng; Yu, Minbin; Thompson, Benjamin

    2017-11-28

    Children with deprivation amblyopia due to childhood cataract have been excluded from much of the emerging research into amblyopia treatment. An investigation was conducted to determine whether contrast-balanced binocular treatment - a strategy currently being explored for children with anisometropic and strabismic amblyopia - may be effective in children with deprivation amblyopia. An unmasked, case-series design intended to assess proof of principle was employed. Eighteen children with deprivation amblyopia due to childhood cataracts (early bilateral n = 7, early unilateral n = 7, developmental n = 4), as well as 10 children with anisometropic (n = 8) or mixed anisometropic and strabismic amblyopia (n = 2) were prescribed one hour a day of treatment over a six-week period. Supervised treatment was available. Visual acuity, contrast sensitivity, global motion perception and interocular suppression were measured pre- and post-treatment. Visual acuity improvements occurred in the anisometropic/strabismic group (0.15 ± 0.05 logMAR, p = 0.014), but contrast sensitivity did not change. As a group, children with deprivation amblyopia had a smaller but statistically significant improvement in weaker eye visual acuity (0.09 ± 0.03 logMAR, p = 0.004), as well as a significant improvement in weaker eye contrast sensitivity (p = 0.004). Subgroup analysis suggested that the children with early bilateral deprivation had the largest improvements, while children with early unilateral cataract did not improve. Interestingly, binocular contrast sensitivity also improved in children with early bilateral deprivation. Global motion perception improved for both subgroups with early visual deprivation, as well as children with anisometropic or mixed anisometropic/strabismic amblyopia. Interocular suppression improved for all subgroups except children with early unilateral deprivation. These data suggest that supervised contrast-balanced binocular

  5. Agrarian Visions.

    Science.gov (United States)

    Theobald, Paul

    A new feature in "Country Teacher," "Agrarian Visions" reminds rural teachers that they can do something about rural decline. Like the populism of the 1890s, the "new populism" advocates rural living. Current attempts to address rural decline are contrary to agrarianism because: (1) telecommunications experts seek to…

  6. Fractured Visions

    DEFF Research Database (Denmark)

    Bonde, Inger Ellekilde

    2016-01-01

    In the post-war period a heterogeneous group of photographers articulate a new photographic approach to the city as a motif in a photographic language that combines intense formalism with subjective vision. This paper analyses the photobook Fragments of a City published in 1960 by Danish photograp...

  7. Embodied Visions

    DEFF Research Database (Denmark)

    Grodal, Torben Kragh

    Embodied Visions presents a groundbreaking analysis of film through the lens of bioculturalism, revealing how human biology as well as human culture determine how films are made and experienced. Throughout the book the author uses the breakthroughs of modern brain science to explain general featu...

  8. Vision Screening

    Science.gov (United States)

    ... an efficient and cost-effective method to identify children with visual impairment or eye conditions that are likely to lead ... main goal of vision screening is to identify children who have or are at ... visual impairment unless treated in early childhood. Other problems that ...

  9. Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity

    Science.gov (United States)

    Zhao, Guang; Zhuang, Qian; Ma, Jie; Tu, Shen; Liu, Qiang; Sun, Hong-jin

    2017-01-01

    Repeated visual context induces higher search efficiency, revealing a contextual cueing effect, which depends on the association between the target and its visual context. In this study, participants performed a visual search task where search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, the contextual cueing effect was obtained (Experiment 1). When depth information was in chaos over repeated configurations, visual search was not facilitated and the contextual cueing effect was largely impaired (Experiment 2). However, when we applied a tiny random displacement to the search items in the 2-dimensional (2D) plane but maintained the depth information constant, the contextual cueing was preserved (Experiment 3). We concluded that the contextual cueing effect was robust in the context provided by 3D space with stereoscopic information, and more importantly, the visual system prioritized stereoscopic information in learning of spatial information when depth information was available. PMID:28912739

  10. Natural Tendency towards Beauty in Humans: Evidence from Binocular Rivalry.

    Science.gov (United States)

    Mo, Ce; Xia, Tiansheng; Qin, Kaixin; Mo, Lei

    2016-01-01

    Although human preference for beauty is common and compelling in daily life, it remains unknown whether such preference is essentially subserved by social cognitive demands or natural tendency towards beauty encoded in the human mind intrinsically. Here we demonstrate experimentally that humans automatically exhibit preference for visual and moral beauty without explicit cognitive efforts. Using a binocular rivalry paradigm, we identified enhanced gender-independent perceptual dominance for physically attractive persons, and the results suggested universal preference for visual beauty based on perceivable forms. Moreover, we also identified perceptual dominance enhancement for characters associated with virtuous descriptions after controlling for facial attractiveness and vigilance-related attention effects, which suggested a similar implicit preference for moral beauty conveyed in prosocial behaviours. Our findings show that behavioural preference for beauty is driven by an inherent natural tendency towards beauty in humans rather than explicit social cognitive processes.

  11. Binocular diplopia in a tertiary hospital: Aetiology, diagnosis and treatment.

    Science.gov (United States)

    Merino, P; Fuentes, D; Gómez de Liaño, P; Ordóñez, M A

    2017-12-01

    To study the causes, diagnosis and treatment in a case series of binocular diplopia. A retrospective chart review was performed on patients seen in the Diplopia Unit of a tertiary centre during a one-year period. Diplopia was classified as: acute (≤1 month since onset); subacute (1-6 months); and chronic (>6 months). Resolution of diplopia was classified as: spontaneous if it disappeared without treatment, partial if the course was intermittent, and non-spontaneous if treatment was required. It was considered a good outcome when diplopia disappeared completely (with or without treatment), or when diplopia was intermittent without significantly affecting the quality of life. A total of 60 cases were included. The mean age was 58.65 years (60% female). An acute or subacute presentation was observed in 60% of the patients. The mean onset of diplopia was 82.97 weeks. The most frequent aetiology was ischaemic (45%). The most frequent diagnosis was sixth nerve palsy (38.3%), followed by decompensated strabismus (30%). Neuroimaging showed structural lesions in 17.7% of the patients. There was a spontaneous resolution in 28.3% of the cases, and there was a good outcome with disappearance of the diplopia in 53.3% at the end of the study. The most frequent causes of binocular diplopia were cranial nerve palsies, especially the sixth cranial nerve, followed by decompensated strabismus. Structural lesions in imaging tests were more frequent than expected. Only one third of patients had a spontaneous resolution, and half of them did not have a good outcome despite treatment. Copyright © 2017 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.

  12. Pancam Peek into 'Victoria Crater' (Stereo)

    Science.gov (United States)

    2006-01-01

    Left-eye and right-eye views of a stereo pair for PIA08776. A drive of about 60 meters (about 200 feet) on the 943rd Martian day, or sol, of Opportunity's exploration of Mars' Meridiani Planum region (Sept. 18, 2006) brought the NASA rover to within about 50 meters (about 160 feet) of the rim of 'Victoria Crater.' This crater has been the mission's long-term destination for the past 21 Earth months. Opportunity reached a location from which the cameras on top of the rover's mast could begin to see into the interior of Victoria. This stereo anaglyph was made from frames taken on sol 943 by the panoramic camera (Pancam) to offer a three-dimensional view when seen through red-blue glasses. It shows the upper portion of interior crater walls facing toward Opportunity from up to about 850 meters (half a mile) away. The amount of vertical relief visible at the top of the interior walls from this angle is about 15 meters (about 50 feet). The exposures were taken through a Pancam filter selecting wavelengths centered on 750 nanometers. Victoria Crater is about five times wider than 'Endurance Crater,' which Opportunity spent six months examining in 2004, and about 40 times wider than 'Eagle Crater,' where Opportunity first landed. The great lure of Victoria is the expectation that a thick stack of geological layers will be exposed in the crater walls, potentially several times the thickness that was previously studied at Endurance and therefore, potentially preserving several times the historical record.

  13. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Gagliano, L.; Bryan, T.; MacLeod, T.

    On-orbit small debris tracking and characterization is a technical gap in current national Space Situational Awareness that is necessary to safeguard orbital assets and crew, and it poses a major risk of MOD damage to the ISS and exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or restricted operations on board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in the proximity of the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects versus ground RADAR tracking and small OD models. The cameras are based on flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  14. Developing stereo image based robot control system

    Energy Technology Data Exchange (ETDEWEB)

    Suprijadi; Pambudi, I. R.; Woran, M.; Naa, C. F.; Srigutomo, W. [Department of Physics, FMIPA, Institut Teknologi Bandung, Jl. Ganesha No. 10, Bandung 40132, Indonesia; supri@fi.itb.ac.id (Indonesia)]

    2015-04-16

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have grown rapidly with increasing hardware and microprocessor performance. Many fields of science and technology use these methods, especially medicine and instrumentation. New stereovision techniques that produce a 3-dimensional image or movie are very interesting, but there are not many applications in control systems. A stereo image has pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The results show that the robot moves automatically based on the stereovision captures.

  15. SAIL--stereo-array isotope labeling.

    Science.gov (United States)

    Kainosho, Masatsune; Güntert, Peter

    2009-11-01

    Optimal stereospecific and regiospecific labeling of proteins with stable isotopes enhances the nuclear magnetic resonance (NMR) method for the determination of the three-dimensional protein structures in solution. Stereo-array isotope labeling (SAIL) offers sharpened lines, spectral simplification without loss of information and the ability to rapidly collect and automatically evaluate the structural restraints required to solve a high-quality solution structure for proteins up to twice as large as before. This review gives an overview of stable isotope labeling methods for NMR spectroscopy with proteins and provides an in-depth treatment of the SAIL technology.

  16. Binocular neurons in parastriate cortex: interocular 'matching' of receptive field properties, eye dominance and strength of silent suppression.

    Directory of Open Access Journals (Sweden)

    Phillip A Romo

    Full Text Available Spike-responses of single binocular neurons were recorded from a distinct part of primary visual cortex, the parastriate cortex (cytoarchitectonic area 18), of anaesthetized and immobilized domestic cats. Functional identification of neurons was based on the ratios of the phase-variant component (F1) to the mean firing rate (F0) of their spike-responses to optimized (orientation, direction, spatial and temporal frequencies and size) sine-wave-luminance-modulated drifting grating patches presented separately via each eye. In over 95% of neurons, the interocular differences in the phase-sensitivities (differences in F1/F0 spike-response ratios) were small (≤ 0.3) and in over 80% of neurons, the interocular differences in preferred orientations were ≤ 10°. The interocular correlations of the direction selectivity indices and optimal spatial frequencies, like those of the phase sensitivities and optimal orientations, were also strong (coefficients of correlation r ≥ 0.7005). By contrast, the interocular correlations of the optimal temporal frequencies, the diameters of summation areas of the excitatory responses and suppression indices were weak (coefficients of correlation r ≤ 0.4585). In cells with high eye dominance indices (HEDI cells), the mean magnitudes of suppressions evoked by stimulation of silent, extra-classical receptive fields via the non-dominant eyes were significantly greater than those when the stimuli were presented via the dominant eyes. We argue that the well documented 'eye-origin specific' segregation of the lateral geniculate inputs underpinning distinct eye dominance columns in primary visual cortices of mammals with frontally positioned eyes (distinct eye dominance columns), combined with significant interocular differences in the strength of silent suppressive fields, putatively contributes to binocular stereoscopic vision.
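
    As a rough illustration of the response measure used here (a sketch with synthetic numbers, not the study's recording pipeline), the F1/F0 ratio can be computed from a cycle-averaged peristimulus time histogram as the Fourier amplitude at the stimulus temporal frequency divided by the mean firing rate:

        # F0 is the mean rate; F1 is the one-sided amplitude of the response component at
        # the drifting grating's temporal frequency.  A ratio above ~1 indicates a strongly
        # phase-sensitive (simple-like) response.
        import numpy as np

        def f1_f0_ratio(psth, bin_s, stim_freq_hz):
            n = len(psth)
            t = np.arange(n) * bin_s
            f0 = psth.mean()
            f1 = 2 * np.abs(np.sum(psth * np.exp(-2j * np.pi * stim_freq_hz * t))) / n
            return f1 / f0 if f0 > 0 else np.nan

        bin_s, freq = 0.01, 2.0                        # 10 ms bins, 2 Hz drifting grating
        t = np.arange(0, 1.0, bin_s)
        psth = 20 + 15 * np.sin(2 * np.pi * freq * t)  # synthetic, strongly modulated response
        print(f1_f0_ratio(psth, bin_s, freq))          # F1/F0 = 15/20 = 0.75 here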

  17. Binocular Fusion and Invariant Category Learning due to Predictive Remapping during Scanning of a Depthful Scene with Eye Movements

    Directory of Open Access Journals (Sweden)

    Stephen Grossberg

    2015-01-01

    Full Text Available How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object’s surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.

  18. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

    Science.gov (United States)

    Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash

    2015-01-01

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations. PMID:25642198

  19. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements.

    Science.gov (United States)

    Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash

    2014-01-01

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.

  20. New advances in amblyopia therapy I: binocular therapies and pharmacologic augmentation.

    Science.gov (United States)

    Kraus, Courtney L; Culican, Susan M

    2018-05-18

    Amblyopia therapy options have traditionally been limited to penalisation of the non-amblyopic eye with either patching or pharmaceutical penalisation. Solid evidence, mostly from the Pediatric Eye Disease Investigator Group, has validated both number of hours a day of patching and days per week of atropine use. The use of glasses alone has also been established as a good first-line therapy for both anisometropic and strabismic amblyopia. Unfortunately, visual acuity equalisation or even improvement is not always attainable with these methods. Additionally, non-compliance with prescribed therapies contributes to treatment failures, with data supporting difficulty adhering to full treatment sessions. Interest in alternative therapies for amblyopia treatment has long been a topic of interest among researchers and clinicians alike. Incorporating new technology with an understanding of the biological basis of amblyopia has led to enthusiasm for binocular treatment of amblyopia. Early work on perceptual learning as well as more recent enthusiasm for iPad-based dichoptic training have each generated interesting and promising data for vision improvement in amblyopes. Use of pharmaceutical augmentation of traditional therapies has also been investigated. Several different drugs with unique mechanisms of action are thought to be able to neurosensitise the brain and enhance responsiveness to amblyopia therapy. No new treatment has emerged from currently available evidence as superior to the traditional therapies in common practice today. But ongoing investigation into the use of both new technology and the understanding of the neural basis of amblyopia promises alternate or perhaps better cures in the future. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  1. Visions and visioning in foresight activities

    DEFF Research Database (Denmark)

    Jørgensen, Michael Søgaard; Grosu, Dan

    2007-01-01

    The paper discusses the roles of visioning processes and visions in foresight activities and in societal discourses and changes parallel to or following foresight activities. The overall topic can be characterised as the dynamics and mechanisms that make visions and visioning processes work...... or not work. The theoretical part of the paper presents an actor-network theory approach to the analyses of visions and visioning processes, where the shaping of the visions and the visioning and what has made them work or not work is analysed. The empirical part is based on analyses of the roles of visions...... and visioning processes in a number of foresight processes from different societal contexts. The analyses have been carried out as part of the work in the COST A22 network on foresight. A vision is here understood as a description of a desirable or preferable future, compared to a scenario which is understood...

  2. Pleiades Visions

    Science.gov (United States)

    Whitehouse, M.

    2016-01-01

    Pleiades Visions (2012) is my new musical composition for organ that takes inspiration from traditional lore and music associated with the Pleiades (Seven Sisters) star cluster from Australian Aboriginal, Native American, and Native Hawaiian cultures. It is based on my doctoral dissertation research incorporating techniques from the fields of ethnomusicology and cultural astronomy; this research likely represents a new area of inquiry for both fields. This large-scale work employs the organ's vast sonic resources to evoke the majesty of the night sky and the expansive landscapes of the homelands of the above-mentioned peoples. Other important themes in Pleiades Visions are those of place, origins, cosmology, and the creation of the world.

  3. Optoelectronic vision

    Science.gov (United States)

    Ren, Chunye; Parel, Jean-Marie A.

    1993-06-01

    Scientists have searched every discipline to find effective methods of treating blindness, such as using aids based on conversion of the optical image to auditory or tactile stimuli. However, the limited performance of such equipment and difficulties in training patients have seriously hampered practical applications. Great inspiration was provided by the discoveries of Foerster (1929) and Krause & Schum (1931), who found that electrical stimulation of the visual cortex evokes the perception of a small spot of light called a 'phosphene' in both blind and sighted subjects. According to this principle, it is possible to elicit artificial vision by using stimulation with electrodes placed on the visual neural system, thereby developing a prosthesis for the blind that might be of value in reading and mobility. In fact, a number of investigators have already exploited this phenomenon to produce a functional visual prosthesis, bringing about great advances in this area.

  4. Identification and location of catenary insulator in complex background based on machine vision

    Science.gov (United States)

    Yao, Xiaotong; Pan, Yingli; Liu, Li; Cheng, Xiao

    2018-04-01

    Precise localization of the insulator is an important prerequisite for fault detection. Because current localization algorithms for insulators in catenary inspection images are not sufficiently accurate, a target recognition and localization method based on binocular vision combined with SURF features is proposed. First, because the insulator sits in a complex environment, SURF features are used to achieve coarse positioning of the target; then the binocular vision principle is used to calculate the 3D coordinates of the coarsely located object, achieving target recognition and fine localization; finally, the 3D coordinate of the object's centre of mass is preserved and transferred to the inspection robot to control its detection position. Experimental results demonstrate that the proposed method has better recognition efficiency and accuracy, can successfully identify the target, and has definite application value.
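
    A minimal sketch of the two-stage pipeline this abstract describes (feature-based coarse localization followed by binocular triangulation of the target centre). It is not the authors' implementation: ORB stands in for SURF (which is absent from default OpenCV builds), and the template image, projection matrices `P1`/`P2`, and thresholds are assumptions.

```python
# Sketch: coarse feature-based localization + stereo triangulation of the target centre.
import cv2
import numpy as np

def coarse_locate(template, scene, min_matches=10):
    """Feature-based coarse localization of the insulator template in one view."""
    orb = cv2.ORB_create(2000)                       # ORB used here in place of SURF
    k1, d1 = orb.detectAndCompute(template, None)
    k2, d2 = orb.detectAndCompute(scene, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None
    pts = np.float32([k2[m.trainIdx].pt for m in matches[:min_matches * 5]])
    return pts.mean(axis=0)                          # rough image-plane centre of the target

def triangulate_centre(pt_left, pt_right, P1, P2):
    """Binocular fine localization: 3D centre from the two coarse image points."""
    X = cv2.triangulatePoints(P1, P2, pt_left.reshape(2, 1), pt_right.reshape(2, 1))
    return (X[:3] / X[3]).ravel()                    # homogeneous -> Euclidean coordinates
```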

  5. Lambda Vision

    Science.gov (United States)

    Czajkowski, Michael

    2014-06-01

    There is an explosion in the quantity and quality of IMINT data being captured in Intelligence Surveillance and Reconnaissance (ISR) today. While automated exploitation techniques involving computer vision are arriving, only a few architectures can manage both the storage and bandwidth of large volumes of IMINT data and also present results to analysts quickly. Lockheed Martin Advanced Technology Laboratories (ATL) has been actively researching the application of Big Data cloud computing techniques to computer vision applications. This paper presents the results of this work in adopting a Lambda Architecture to process and disseminate IMINT data using computer vision algorithms. The approach embodies an end-to-end solution by processing IMINT data from sensors to serving information products quickly to analysts, independent of the size of the data. The solution lies in dividing up the architecture into a speed layer for low-latency processing and a batch layer for higher quality answers at the expense of time, but in a robust and fault-tolerant way. This approach was evaluated using a large corpus of IMINT data collected by a C-130 Shadow Harvest sensor over Afghanistan from 2010 through 2012. The evaluation data corpus included full motion video from both narrow and wide-area fields of view. The evaluation was done on a scaled-out cloud infrastructure that is similar in composition to those found in the Intelligence Community. The paper shows experimental results to prove the scalability of the architecture and precision of its results using a computer vision algorithm designed to identify man-made objects in sparse data terrain.
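
    A tiny illustrative sketch of the lambda-architecture split mentioned above: a batch layer periodically recomputes high-quality views while a speed layer answers from recent data, and queries merge the two. All names and data here are invented for illustration, not taken from the paper.

```python
# Minimal lambda-style serving: prefer the slower, higher-quality batch result,
# fall back to the low-latency speed-layer result otherwise.
def serve_query(region, batch_view, speed_view):
    return batch_view.get(region, speed_view.get(region, []))

batch_view = {"sector_7": ["vehicle", "building"]}                # recomputed hourly
speed_view = {"sector_7": ["vehicle"], "sector_9": ["vehicle?"]}  # seconds old
print(serve_query("sector_9", batch_view, speed_view))            # -> ['vehicle?']
```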

  6. LED-based Photometric Stereo: Modeling, Calibration and Numerical Solutions

    DEFF Research Database (Denmark)

    Quéau, Yvain; Durix, Bastien; Wu, Tao

    2018-01-01

    We conduct a thorough study of photometric stereo under nearby point light source illumination, from modeling to numerical solution, through calibration. In the classical formulation of photometric stereo, the luminous fluxes are assumed to be directional, which is very difficult to achieve in pr...

  7. Acquisition of stereo panoramas for display in VR environments

    KAUST Repository

    Ainsworth, Richard A.; Sandin, Daniel J.; Schulze, Jurgen P.; Prudhomme, Andrew; DeFanti, Thomas A.; Srinivasan, Madhusudhanan

    2011-01-01

    photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented

  8. Lossless Compression of Stereo Disparity Maps for 3D

    DEFF Research Database (Denmark)

    Zamarin, Marco; Forchhammer, Søren

    2012-01-01

    Efficient compression of disparity data is important for accurate view synthesis purposes in multi-view communication systems based on the “texture plus depth” format, including the stereo case. In this paper a novel technique for lossless compression of stereo disparity images is presented...

  9. Vector disparity sensor with vergence control for active vision systems.

    Science.gov (United States)

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after the image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system.
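
    As a software stand-in for the idea of vector (2-D) disparity in a verging camera pair, the sketch below estimates a dense 2-D displacement field between the left and right images with Farnebäck optical flow. This is only an illustration of the concept, not the paper's gradient- or phase-based FPGA engines; parameter values are assumptions.

```python
# Dense 2-D "vector disparity" stand-in: with verging cameras the disparity is a
# vector field, approximated here by dense optical flow between the two views.
import cv2

def vector_disparity(left_gray, right_gray):
    flow = cv2.calcOpticalFlowFarneback(
        left_gray, right_gray, None,
        pyr_scale=0.5, levels=4, winsize=21,
        iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
    return flow  # H x W x 2 array: horizontal and vertical disparity components
```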

  10. Virtual Vision

    Science.gov (United States)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  11. Opportunity's Surroundings on Sol 1818 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11846 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11846 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,818th Martian day, or sol, of Opportunity's surface mission (March 5, 2009). South is at the center; north at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 80.3 meters (263 feet) southward earlier on that sol. Tracks from the drive recede northward in this view. The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  12. Opportunity's Surroundings After Sol 1820 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11841 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11841 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,820th to 1,822nd Martian days, or sols, of Opportunity's surface mission (March 7 to 9, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 20.6 meters toward the northwest on Sol 1820 before beginning to take the frames in this view. Tracks from that drive recede southwestward. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and small exposures of lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  13. Explaining Polarization Reversals in STEREO Wave Data

    Science.gov (United States)

    Breneman, A.; Cattell, C.; Wygant, J.; Kersten, K.; Wilson, L, B., III; Dai, L.; Colpitts, C.; Kellogg, P. J.; Goetz, K.; Paradise, A.

    2012-01-01

    Recently, Breneman et al. reported observations of large-amplitude lightning and transmitter whistler mode waves from two STEREO passes through the inner radiation belt (low L shells), including polarization reversals that are explained in this paper. We show, with a combination of observations and simulated wave superposition, that these polarization reversals are due to the beating of an incident electromagnetic whistler mode wave at 21.4 kHz and linearly polarized, symmetric lower hybrid sidebands Doppler-shifted from the incident wave by +/-200 Hz. The existence of the lower hybrid waves is consistent with the parametric decay mechanism of Lee and Kuo whereby an incident whistler mode wave decays into symmetric, short wavelength lower hybrid waves and a purely growing (zero-frequency) mode. Like the lower hybrid waves, the purely growing mode is Doppler-shifted by 200 Hz as observed on STEREO. This decay mechanism in the upper ionosphere has been previously reported at equatorial latitudes and is thought to have a direct connection with explosive spread F enhancements. As such it may represent another dissipation mechanism of VLF wave energy in the ionosphere and may help to explain a deficit of observed lightning and transmitter energy in the inner radiation belts as reported by Starks et al.
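
    For intuition about the beating mechanism invoked above, the standard two-sideband superposition identity shows how a carrier plus symmetric sidebands produces an envelope modulated at the sideband offset. This is only the textbook identity, not the paper's full wave-superposition simulation; the relative sideband amplitude a is an assumption.

```latex
% Carrier at f_0 = 21.4 kHz plus symmetric sidebands at f_0 \pm \Delta f, \Delta f = 200 Hz.
\begin{align*}
E(t) &= \cos(2\pi f_0 t)
      + a\cos\bigl(2\pi (f_0+\Delta f)\,t\bigr)
      + a\cos\bigl(2\pi (f_0-\Delta f)\,t\bigr) \\
     &= \bigl[\,1 + 2a\cos(2\pi\,\Delta f\, t)\,\bigr]\cos(2\pi f_0 t),
\end{align*}
% i.e. the summed field is amplitude-modulated (beats) at \Delta f = 200 Hz.
```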

  14. Stereo matching using epipolar distance transform.

    Science.gov (United States)

    Yang, Qingxiong; Ahuja, Narendra

    2012-10-01

    In this paper, we propose a simple but effective image transform, called the epipolar distance transform, for matching low-texture regions. It converts image intensity values to a relative location inside a planar segment along the epipolar line, such that pixels in low-texture regions become distinguishable. We theoretically prove that the transform is affine invariant, so the transformed images can be used directly for stereo matching. Any existing stereo algorithm can be applied to the transformed images to improve reconstruction accuracy for low-texture regions. Results on real indoor and outdoor images demonstrate the effectiveness of the proposed transform for matching low-texture regions, and for keypoint detection and description in low-texture scenes. Our experimental results on Middlebury images also demonstrate the robustness of the transform for highly textured scenes. The proposed transform has a further advantage: low computational complexity. Tested on a MacBook Air laptop with a 1.8 GHz Core i7 processor, it runs at about 9 frames per second on VGA-sized images.
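
    A simplified interpretation of the idea described above, not the authors' exact transform: on rectified images the epipolar lines are scanlines, so each scanline is split at strong intensity edges and every pixel is replaced by its relative position within its segment, which makes flat, low-texture runs distinguishable. The edge threshold is an assumed parameter.

```python
# Scanline "relative position within segment" transform (illustrative sketch).
import numpy as np

def epipolar_position_transform(gray, edge_thresh=10):
    gray = gray.astype(np.float32)
    out = np.zeros_like(gray)
    for y in range(gray.shape[0]):
        row = gray[y]
        edges = np.flatnonzero(np.abs(np.diff(row)) > edge_thresh)
        bounds = np.concatenate(([0], edges + 1, [row.size]))
        for s, e in zip(bounds[:-1], bounds[1:]):
            if e > s:
                out[y, s:e] = np.linspace(0.0, 1.0, e - s)  # position inside the segment
    return out
```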

  15. Image matching for digital close-range stereo photogrammetry based on constraints of Delaunay triangulated network and epipolar-line

    Science.gov (United States)

    Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.

    2006-10-01

    In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step in realizing automatic surveying and recognition. Traditional matching methods encounter problems in digital close-range stereo photogrammetry because changes in gray-scale or texture are not obvious in close-range stereo images. Their main shortcoming is that the geometric information of matching points is not fully used, which leads to wrong matches in regions with poor texture. To make full use of both geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper, taking into account the characteristics of digital close-range photogrammetry. Compared with traditional matching methods, the new algorithm introduces three improvements. First, a shape factor, fuzzy mathematics and gray-scale projection are introduced into the design of a synthetic matching measure. Second, the topological connectivity of matching points in a Delaunay triangulated network and the epipolar line are used to decide the matching order and to narrow the search scope for the conjugate point of each matching point. Last, the theory of parameter adjustment with constraints is introduced into least-squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm was applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental results show that the algorithm achieves higher matching speed and accuracy than a pyramid image matching algorithm based on gray-scale correlation.
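
    An illustrative sketch, not the paper's full algorithm, of how a Delaunay triangulation of already-matched seed points can narrow the epipolar search range for a new point: its disparity is searched only between the extreme disparities of the enclosing triangle's vertices, and the conjugate point is then found by NCC along the same scanline (assuming rectified images). Function names, the margin, and the window size are assumptions.

```python
# Delaunay- and epipolar-constrained search-range narrowing + NCC matching (sketch).
import numpy as np
from scipy.spatial import Delaunay

def disparity_bounds(seed_pts, seed_disps, query_pt, margin=2):
    """seed_pts: (N,2) matched left-image points; seed_disps: (N,) their disparities."""
    tri = Delaunay(seed_pts)
    idx = tri.find_simplex(np.asarray(query_pt, dtype=float))
    if idx < 0:                               # query lies outside the triangulation
        return seed_disps.min(), seed_disps.max()
    d = seed_disps[tri.simplices[idx]]        # disparities at the triangle's vertices
    return d.min() - margin, d.max() + margin

def ncc_match_along_scanline(left, right, x, y, d_min, d_max, win=5):
    """Search the conjugate point along the same row (epipolar line) by NCC."""
    h = win // 2
    patch = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.float32)
    if patch.shape != (win, win):
        return None, None
    patch = (patch - patch.mean()) / (patch.std() + 1e-6)
    best_d, best_score = None, -np.inf
    for d in range(int(d_min), int(d_max) + 1):
        xr = x - d
        cand = right[y - h:y + h + 1, xr - h:xr + h + 1].astype(np.float32)
        if cand.shape != (win, win):
            continue
        cand = (cand - cand.mean()) / (cand.std() + 1e-6)
        score = float((patch * cand).mean())
        if score > best_score:
            best_d, best_score = d, score
    return best_d, best_score
```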

  16. Rapid, high-accuracy detection of strabismus and amblyopia using the pediatric vision scanner

    OpenAIRE

    Loudon, Sjoukje; Rook, Caitlin; Nassif, Deborah; Piskun, Nadya; Hunter, David

    2011-01-01

    Purpose. The Pediatric Vision Scanner (PVS) detects strabismus by identifying ocular fixation in both eyes simultaneously. This study was undertaken to assess the ability of the PVS to identify patients with amblyopia or strabismus, particularly anisometropic amblyopia with no measurable strabismus. Methods. The PVS test, administered from 40 cm and requiring 2.5 seconds of attention, generated a binocularity score (BIN, 0%-100%). We tested 154 patients and 48 controls between the...

  17. Tunnel Vision Prismatic Field Expansion: Challenges and Requirements.

    Science.gov (United States)

    Apfelbaum, Henry; Peli, Eli

    2015-12-01

    No prismatic solution for peripheral field loss (PFL) has gained widespread acceptance. Field extended by prisms has a corresponding optical scotoma at the prism apices. True expansion can be achieved when each eye is given a different view (through visual confusion). We analyze the effects of apical scotomas and binocular visual confusion in different designs to identify constraints on any solution that is likely to meet acceptance. Calculated perimetry diagrams were compared to perimetry with PFL patients wearing InWave channel prisms and Trifield spectacles. Percept diagrams illustrate the binocular visual confusion. Channel prisms provide no benefit at primary gaze. Inconsequential extension was provided by InWave prisms, although accessible with moderate gaze shifts. Higher-power prisms provide greater extension, with greater paracentral scotoma loss, but require uncomfortable gaze shifts. Head turns, not eye scans, are needed to see regions lost to the apical scotomas. Trifield prisms provide field expansion at all gaze positions, but acceptance was limited by disturbing effects of central binocular visual confusion. Field expansion when at primary gaze (where most time is spent) is needed while still providing unobstructed central vision. Paracentral multiplexing prisms we are developing that superimpose shifted and see-through views may accomplish that. Use of the analyses and diagramming techniques presented here will be of value when considering prismatic aids for PFL, and could have prevented many unsuccessful designs and the improbable reports we cited from the literature. New designs must likely address the challenges identified here.

  18. Grasping Unknown Objects in an Early Cognitive Vision System

    DEFF Research Database (Denmark)

    Popovic, Mila

    2011-01-01

    Grasping of unknown objects presents an important and challenging part of robot manipulation. The growing area of service robotics depends upon the ability of robots to autonomously grasp and manipulate a wide range of objects in everyday environments. Simple, non task-specific grasps of unknown ...... and comparing vision-based grasping methods, and the creation of algorithms for bootstrapping a process of acquiring world understanding for artificial cognitive agents....... presents a system for robotic grasping of unknown objects using stereo vision. Grasps are defined based on contour and surface information provided by the Early Cognitive Vision System, which organizes visual information into a biologically motivated hierarchical representation. The contributions...... of the thesis are: the extension of the Early Cognitive Vision representation with a new type of feature hierarchy in the texture domain, the definition and evaluation of contour based grasping methods, the definition and evaluation of surface based grasping methods, the definition of a benchmark for testing...

  19. Tratamiento binocular de la ambliopía basado en la realidad virtual

    Directory of Open Access Journals (Sweden)

    Yanet Cristina Díaz Núñez

    Full Text Available Although the predominant treatments for amblyopia are monocular, they have low acceptance and limited effectiveness in restoring binocular combination. Considerable evidence supports the idea that amblyopia is essentially a binocular problem and that suppression plays a key role. This review presents two strategies for binocular treatment of amblyopia based on virtual reality: the first with the primary objective of improving visual acuity, and the second aimed at improving binocular function by reducing suppression. This binocular approach exposes the patient to artificial viewing conditions with dichoptic stimuli in related images. Clinical studies in both children and adults report improvements in visual acuity and stereopsis in far less time than that required by occlusion. The clinical results suggest that a binocular approach combining both strategies can be used as a complement to classical treatments and as an alternative for adults and children with a history of failed or rejected treatments.

  20. Psilocybin links binocular rivalry switch rate to attention and subjective arousal levels in humans.

    Science.gov (United States)

    Carter, Olivia L; Hasler, Felix; Pettigrew, John D; Wallis, Guy M; Liu, Guang B; Vollenweider, Franz X

    2007-12-01

    Binocular rivalry occurs when different images are simultaneously presented to each eye. During continual viewing of this stimulus, the observer will experience repeated switches between visual awareness of the two images. Previous studies have suggested that a slow rate of perceptual switching may be associated with clinical and drug-induced psychosis. The objective of the study was to explore the proposed relationship between binocular rivalry switch rate and subjective changes in psychological state associated with 5-HT2A receptor activation. This study used psilocybin, the hallucinogen found naturally in Psilocybe mushrooms that had previously been found to induce psychosis-like symptoms via the 5-HT2A receptor. The effects of psilocybin (215 microg/kg) were considered alone and after pretreatment with the selective 5-HT2A antagonist ketanserin (50 mg) in ten healthy human subjects. Psilocybin significantly reduced the rate of binocular rivalry switching and increased the proportion of transitional/mixed percept experience. Pretreatment with ketanserin blocked the majority of psilocybin's "positive" psychosis-like hallucinogenic symptoms. However, ketanserin had no influence on either the psilocybin-induced slowing of binocular rivalry or the drug's "negative-type symptoms" associated with reduced arousal and vigilance. Together, these findings link changes in binocular rivalry switching rate to subjective levels of arousal and attention. In addition, it suggests that psilocybin's effect on binocular rivalry is unlikely to be mediated by the 5-HT2A receptor.

  1. An overview of instrumentation for the Large Binocular Telescope

    Science.gov (United States)

    Wagner, R. Mark

    2012-09-01

    An overview of instrumentation for the Large Binocular Telescope (LBT) is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' x 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the left and right direct F/15 Gregorian foci incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 2000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCI), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at the left and right front bent F/15 Gregorian foci and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multiobject spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0'.5 × 0'.5) imaging and long-slit spectroscopy. Strategic instruments under development that can utilize the full 23-m baseline of the LBT include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). LBTI is currently undergoing commissioning on the LBT and utilizing the installed adaptive secondary mirrors in both single- sided and two-sided beam combination modes. In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. Over the past four years the LBC pair, LUCI1, and MODS1 have been commissioned and are now scheduled for routine partner science observations. The delivery of both LUCI2 and MODS2 is anticipated before the end of 2012. The

  2. Spirit Near 'Stapledon' on Sol 1802 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11781 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11781 NASA Mars Exploration Rover Spirit used its navigation camera for the images assembled into this stereo, full-circle view of the rover's surroundings during the 1,802nd Martian day, or sol (January 26, 2009), of Spirit's mission on the surface of Mars. South is at the center; north is at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Spirit had driven down off the low plateau called 'Home Plate' on Sol 1782 (January 6, 2009) after spending 12 months on a north-facing slope on the northern edge of Home Plate. The position on the slope (at about the 9-o'clock position in this view) tilted Spirit's solar panels toward the sun, enabling the rover to generate enough electricity to survive its third Martian winter. Tracks at about the 11-o'clock position of this panorama can be seen leading back to that 'Winter Haven 3' site from the Sol 1802 position about 10 meters (33 feet) away. For scale, the distance between the parallel wheel tracks is about one meter (40 inches). Where the receding tracks bend to the left, a circular pattern resulted from Spirit turning in place at a soil target informally named 'Stapledon' after William Olaf Stapledon, a British philosopher and science-fiction author who lived from 1886 to 1950. Scientists on the rover team suspected that the soil in that area might have a high concentration of silica, resembling a high-silica soil patch discovered east of Home Plate in 2007. Bright material visible in the track furthest to the right was examined with Spirit's alpha particle X-ray spectrometer and found, indeed, to be rich in silica. The team laid plans to drive Spirit from this Sol 1802 location back up

  3. Implementation of an ISIS Compatible Stereo Processing Chain for 3D Stereo Reconstruction

    Science.gov (United States)

    Tasdelen, E.; Unbekannt, H.; Willner, K.; Oberst, J.

    2012-09-01

    The department for Planetary Geodesy at TU Berlin is developing routines for photogrammetric processing of planetary image data to derive 3D representations of planetary surfaces. The ISIS software, developed by USGS, Flagstaff, is readily available, open source, and very well documented. Hence, ISIS [1] was chosen as a prime processing platform and tool kit. However, ISIS does not provide a full photogrammetric stereo processing chain. Several components such as image matching, bundle block adjustment (until recently) or digital terrain model (DTM) interpolation from 3D object points are missing. Our group aims to complete this photogrammetric stereo processing chain by implementing the missing components, taking advantage of already existing ISIS classes and functionality. With this abstract we report on the development of new image matching software that is optimized for both orbital and close-range planetary images and is compatible with ISIS formats and routines, and of an interpolation tool developed to create DTMs from large 3D point clouds.

  4. Eyesight quality and Computer Vision Syndrome.

    Science.gov (United States)

    Bogdănici, Camelia Margareta; Săndulache, Diana Elena; Nechita, Corina Andreea

    2017-01-01

    The aim of the study was to analyze the effects that gadgets have on eyesight quality. A prospective observational study was conducted from January to July 2016 on 60 people who were divided into two groups: Group 1 - 30 middle school pupils with a mean age of 11.9 ± 1.86 years, and Group 2 - 30 patients evaluated in the Ophthalmology Clinic, "Sf. Spiridon" Hospital, Iași, with a mean age of 21.36 ± 7.16 years. The clinical parameters observed were the following: visual acuity (VA), objective refraction, binocular vision (BV), fusional amplitude (FA), and Schirmer's test. A questionnaire was also distributed, containing 8 questions that highlighted the gadgets' impact on eyesight. The use of different gadgets, such as computers, laptops, mobile phones and other displays, has become part of everyday life, and people experience a variety of ocular symptoms or vision problems related to them. Computer Vision Syndrome (CVS) represents a group of visual and extraocular symptoms associated with sustained use of visual display terminals. Headache, blurred vision, and ocular congestion are the most frequent manifestations caused by long-term use of gadgets. Mobile phones and laptops are the most frequently used gadgets. People who use gadgets for long periods make a sustained accommodative effort. Small refractive errors (especially a myopic shift) have been objectively recorded by various studies on near work. Dry eye syndrome could also be identified, and an improvement in visual comfort could be observed after the instillation of artificial tear drops. Computer Vision Syndrome is still under-diagnosed, and people should be made aware of the adverse effects that prolonged use of gadgets has on eyesight.

  5. STRESS - STEREO TRansiting Exoplanet and Stellar Survey

    Science.gov (United States)

    Sangaralingam, Vinothini; Stevens, Ian R.; Spreckley, Steve; Debosscher, Jonas

    2010-02-01

    The Heliospheric Imager (HI) instruments on board the two STEREO (Solar TErrestrial RElations Observatory) spacecraft provide an excellent opportunity for space-based stellar photometry. The HI instruments provide wide-area coverage (20° × 20° for the two HI-1 instruments and 70° × 70° for the two HI-2 instruments) and long continuous periods of observation (20 days and 70 days, respectively). Using HI-1A, which has a passband of 6500 Å to 7500 Å and a cadence of 40 minutes, we have gathered photometric information for more than a million stars brighter than 12th magnitude over a period of two years. Here we present some early results from this study on a range of variable stars and the future prospects for the data.

  6. Robust photometric stereo using structural light sources

    Science.gov (United States)

    Han, Tian-Qi; Cheng, Yue; Shen, Hui-Liang; Du, Xin

    2014-05-01

    We propose a robust photometric stereo method by using structural arrangement of light sources. In the arrangement, light sources are positioned on a planar grid and form a set of collinear combinations. The shadow pixels are detected by adaptive thresholding. The specular highlight and diffuse pixels are distinguished according to their intensity deviations of the collinear combinations, thanks to the special arrangement of light sources. The highlight detection problem is cast as a pattern classification problem and is solved using support vector machine classifiers. Considering the possible misclassification of highlight pixels, the ℓ1 regularization is further employed in normal map estimation. Experimental results on both synthetic and real-world scenes verify that the proposed method can robustly recover the surface normal maps in the case of heavy specular reflection and outperforms the state-of-the-art techniques.
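
    For reference, the baseline this work builds on is classic Lambertian photometric stereo solved by least squares, sketched below. The paper's contributions (collinear structural source combinations, SVM-based highlight classification, and ℓ1-regularized estimation) are not reproduced here; array shapes and names are assumptions.

```python
# Baseline Lambertian photometric stereo by least squares (illustrative sketch).
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (K, H, W) grayscale stack; light_dirs: (K, 3) unit source directions."""
    K, H, W = images.shape
    I = images.reshape(K, -1).astype(np.float32)   # K x (H*W) intensity matrix
    L = np.asarray(light_dirs, dtype=np.float32)   # K x 3 lighting matrix
    # Lambertian model: I = L @ (albedo * normal); solve per pixel in least squares.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)      # 3 x (H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / (albedo + 1e-8)
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```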

  7. Binocular versus standard occlusion or blurring treatment for unilateral amblyopia in children aged three to eight years.

    Science.gov (United States)

    Tailor, Vijay; Bossi, Manuela; Bunce, Catey; Greenwood, John A; Dahlmann-Noor, Annegret

    2015-08-11

    Current treatments for amblyopia in children, occlusion and pharmacological blurring, have had limited success, with less than two-thirds of children achieving good visual acuity of at least 0.20 logMAR in the amblyopic eye, limited improvement of stereopsis, and poor compliance. A new treatment approach, based on the dichoptic presentation of movies or computer games (images presented separately to each eye), may yield better results, as it aims to balance the input of visual information from each eye to the brain. Compliance may also improve with these more child-friendly treatment procedures. To determine whether binocular treatments in children aged three to eight years with unilateral amblyopia result in better visual outcomes than conventional occlusion or pharmacological blurring treatment. We searched the Cochrane Eyes and Vision Group Trials Register (last date of searches: 14 April 2015), the Cochrane Central Register of Controlled Trials (CENTRAL; 2015, Issue 3), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to April 2015), EMBASE (January 1980 to April 2015), the ISRCTN registry (www.isrctn.com/editAdvancedSearch), ClinicalTrials.gov (www.clinicaltrials.gov), and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. Two review authors independently screened the results of the search in order to identify studies that met the inclusion criteria of the review: randomised controlled trials (RCTs) that enrolled participants between the ages of three and eight years old with unilateral amblyopia, defined as best-corrected visual acuity (BCVA) worse than 0.200 logMAR in the amblyopic eye, and BCVA 0.200 logMAR or better in the fellow eye, in the presence of an amblyogenic risk factor such as anisometropia, strabismus, or both. Prior

  8. A buyer's and user's guide to astronomical telescopes and binoculars

    CERN Document Server

    Mullaney, James

    2014-01-01

    Amateur astronomers of all skill levels are always contemplating their next telescope, and this book points the way to the most suitable instruments. Similarly, those who are buying their first telescopes – and these days not necessarily a low-cost one – will be able to compare and contrast different types and manufacturers. This revised new guide provides an extensive overview of binoculars and telescopes. It includes detailed up-to-date information on sources, selection and use of virtually every major type, brand, and model on today’s market, a truly invaluable treasure-trove of information and helpful advice for all amateur astronomers. Originally written in 2006, much of the first edition is inevitably now out of date, as equipment advances and manufacturers come and go. This second edition not only updates all the existing sections but adds two new ones: Astro-imaging and Professional-Amateur collaboration. Thanks to the rapid and amazing developments that have been made in digital cameras it is...

  9. Early laser operations at the Large Binocular Telescope Observatory

    Science.gov (United States)

    Rahmer, Gustavo; Lefebvre, Michael; Christou, Julian; Raab, Walfried; Rabien, Sebastian; Ziegleder, Julian; Borelli, José L.; Gässler, Wolfgang

    2014-08-01

    ARGOS is the GLAO (Ground-Layer Adaptive Optics) Rayleigh-based LGS (Laser Guide Star) facility for the Large Binocular Telescope Observatory (LBTO). It is dedicated for observations with LUCI1 and LUCI2, LBTO's pair of NIR imagers and multi-object spectrographs. The system projects three laser beams from the back of each of the two secondary mirror units, which create two constellations circumscribed on circles of 2 arcmin radius with 120 degree spacing. Each of the six Nd:YAG lasers provides a beam of green (532nm) pulses at a rate of 10kHz with a power of 14W to 18W. We achieved first on-sky propagation on the night of November 5, 2013, and commissioning of the full system will take place during 2014. We present the initial results of laser operations at the observatory, including safety procedures and the required coordination with external agencies (FAA, Space Command, and Military Airspace Manager). We also describe our operational procedures and report on our experiences with aircraft spotters. Future plans for safer and more efficient aircraft monitoring and detection are discussed.

  10. EXO-ZODI MODELING FOR THE LARGE BINOCULAR TELESCOPE INTERFEROMETER

    Energy Technology Data Exchange (ETDEWEB)

    Kennedy, Grant M.; Wyatt, Mark C.; Panić, Olja; Shannon, Andrew [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Bailey, Vanessa; Defrère, Denis; Hinz, Philip M.; Rieke, George H.; Skemer, Andrew J.; Su, Katherine Y. L. [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Bryden, Geoffrey; Mennesson, Bertrand; Morales, Farisa; Serabyn, Eugene [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Danchi, William C.; Roberge, Aki; Stapelfeldt, Karl R. [NASA Goddard Space Flight Center, Exoplanets and Stellar Astrophysics, Code 667, Greenbelt, MD 20771 (United States); Haniff, Chris [Cavendish Laboratory, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Lebreton, Jérémy [Infrared Processing and Analysis Center, MS 100-22, California Institute of Technology, 770 South Wilson Avenue, Pasadena, CA 91125 (United States); Millan-Gabet, Rafael [NASA Exoplanet Science Institute, California Institute of Technology, 770 South Wilson Avenue, Pasadena, CA 91125 (United States); and others

    2015-02-01

    Habitable zone dust levels are a key unknown that must be understood to ensure the success of future space missions to image Earth analogs around nearby stars. Current detection limits are several orders of magnitude above the level of the solar system's zodiacal cloud, so characterization of the brightness distribution of exo-zodi down to much fainter levels is needed. To this end, the Large Binocular Telescope Interferometer (LBTI) will detect thermal emission from habitable zone exo-zodi a few times brighter than solar system levels. Here we present a modeling framework for interpreting LBTI observations, which yields dust levels from detections and upper limits that are then converted into predictions and upper limits for the scattered light surface brightness. We apply this model to the HOSTS survey sample of nearby stars; assuming a null depth uncertainty of 10⁻⁴, the LBTI will be sensitive to dust a few times above the solar system level around Sun-like stars, and to even lower dust levels for more massive stars.

  11. Sterile neutrino search in the STEREO experiment

    Energy Technology Data Exchange (ETDEWEB)

    Buck, Christian; Lindner, Manfred; Roca, Christian [MPIK (Germany)

    2016-07-01

    In neutrino oscillations, a canonical understanding has been established during the last decades after the measurement of the mixing angles θ12, θ23 and θ13 via solar, atmospheric and, most recently, reactor neutrinos. However, the re-evaluation of the theoretical reactor neutrino flux has forced a re-analysis of most reactor neutrino measurements at short distances. This has led to an unexpected experimental deficit of neutrinos with respect to theory that needs to be accommodated, commonly known as the "reactor neutrino anomaly". This deficit can be interpreted as the existence of a light sterile neutrino state into which reactor neutrinos oscillate at very short distances. The STEREO experiment aims to find evidence of such oscillations. The ILL research reactor in Grenoble (France) operates at a power of 58 MW and provides a large flux of electron antineutrinos with energies of a few MeV. These neutrinos will be detected in a 2000 liter organic liquid scintillator detector doped with gadolinium and consisting of 6 cells stacked along the direction of the core. Given the proximity of the detector, neutrinos will only travel a few meters before they interact with the scintillator. The detector will be placed about 10 m from the reactor core, allowing STEREO to be sensitive to oscillations into the above-mentioned sterile neutrino state. The project presents a high potential for a discovery that would deeply impact the paradigms of neutrino oscillations and, in consequence, the current understanding of particle physics and cosmology.

  12. Low Vision FAQs

    Science.gov (United States)

    What is low vision? Low vision is a visual impairment, not correctable ... person's ability to perform everyday activities. What causes low vision? Low vision can result from a variety of ...

  13. Pediatric Low Vision

    Science.gov (United States)

    What is Low Vision? Partial vision loss that cannot be corrected causes ... and play. What are the signs of Low Vision? Some signs of low vision include difficulty recognizing ...

  14. Method for Stereo Mapping Based on ObjectARX and Pipeline Technology

    Science.gov (United States)

    Liu, F.; Chen, T.; Lin, Z.; Yang, Y.

    2012-07-01

    Stereo mapping is an important way to acquire 4D products. Based on developments in stereo mapping and the characteristics of ObjectARX and pipeline technology, a new stereo mapping scheme is proposed that enables interaction between AutoCAD and a digital photogrammetry system. An experiment using the MAP-AT software (Modern Aerial Photogrammetry Automatic Triangulation) was conducted to verify feasibility; the results show that the scheme is feasible and is of considerable value for integrating data acquisition and editing.

  15. Trifocal intraocular lenses: a comparison of the visual performance and quality of vision provided by two different lens designs

    Directory of Open Access Journals (Sweden)

    Gundersen KG

    2017-06-01

    Full Text Available Kjell G Gundersen,1 Rick Potvin2 1IFocus Øyeklinikk AS, Haugesund, Norway; 2Science in Vision, Akron, NY, USA Purpose: To compare two different diffractive trifocal intraocular lens (IOL) designs, evaluating longer-term refractive outcomes, visual acuity (VA) at various distances, low contrast VA and quality of vision. Patients and methods: Patients with binocularly implanted trifocal IOLs of two different designs (FineVision [FV] and Panoptix [PX]) were evaluated 6 months to 2 years after surgery. Best distance-corrected and uncorrected VA were tested at distance (4 m), intermediate (80 and 60 cm) and near (40 cm). A binocular defocus curve was collected with the subject's best distance correction in place. The preferred reading distance was determined along with the VA at that distance. Low contrast VA at distance was also measured. Quality of vision was measured with the National Eye Institute Visual Function Questionnaire near subset and the Quality of Vision questionnaire. Results: Thirty subjects in each group were successfully recruited. The binocular defocus curves differed only at vergences of −1.0 D (FV better, P=0.02), −1.5 and −2.00 D (PX better, P<0.01 for both). Best distance-corrected and uncorrected binocular vision were significantly better for the PX lens at 60 cm (P<0.01), with no significant differences at other distances. The preferred reading distance was between 42 and 43 cm for both lenses, with the VA at the preferred reading distance slightly better with the PX lens (P=0.04). There were no statistically significant differences by lens for low contrast VA (P=0.1) or for quality of vision measures (P>0.3). Conclusion: Both trifocal lenses provided excellent distance, intermediate and near vision, but several measures indicated that the PX lens provided better intermediate vision at 60 cm. This may be important to users of tablets and other handheld devices. Quality of vision appeared similar between the two lens designs.

  16. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Pablo Ramon Soria

    2017-01-01

    Full Text Available The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.

  17. Vision Screening

    Science.gov (United States)

    1993-01-01

    The Visi Screen OSS-C, marketed by Vision Research Corporation, incorporates image processing technology originally developed by Marshall Space Flight Center. Its advantage in eye screening is speed. Because it requires no response from a subject, it can be used to detect eye problems in very young children. An electronic flash from a 35 millimeter camera sends light into a child's eyes, which is reflected back to the camera lens. The photorefractor then analyzes the retinal reflexes generated and produces an image of the child's eyes, which enables a trained observer to identify any defects. The device is used by pediatricians, day care centers and civic organizations that concentrate on children with special needs.

  18. Robot vision

    International Nuclear Information System (INIS)

    Hall, E.L.

    1984-01-01

    Almost all industrial robots use internal sensors such as shaft encoders which measure rotary position, or tachometers which measure velocity, to control their motions. Most controllers also provide interface capabilities so that signals from conveyors, machine tools, and the robot itself may be used to accomplish a task. However, advanced external sensors, such as visual sensors, can provide a much greater degree of adaptability for robot control as well as add automatic inspection capabilities to the industrial robot. Visual and other sensors are now being used in fundamental operations such as material processing with immediate inspection, material handling with adaption, arc welding, and complex assembly tasks. A new industry of robot vision has emerged. The application of these systems is an area of great potential

  19. Experimental Vision Studies of Flow and Structural Effects on Wind Turbines

    DEFF Research Database (Denmark)

    Najafi, Nadia

    In the present thesis, two modern vision technologies are developed and used to study wind turbines: 1- Stereo vision to study vibrations and dynamics of the Vertical Axes Wind Turbine (VAWT) via operational modal analysis (OMA) 2- Background-oriented Schlieren (BOS) method to study the tip...... vortices that are shed from a Horizontal Axis Wind Turbine (HAWT) blades The thesis starts with an introduction to the stereo vision and OMA and is followed by two practical implementations of the basics derived in the introduction. In the first experiment, we developed the image processing tools...... a Nordtank horizontal axis wind turbine based on the density gradient in the vortex. The BOS method does not need complicated equipment such as special cameras or seeded flow, which makes it a convenient method to study large scale flows. However, the challenging part in the current case is the small...

  20. Color vision test

    Science.gov (United States)

    ... present from birth) color vision problems: Achromatopsia -- complete color blindness, seeing only shades of gray; Deuteranopia -- difficulty telling ... Also known as: Vision test - color; Ishihara color vision test.

  1. Impairments to Vision

    Science.gov (United States)

    Impairments to Vision: Normal Vision, Diabetic Retinopathy, Age-related Macular Degeneration. In this ... pictures, fixate on the nose to simulate the vision loss. In diabetic retinopathy, the blood vessels in ...

  2. Dynamic Shape Capture of Free-Swimming Aquatic Life using Multi-view Stereo

    Science.gov (United States)

    Daily, David

    2017-11-01

    The reconstruction and tracking of swimming fish in the past has either been restricted to flumes, small volumes, or sparse point tracking in large tanks. The purpose of this research is to use an array of cameras to automatically track 50-100 points on the surface of a fish using the multi-view stereo computer vision technique. The method is non-invasive, thus allowing the fish to swim freely in a large volume and to perform more advanced maneuvers such as rolling, darting, stopping, and reversing, which have not been studied. The techniques for obtaining and processing the 3D kinematics and maneuvers of tuna, sharks, stingrays, and other species will be presented and compared. This work was carried out in collaboration with the National Aquarium and the Naval Undersea Warfare Center.

  3. A mixed reality approach for stereo-tomographic quantification of lung nodules.

    Science.gov (United States)

    Chen, Mianyi; Kalra, Mannudeep K; Yun, Wenbing; Cong, Wenxiang; Yang, Qingsong; Nguyen, Terry; Wei, Biao; Wang, Ge

    2016-05-25

    To reduce the radiation dose and the equipment cost associated with lung CT screening, in this paper we propose a mixed reality based nodule measurement method with an active shutter stereo imaging system. Without involving hundreds of projection views and subsequent image reconstruction, we generated two projections of an iteratively placed ellipsoidal volume in the field of view and merged these synthetic projections with two original CT projections. We then demonstrated the feasibility of measuring the position and size of a nodule by observing whether the projections of the ellipsoidal volume and the nodule overlap, as judged by a human observer wearing active-shutter 3D vision glasses. The average errors of the measured nodule parameters are less than 1 mm in the simulated experiment with 8 viewers. Hence, the method could measure real nodules accurately in experiments with physically measured projections.

  4. Comparison of visual outcomes after bilateral implantation of extended range of vision and trifocal intraocular lenses.

    Science.gov (United States)

    Ruiz-Mesa, Ramón; Abengózar-Vela, Antonio; Aramburu, Ana; Ruiz-Santos, María

    2017-06-26

    To compare visual outcomes after cataract surgery with bilateral implantation of 2 intraocular lenses (IOLs): extended range of vision and trifocal. Each group of this prospective study comprised 40 eyes (20 patients). Phacoemulsification followed by bilateral implantation of a FineVision IOL (group 1) or a Symfony IOL (group 2) was performed. The following outcomes were assessed up to 1 year postoperatively: binocular uncorrected distance visual acuity (UDVA), binocular uncorrected intermediate visual acuity (UIVA) at 60 cm, binocular uncorrected near visual acuity (UNVA) at 40 cm, spherical equivalent (SE) refraction, defocus curves, mesopic and photopic contrast sensitivity, halometry, posterior capsule opacification (PCO), and responses to a patient questionnaire. The mean binocular values in group 1 and group 2, respectively, were SE -0.15 ± 0.25 D and -0.19 ± 0.18 D; UDVA 0.01 ± 0.03 logMAR and 0.01 ± 0.02 logMAR; UIVA 0.11 ± 0.08 logMAR and 0.09 ± 0.08 logMAR; UNVA 0.06 ± 0.07 logMAR and 0.17 ± 0.06 logMAR. The difference in UNVA between the IOLs was statistically significant. Both IOLs provided good visual outcomes. The FineVision IOL showed better near visual acuity. Predictability of the refractive results and optical performance were excellent; all patients achieved spectacle independence. The 2 IOLs gave similar and good contrast sensitivity in photopic and mesopic conditions and low perception of halos by patients.

  5. Alternative confidence measure for local matching stereo algorithms

    CSIR Research Space (South Africa)

    Ndhlovu, T

    2009-11-01

    Full Text Available The authors present a confidence measure applied to individual disparity estimates in local matching stereo correspondence algorithms. It aims at identifying textureless areas, where most local matching algorithms fail. The confidence measure works...
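
    Because the abstract above is truncated, here is a generic, commonly used confidence measure for local matching as an illustration of the idea, not necessarily the authors' measure: the peak ratio between the best and second-best matching costs, which is low in ambiguous, typically textureless, regions. The cost-volume layout is an assumption.

```python
# Generic peak-ratio confidence for a local matching cost volume (illustrative).
import numpy as np

def peak_ratio_confidence(cost_volume):
    """cost_volume: (D, H, W) matching costs per disparity, lower = better match."""
    sorted_costs = np.sort(cost_volume, axis=0)
    c1, c2 = sorted_costs[0], sorted_costs[1]
    return 1.0 - c1 / (c2 + 1e-8)   # near 0 when the two best costs tie (low texture)
```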

  6. Teater (stereo)tüüpide loojana / Anneli Saro

    Index Scriptorium Estoniae

    Saro, Anneli, 1968-

    2006-01-01

    Introduces the themes of the conference "Theatre as a creator of social and cultural (stereo)types", organised by the Estonian Association of Theatre Researchers and the University of Tartu chairs of theatre research and literary theory, held on 27 March at the University of Tartu History Museum.

  7. Infrared stereo calibration for unmanned ground vehicle navigation

    Science.gov (United States)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new and presents many challenges. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
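
    A sketch of the standard OpenCV stereo-calibration step and the RMS reprojection error it reports, assuming the hard part discussed above (a board whose pattern is actually detectable in IR imagery) has already yielded per-image object points and left/right corner locations. This is not the authors' specific methodology; function and variable names are assumptions.

```python
# Standard two-camera calibration followed by stereo extrinsic refinement (sketch).
import cv2

def calibrate_ir_stereo(obj_pts, img_pts_l, img_pts_r, image_size):
    # Calibrate each IR camera individually first, then refine only the extrinsics.
    _, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l, image_size, None, None)
    _, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r, image_size, None, None)
    rms, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, img_pts_l, img_pts_r, K1, D1, K2, D2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    print(f"RMS reprojection error: {rms:.3f} px")   # the accuracy figure reported above
    return K1, D1, K2, D2, R, T
```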

  8. Nonhuman Primate Studies to Advance Vision Science and Prevent Blindness.

    Science.gov (United States)

    Mustari, Michael J

    2017-12-01

    Most primate behavior is dependent on high acuity vision. Optimal visual performance in primates depends heavily upon frontally placed eyes, retinal specializations, and binocular vision. To see an object clearly its image must be placed on or near the fovea of each eye. The oculomotor system is responsible for maintaining precise eye alignment during fixation and generating eye movements to track moving targets. The visual system of nonhuman primates has a similar anatomical organization and functional capability to that of humans. This allows results obtained in nonhuman primates to be applied to humans. The visual and oculomotor systems of primates are immature at birth and sensitive to the quality of binocular visual and eye movement experience during the first months of life. Disruption of postnatal experience can lead to problems in eye alignment (strabismus), amblyopia, unsteady gaze (nystagmus), and defective eye movements. Recent studies in nonhuman primates have begun to discover the neural mechanisms associated with these conditions. In addition, genetic defects that target the retina can lead to blindness. A variety of approaches including gene therapy, stem cell treatment, neuroprosthetics, and optogenetics are currently being used to restore function associated with retinal diseases. Nonhuman primates often provide the best animal model for advancing fundamental knowledge and developing new treatments and cures for blinding diseases. © The Author(s) 2017. Published by Oxford University Press on behalf of the National Academy of Sciences. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  9. Validation of Pleiades Tri-Stereo DSM in Urban Areas

    Directory of Open Access Journals (Sweden)

    Emmanouil Panagiotakis

    2018-03-01

    Full Text Available We present an accurate digital surface model (DSM) derived from high-resolution Pleiades-1B 0.5 m panchromatic tri-stereo images, covering an area of 400 km2 over the Athens Metropolitan Area. Remote sensing and photogrammetry tools were applied, resulting in a 1 m × 1 m posting DSM over the study area. The accuracy of the produced DSM was evaluated against elevations measured by a differential Global Positioning System (d-GPS) and a reference DSM provided by the National Cadaster and Mapping Agency S.A. Different combinations of stereo and tri-stereo images were used and tested on the quality of the produced DSM. Results revealed that the DSM produced by the tri-stereo analysis has a root mean square error (RMSE) of 1.17 m in elevation, which lies within the best reported in the literature. On the other hand, DSMs derived by standard analysis of stereo-pairs from the same sensor were found to perform worse. Line profile data showed similar patterns between the reference and produced DSM. Pleiades tri-stereo high-quality DSM products have the necessary accuracy to support applications in the domains of urban planning, including climate change mitigation and adaptation, hydrological modelling, and natural hazards, being an important input for simulation models and morphological analysis at local scales.
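
    A short sketch of the accuracy check described above: sample the DSM at the d-GPS checkpoint locations and report the elevation RMSE (the paper reports 1.17 m for the tri-stereo DSM). The grid/transform layout and variable names are illustrative assumptions.

```python
# Elevation RMSE of a DSM against GPS checkpoints (illustrative sketch).
import numpy as np

def elevation_rmse(dsm, transform, checkpoints):
    """dsm: 2-D elevation grid; transform: (x0, y0, pixel_size) with north-up grid;
    checkpoints: (N, 3) array of easting, northing, measured elevation."""
    x0, y0, res = transform
    cols = ((checkpoints[:, 0] - x0) / res).astype(int)
    rows = ((y0 - checkpoints[:, 1]) / res).astype(int)
    diff = dsm[rows, cols] - checkpoints[:, 2]        # DSM minus d-GPS elevation
    return float(np.sqrt(np.mean(diff ** 2)))
```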

  10. Stereo side information generation in low-delay distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Forchhammer, Søren

    2012-01-01

    to create SI exploiting the inter-view spatial redundancy. A careful fusion of the two SI should be done in order to use the best part of each SI. In this work we study a Stereo Low-Delay scenario using only two views. Due to the delay constraint we use only past frames of the sequence we are decoding...... the two SIs, inspired by Multi-Hypothesis decoding. In this work the multiple hypotheses are used to fuse the SIs. Preliminary results show improvements up to 1 dB....

  11. STEREO PHOTO HYDROFEL, A PROCESS OF MAKING SAID STEREO PHOTO HYDROGEL, POLYMERS FOR USE IN MAKING SUCH HYDROGEL AND A PHARMACEUTICAL COMPRISING SAID POLYMERS

    NARCIS (Netherlands)

    Hiemstra, C.; Zhong, Zhiyuan; Feijen, Jan

    2008-01-01

    The Invention relates to a stereo photo hydrogel formed by stereo complexed and photo cross-linked polymers, which polymers comprise at least two types of polymers having at least one hydrophilic component, at least one hydrophobic mutually stereo complexing component, and at least one of the types

  12. Stereo and regioselectivity in ''Activated'' tritium reactions

    International Nuclear Information System (INIS)

    Ehrenkaufer, R.L.E.; Hembree, W.C.; Wolf, A.P.

    1988-01-01

    To investigate the stereo and positional selectivity of the microwave discharge activation (MDA) method, the tritium labeling of several amino acids was undertaken. The labeling of L-valine and the diastereomeric pair L-isoleucine and L-alloisoleucine showed less than statistical labeling at the α-amino C-H position mostly with retention of configuration. Labeling predominated at the single β C-H tertiary (methyne) position. The labeling of L-valine and L-proline with and without positive charge on the α-amino group resulted in large increases in specific activity (greater than 10-fold) when positive charge was removed by labeling them as their sodium carboxylate salts. Tritium NMR of L-proline labeled both as its zwitterion and sodium salt showed also large differences in the tritium distribution within the molecule. The distribution preferences in each of the charge states are suggestive of labeling by an electrophilic like tritium species(s). 16 refs., 5 tabs

  13. Enhanced operator perception through 3D vision and haptic feedback

    Science.gov (United States)

    Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren

    2012-06-01

    Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.

  14. Method used to test the imaging consistency of binocular camera's left-right optical system

    Science.gov (United States)

    Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui

    2016-09-01

    For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor that influences the overall imaging consistency. Conventional optical system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained from the multiple-threshold segmentation result, and the boundary is determined using the slope of contour lines near the pseudo-contour line. Third, a gray-level constraint based on the corresponding coordinates of the left and right images is established, and the imaging consistency is evaluated through the standard deviation σ of the imaging grayscale difference D (x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for carrying out imaging consistency testing of binocular cameras. When the 3σ spread of the imaging gray difference D (x, y) between the left and right optical systems of the binocular camera does not exceed 5%, the design requirements are considered to have been achieved. This method can be used effectively and paves the way for imaging consistency testing of binocular cameras.
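
    A minimal Python sketch of the final consistency check only (the contour-extraction and boundary steps are omitted); treating D(x, y) as a relative grayscale difference between registered left and right images is an assumption.

        import numpy as np

        def imaging_consistency(left, right):
            """Return sigma of the relative grayscale difference and the 3-sigma <= 5% check."""
            left = left.astype(np.float64)
            right = right.astype(np.float64)
            d = (left - right) / np.maximum(right, 1e-6)   # assumed definition of D(x, y)
            sigma = float(d.std())
            return sigma, 3.0 * sigma <= 0.05              # acceptance criterion from the paper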

  15. What Is Low Vision?

    Science.gov (United States)


  16. Is there any evidence for the validity of diagnostic criteria used for accommodative and nonstrabismic binocular dysfunctions?

    Directory of Open Access Journals (Sweden)

    Pilar Cacho-Martínez

    2014-01-01

    Conclusions: Scientific literature reveals differences between authors according to diagnostic criteria for accommodative and nonstrabismic binocular dysfunctions. Diagnostic accuracy studies show that there is only certain evidence for accommodative conditions. For binocular anomalies there is only evidence about a validated questionnaire for convergence insufficiency with no data of diagnostic accuracy.

  17. Visión binocular : diagnóstico y tratamiento

    OpenAIRE

    Borràs García, M. Rosa

    1996-01-01

    This book is aimed at all professionals in the field of optometry who wish to deepen their knowledge of binocular vision. It is also intended for third-year Optometry students, in both core and elective subjects. Its contents are divided into chapters that can be read independently, although it is advisable to approach the text as a whole. Its structure ranges from the most frequent binocular dysfunctions to strabismus, amblyopia...

  18. No-reference stereoscopic image quality measurement based on generalized local ternary patterns of binocular energy response

    International Nuclear Information System (INIS)

    Zhou, Wujie; Yu, Lu

    2015-01-01

    Perceptual no-reference (NR) quality measurement of stereoscopic images has become a challenging issue in three-dimensional (3D) imaging fields. In this article, we propose an efficient binocular quality-aware features extraction scheme, namely generalized local ternary patterns (GLTP) of binocular energy response, for general-purpose NR stereoscopic image quality measurement (SIQM). More specifically, we first construct the binocular energy response of a distorted stereoscopic image with different stimuli of amplitude and phase shifts. Then, the binocular quality-aware features are generated from the GLTP of the binocular energy response. Finally, these features are mapped to the subjective quality score of the distorted stereoscopic image by using support vector regression. Experiments on two publicly available 3D databases confirm the effectiveness of the proposed metric compared with the state-of-the-art full reference and NR metrics. (paper)
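
    As a simplified stand-in only, the sketch below computes a plain local ternary pattern (LTP) histogram and maps features to scores with support vector regression; it does not reproduce the generalized LTP of the binocular energy response described in the record, and train_features, train_scores and test_features are hypothetical names.

        import numpy as np
        from sklearn.svm import SVR

        def ltp_histogram(img, t=5):
            """Histogram of upper/lower local ternary pattern codes over a grayscale image."""
            img = img.astype(np.int32)
            c = img[1:-1, 1:-1]
            up = np.zeros_like(c)
            lo = np.zeros_like(c)
            nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
            for bit, (dy, dx) in enumerate(nbrs):
                n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
                up |= ((n - c) > t).astype(np.int32) << bit
                lo |= ((n - c) < -t).astype(np.int32) << bit
            hist = np.concatenate([np.bincount(up.ravel(), minlength=256),
                                   np.bincount(lo.ravel(), minlength=256)]).astype(float)
            return hist / hist.sum()

        # Regression from quality-aware features to subjective scores (hypothetical data):
        # model = SVR(kernel='rbf').fit(train_features, train_scores)
        # predicted = model.predict(test_features)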

  19. Quantitative visual fields under binocular viewing conditions in primary and consecutive divergent strabismus

    NARCIS (Netherlands)

    Joosse, M. V.; Simonsz, H. J.; van Minderhout, E. M.; Mulder, P. G.; de Jong, P. T.

    1999-01-01

    Although there have been a number of studies on the size of the suppression scotoma in divergent strabismus, there have been no reports on the full extent (i.e. size as well as depth) of this scotoma. Binocular static perimetry was used to measure suppression scotomas in five patients with primary

  20. Human cortical neural correlates of visual fatigue during binocular depth perception: An fNIRS study.

    Directory of Open Access Journals (Sweden)

    Tingting Cai

    Full Text Available Functional near-infrared spectroscopy (fNIRS) was adopted to investigate the cortical neural correlates of visual fatigue during binocular depth perception for different disparities (from 0.1° to 1.5°). By using a slow event-related paradigm, the oxyhaemoglobin (HbO) responses to fused binocular stimuli presented by the random-dot stereogram (RDS) were recorded over the whole visual dorsal area. To extract from an HbO curve the characteristics that are correlated with subjective experiences of stereopsis and visual fatigue, we proposed a novel method to fit the time-course HbO curve with various response functions which could reflect various processes of binocular depth perception. Our results indicate that the parietal-occipital cortices are spatially correlated with binocular depth perception and that the process of depth perception includes two steps, associated with generating and sustaining stereovision. Visual fatigue is caused mainly by generating stereovision, while the amplitude of the haemodynamic response corresponding to sustaining stereovision is correlated with stereopsis. Combining statistical parameter analysis and the fitted time-course analysis, fNIRS could be a promising method to study visual fatigue and possibly other multi-process neural bases.

  1. A dual-adaptive support-based stereo matching algorithm

    Science.gov (United States)

    Zhang, Yin; Zhang, Yun

    2017-07-01

    Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (viz., the Cross method), which, however, does not work well for different images. To address this issue, this paper proposes a novel dual adaptive support (viz., DAS)-based stereo matching method, which uses both appearance and shape information of a local region to segment supports automatically, and then integrates the DAS-based cost aggregation with the absolute difference plus census transform cost, scanline optimization and disparity refinement to develop a stereo matching system. The performance of the DAS method is also evaluated on the Middlebury benchmark and by comparison with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, with fewer parameters, and suitable for parallel computing.
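
    The absolute-difference-plus-census matching cost that the record combines with DAS aggregation can be sketched as follows; this is a minimal illustration for grayscale images at a single disparity with no aggregation, and the window size and truncation constants are assumptions.

        import numpy as np

        def census5x5(img):
            """5x5 census transform: bit i is set where the neighbour is darker than the centre."""
            h, w = img.shape
            codes = np.zeros((h - 4, w - 4), dtype=np.uint32)
            c = img[2:-2, 2:-2]
            bit = 0
            for dy in range(-2, 3):
                for dx in range(-2, 3):
                    if dy == 0 and dx == 0:
                        continue
                    n = img[2 + dy:h - 2 + dy, 2 + dx:w - 2 + dx]
                    codes |= (n < c).astype(np.uint32) << bit
                    bit += 1
            return codes

        def ad_census_cost(left, right, d, lam_ad=10.0, lam_census=30.0):
            """Per-pixel cost of matching the left image against the right image at disparity d."""
            shifted = np.roll(right, d, axis=1)        # align right pixel x-d with left pixel x
            ad = np.abs(left.astype(np.float64) - shifted.astype(np.float64))
            ham = np.zeros_like(ad)
            xor = census5x5(left) ^ census5x5(shifted)
            ham[2:-2, 2:-2] = np.unpackbits(           # Hamming distance of the census codes
                xor.view(np.uint8).reshape(*xor.shape, 4), axis=-1).sum(axis=-1)
            return 2.0 - np.exp(-ad / lam_ad) - np.exp(-ham / lam_census)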

  2. Burst mode trigger of STEREO in situ measurements

    Science.gov (United States)

    Jian, L. K.; Russell, C. T.; Luhmann, J. G.; Curtis, D.; Schroeder, P.

    2013-06-01

    Since the launch of the STEREO spacecraft, the in situ instrument suites have continued to modify their burst mode trigger in order to optimize the collection of high-cadence magnetic field, solar wind, and suprathermal electron data. This report reviews the criteria used for the burst mode trigger and their evolution with time. From 2007 to 2011, the twin STEREO spacecraft observed 236 interplanetary shocks, and 54% of them were captured by the burst mode trigger. The capture rate increased remarkably with time, from 30% in 2007 to 69% in 2011. We evaluate the performance of multiple trigger criteria and investigate why some of the shocks were missed by the trigger. Lessons learned from STEREO are useful for future missions, because the telemetry bandwidth needed to capture the waveforms of high frequency but infrequent events would be unaffordable without an effective burst mode trigger.

  3. A NEW VIEW OF CORONAL WAVES FROM STEREO

    International Nuclear Information System (INIS)

    Ma, S.; Lin, J.; Zhao, S.; Li, Q.; Wills-Davey, M. J.; Attrill, G. D. R.; Golub, L.; Chen, P. F.; Chen, H.

    2009-01-01

    On 2007 December 7, there was an eruption from AR 10977, which also hosted a sigmoid. An EUV Imaging Telescope (EIT) wave associated with this eruption was observed by EUVI on board the Solar Terrestrial Relations Observatory (STEREO). Using EUVI images in the 171 Å and the 195 Å passbands from both STEREO A and B, we study the morphology and kinematics of this EIT wave. In the early stages, images of the EIT wave from the two STEREO spacecraft differ markedly. We determine that the EUV fronts observed at the very beginning of the eruption likely include some intensity contribution from the associated coronal mass ejection (CME). Additionally, our velocity measurements suggest that the EIT wave front may propagate at nearly constant velocity. Both results offer constraints on current models and understanding of EIT waves.

  4. Terrain Relative Navigation for Planetary Landing using Stereo Vision : Measurements Obtained from Hazard Mapping

    NARCIS (Netherlands)

    Woicke, S.; Mooij, E.

    2017-01-01

    As a result of new aviation legislation, from 2019 on all air-carrier pilots are obliged to go through flight simulator-based stall recovery training. For this reason the Control and Simulation division at Delft University of Technology has set up a task force to develop a new methodology for

  5. A stereo vision method for tracking particle flow on the weld pool surface

    NARCIS (Netherlands)

    Zhao, C.X.; Richardson, I.M.; Kenjeres, S.; Kleijn, C.R.; Saldi, Z.

    2009-01-01

    The oscillation of a weld pool surface makes the fluid flow motion quite complex. Two-dimensional results cannot reflect enough information to quantitatively describe the fluid flow in the weld pool; however, there are few direct three-dimensional results available. In this paper, we describe a

  6. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    Science.gov (United States)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models, which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. This kind of instrument should also be automated and robust, since it may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as in most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover and atmospheric visibility that ensure the safety of pilots and planes. Although there are instruments available on the market to measure those parameters, their relatively high cost makes them unavailable in many local aerodromes. In this work we present a new prototype which has recently been developed and deployed in a local aerodrome as a proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The main new development is a new geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry for measuring the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after measuring the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments, which allow its cost to remain low even with its increased functionality. New control software was also developed to ensure that the two cameras are triggered simultaneously; this is a major requirement that affects the final uncertainty of the measurements due to the constant movement of the clouds in the sky. Since accurate orientation of the cameras can be a very demanding task in field deployments, an automated calibration procedure has been developed that removes the need for an accurate alignment. It consists of photographing the stars, which do not exhibit parallax due to the long distances involved, and deducing the inherent misalignments of the two cameras. The known misalignments are then used to correct the cloud photos. These developments will be described in detail, along with an uncertainty analysis of the measurement setup. Measurements of cloud base height and atmospheric visibility will be presented and compared with measurements from other in-situ instruments. This work was supported by FCT project PTDC/CTE-ATM/115833/2009 and Program COMPETE FCOMP-01-0124-FEDER-014508
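
    A toy Python sketch of the underlying parallax relation only; it assumes parallel, zenith-pointing cameras and a known pixel pitch, which is simpler than the prototype's actual geometry, and the numbers are made up.

        def cloud_base_height(disparity_px, baseline_m, focal_mm, pixel_pitch_um):
            """Cloud height above the cameras, in metres, from the measured pixel parallax."""
            disparity_m = disparity_px * pixel_pitch_um * 1e-6
            return baseline_m * (focal_mm * 1e-3) / disparity_m

        # Example (assumed values): a 100 m baseline, 8 mm lens and 5 um pixels with
        # 160 px of parallax place the cloud base at about 1000 m.
        print(cloud_base_height(160.0, 100.0, 8.0, 5.0))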

  7. Stereo vision with texture learning for fault-tolerant automatic baling

    DEFF Research Database (Denmark)

    Blas, Morten Rufus; Blanke, Mogens

    2010-01-01

    This paper presents advances in using stereovision for automating baling. A robust classification scheme is demonstrated for learning and classifying based on texture and shape. Using a state-of-the-art texton approach, a fast classifier is obtained that can handle non-linearities in the data... The addition of shape information makes the method robust to large variations and greatly reduces false alarms by applying tight geometrical constraints. The classifier is tested on data from a stereovision guidance system on a tractor. The system is able to classify cut plant material (called swath...) by learning its appearance. A 3D classifier is used to train and supervise the texture classifier...

  8. Pavement Distress Evaluation Using 3D Depth Information from Stereo Vision

    Science.gov (United States)

    2012-07-01

    The focus of the current project funded by MIOH-UTC for the period 9/1/2010-8/31/2011 is to : enhance our earlier effort in providing a more robust image processing based pavement distress : detection and classification system. During the last few de...

  9. Sensor Fusion - Sonar and Stereo Vision, Using Occupancy Grids and SIFT

    DEFF Research Database (Denmark)

    Plascencia, Alfredo; Bendtsen, Jan Dimon

    2006-01-01

    to the occupied and empty regions. SIFT (Scale Invariant Feature Transform) feature descriptors are interpreted using Gaussian probabilistic error models. The use of occupancy grids is proposed for representing the sonar readings as well as the feature descriptors. The Bayesian estimation approach is applied... to update the sonar and the SIFT descriptors' uncertainty grids. The sensor fusion yields a significant reduction in the uncertainty of the occupancy grid compared to the individual sensor readings...
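
    A minimal sketch of the log-odds form of the Bayesian occupancy-grid update the record relies on; the inverse sensor probabilities and grid geometry here are assumptions, not the paper's models.

        import numpy as np

        def logodds(p):
            return np.log(p / (1.0 - p))

        def update_grid(grid_logodds, cell_indices, p_occupied_given_reading):
            """Fuse one sensor reading (sonar or SIFT-based) into the grid in place."""
            grid_logodds[cell_indices] += logodds(p_occupied_given_reading)
            return grid_logodds

        # Two independent sensors reinforcing the same cell drive its probability above what
        # either reading supports alone, which is the kind of uncertainty reduction reported
        # for the fused grid.
        grid = np.zeros((100, 100))                      # log-odds 0 == probability 0.5
        update_grid(grid, (50, 50), 0.7)                 # assumed sonar reading
        update_grid(grid, (50, 50), 0.7)                 # assumed stereo/SIFT reading
        posterior = 1.0 / (1.0 + np.exp(-grid[50, 50]))  # back to probability (about 0.84)
        print(posterior)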

  10. Development of an Image Fringe Zero Selection System for Structuring Elements with Stereo Vision Disparity Measurements

    International Nuclear Information System (INIS)

    Grindley, Josef E; Jiang Lin; Tickle, Andrew J

    2011-01-01

    When performing image operations involving a Structuring Element (SE) and many transforms, it is required that the outside of the image be padded with zeros or ones, depending on the operation. This paper details how this can be achieved with simulated hardware using DSP Builder in Matlab, with the intention of migrating the design to HDL (Hardware Description Language) and implementing it on an FPGA (Field Programmable Gate Array). The design takes few resources and does not require extra memory to account for the change in size of the output image.
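
    The padding issue can be illustrated in a few lines of NumPy/SciPy (software only, unrelated to the DSP Builder/FPGA design itself): dilation needs a zero fringe so no foreground leaks in from outside, while erosion needs a one fringe so border pixels are not eroded merely because the structuring element overhangs the edge.

        import numpy as np
        from scipy.ndimage import binary_dilation, binary_erosion

        img = np.ones((4, 4), dtype=bool)                  # toy all-foreground image

        # Dilation: pad the fringe with zeros (False) before applying the SE.
        dil = binary_dilation(np.pad(img, 1, constant_values=False))[1:-1, 1:-1]

        # Erosion: pad the fringe with ones (True) before applying the SE.
        ero = binary_erosion(np.pad(img, 1, constant_values=True))[1:-1, 1:-1]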

  11. Calibration of a dual-PTZ camera system for stereo vision

    Science.gov (United States)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2010-08-01

    In this paper, we propose a calibration process for the intrinsic and extrinsic parameters of dual-PTZ camera systems. The calibration is based on a complete definition of six coordinate systems fixed at the image planes and the pan and tilt rotation axes of the cameras. Misalignments between estimated and ideal coordinates of image corners are formed into cost values to be solved by the Nelder-Mead simplex optimization method. Experimental results show that the system is able to obtain 3D coordinates of objects with a consistent accuracy of 1 mm when the distance between the dual-PTZ camera set and the objects is from 0.9 to 1.1 meters.
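
    A sketch of the optimization step only, using SciPy's Nelder-Mead simplex routine; the three-parameter misalignment model and the corner arrays are placeholders, not the paper's formulation.

        import numpy as np
        from scipy.optimize import minimize

        def misalignment_cost(params, observed_corners, ideal_corners):
            """Sum of squared distances between observed and model-predicted corner positions."""
            dx, dy, scale = params                     # hypothetical 3-parameter model
            predicted = ideal_corners * scale + np.array([dx, dy])
            return np.sum((observed_corners - predicted) ** 2)

        # observed_corners, ideal_corners = ...        # hypothetical N x 2 arrays of image points
        # result = minimize(misalignment_cost, x0=[0.0, 0.0, 1.0],
        #                   args=(observed_corners, ideal_corners), method="Nelder-Mead")
        # print(result.x, result.fun)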

  12. Get a head in telepresence: active vision for remote intervention

    International Nuclear Information System (INIS)

    Pretlove, J.

    1996-01-01

    Despite advances in robotic systems, many tasks needing to be undertaken in hazardous environments require human control. The risk to human life can be reduced or minimised using an integrated control system comprising an active controllable stereo vision system and a virtual reality head-mounted display. The human operator is then immersed in and can interact with the remote environment in complete safety. An overview is presented of the design and development of just such an advanced, dynamic telepresence system, developed at the Department of Mechanical Engineering at the University of Surrey. (UK)

  13. PixonVision real-time video processor

    Science.gov (United States)

    Puetter, R. C.; Hier, R. G.

    2007-09-01

    PixonImaging LLC and DigiVision, Inc. have developed a real-time video processor, the PixonVision PV-200, based on the patented Pixon method for image deblurring and denoising, and DigiVision's spatially adaptive contrast enhancement processor, the DV1000. The PV-200 can process NTSC and PAL video in real time with a latency of 1 field (1/60 th of a second), remove the effects of aerosol scattering from haze, mist, smoke, and dust, improve spatial resolution by up to 2x, decrease noise by up to 6x, and increase local contrast by up to 8x. A newer version of the processor, the PV-300, is now in prototype form and can handle high definition video. Both the PV-200 and PV-300 are FPGA-based processors, which could be spun into ASICs if desired. Obvious applications of these processors include applications in the DOD (tanks, aircraft, and ships), homeland security, intelligence, surveillance, and law enforcement. If developed into an ASIC, these processors will be suitable for a variety of portable applications, including gun sights, night vision goggles, binoculars, and guided munitions. This paper presents a variety of examples of PV-200 processing, including examples appropriate to border security, battlefield applications, port security, and surveillance from unmanned aerial vehicles.

  14. On the functional order of binocular rivalry and blind spot filling-in.

    Science.gov (United States)

    Qian, Cheng S; Brascamp, Jan W; Liu, Taosheng

    2017-07-01

    Binocular rivalry is an important phenomenon for understanding the mechanisms of visual awareness. Here we assessed the functional locus of binocular rivalry relative to blind spot filling-in, which is thought to transpire in V1, thus providing a reference point for assessing the locus of rivalry. We conducted two experiments to explore the functional order of binocular rivalry and blind spot filling-in. Experiment 1 examined if the information filled-in at the blind spot can engage in rivalry with a physical stimulus at the corresponding location in the fellow eye. Participants' perceptual reports showed no difference between this condition and a condition where filling-in was precluded by presenting the same stimuli away from the blind spot, suggesting that the rivalry process is not influenced by any filling-in that might occur. In Experiment 2, we presented the fellow eye's stimulus directly in rivalry with the 'inducer' stimulus that surrounds the blind spot, and compared it with two control conditions away from the blind spot: one involving a ring physically identical to the inducer, and one involving a disc that resembled the filled-in percept. Perceptual reports in the blind spot condition resembled those in the 'ring' condition, more than those in the latter, 'disc' condition, indicating that a perceptually suppressed inducer does not engender filling-in. Thus, our behavioral data suggest binocular rivalry functionally precedes blind spot filling-in. We conjecture that the neural substrate of binocular rivalry suppression includes processing stages at or before V1. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Implementation of a Self-Consistent Stereo Processing Chain for 3D Stereo Reconstruction of the Lunar Landing Sites

    Science.gov (United States)

    Tasdelen, E.; Willner, K.; Unbekannt, H.; Glaeser, P.; Oberst, J.

    2014-04-01

    The department for Planetary Geodesy at Technical University Berlin is developing routines for photogrammetric processing of planetary image data to derive 3D representations of planetary surfaces. The Integrated Software for Imagers and Spectrometers (ISIS) software (Anderson et al., 2004), developed by USGS, Flagstaff, is readily available, open source, and very well documented. Hence, ISIS was chosen as a prime processing platform and tool kit. However, ISIS does not provide a full photogrammetric stereo processing chain. Several components like image matching, bundle block adjustment (until recently) or digital terrain model (DTM) interpolation from 3D object points are missing. Our group aims to complete this photogrammetric stereo processing chain by implementing the missing components, taking advantage of already existing ISIS classes and functionality. We report here on the current status of the development of our stereo processing chain and its first application on the Lunar Apollo landing sites.

  16. Low Vision Tips

    Science.gov (United States)


  17. Chemicals Industry Vision

    Energy Technology Data Exchange (ETDEWEB)

    none,

    1996-12-01

    Chemical industry leaders articulated a long-term vision for the industry, its markets, and its technology in the groundbreaking 1996 document Technology Vision 2020 - The U.S. Chemical Industry.

  18. An Evaluation of the Effectiveness of Stereo Slides in Teaching Geomorphology.

    Science.gov (United States)

    Giardino, John R.; Thornhill, Ashton G.

    1984-01-01

    Provides information about producing stereo slides and their use in the classroom. Describes an evaluation of the teaching effectiveness of stereo slides using two groups of 30 randomly selected students from introductory geomorphology. Results from a pretest/postttest measure show that stereo slides significantly improved understanding. (JM)

  19. Using Weightless Neural Networks for Vergence Control in an Artificial Vision System

    Directory of Open Access Journals (Sweden)

    Karin S. Komati

    2003-01-01

    Full Text Available This paper presents a methodology we have developed and used to implement an artificial binocular vision system capable of emulating the vergence of eye movements. This methodology involves using weightless neural networks (WNNs) as building blocks of artificial vision systems. Using the proposed methodology, we have designed several architectures of WNN-based artificial vision systems, in which images captured by virtual cameras are used for controlling the position of the ‘foveae’ of these cameras (the high-resolution region of the images captured). Our best architecture is able to control the foveae vergence movements with an average error of only 3.58 image pixels, which is equivalent to an angular error of approximately 0.629°.

  20. Effects of brief daily periods of unrestricted vision during early monocular form deprivation on development of visual area 2.

    Science.gov (United States)

    Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Harwerth, Ronald S; Smith, Earl L; Chino, Yuzo M

    2011-09-14

    Providing brief daily periods of unrestricted vision during early monocular form deprivation reduces the depth of amblyopia. To gain insights into the neural basis of the beneficial effects of this treatment, the binocular and monocular response properties of neurons were quantitatively analyzed in visual area 2 (V2) of form-deprived macaque monkeys. Beginning at 3 weeks of age, infant monkeys were deprived of clear vision in one eye for 12 hours every day until 21 weeks of age. They received daily periods of unrestricted vision for 0, 1, 2, or 4 hours during the form-deprivation period. After behavioral testing to measure the depth of the resulting amblyopia, microelectrode-recording experiments were conducted in V2. The ocular dominance imbalance away from the affected eye was reduced in the experimental monkeys and was generally proportional to the reduction in the depth of amblyopia in individual monkeys. There were no interocular differences in the spatial properties of V2 neurons in any subject group. However, the binocular disparity sensitivity of V2 neurons was significantly higher and binocular suppression was lower in monkeys that had unrestricted vision. The decrease in ocular dominance imbalance in V2 was the neuronal change most closely associated with the observed reduction in the depth of amblyopia. The results suggest that the degree to which extrastriate neurons can maintain functional connections with the deprived eye (i.e., reducing undersampling for the affected eye) is the most significant factor associated with the beneficial effects of brief periods of unrestricted vision.

  1. Wind-Sculpted Vicinity After Opportunity's Sol 1797 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11820. NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 111 meters (364 feet) on the 1,797th Martian day, or sol, of Opportunity's surface mission (Feb. 12, 2009). North is at the center; south at both ends. Tracks from the drive recede northward across dark-toned sand ripples in the Meridiani Planum region of Mars. Patches of lighter-toned bedrock are visible on the left and right sides of the image. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). This view is presented as a cylindrical-perspective projection with geometric seam correction.

  2. People counting with stereo cameras : two template-based solutions

    NARCIS (Netherlands)

    Englebienne, Gwenn; van Oosterhout, Tim; Kröse, B.J.A.

    2012-01-01

    People counting is a challenging task with many applications. We propose a method with a fixed stereo camera that is based on projecting a template onto the depth image. The method was tested on a challenging outdoor dataset with good results and runs in real time.

  3. [EEG technician-nurse collaboration during stereo-electroencephalography].

    Science.gov (United States)

    Jomard, Caroline; Benghezal, Mouna; Cheramy, Isabelle; De Beaumont, Ségolène

    2017-01-01

    Drug-resistant epilepsy has significant repercussions on the daily life of children. Surgery may represent a hope. The nurse and the electroencephalogram technician carry out important teamwork during pre-surgical assessment tests and notably the stereo-electroencephalography. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  4. VPython: Python plus Animations in Stereo 3D

    Science.gov (United States)

    Sherwood, Bruce

    2004-03-01

    Python is a modern object-oriented programming language. VPython (http://vpython.org) is a combination of Python (http://python.org), the Numeric module from LLNL (http://www.pfdubois.com/numpy), and the Visual module created by David Scherer, all of which have been under continuous development as open source projects. VPython makes it easy to write programs that generate real-time, navigable 3D animations. The Visual module includes a set of 3D objects (sphere, cylinder, arrow, etc.), tools for creating other shapes, and support for vector algebra. The 3D renderer runs in a parallel thread, and animations are produced as a side effect of computations, freeing the programmer to concentrate on the physics. Applications include educational and research visualization. In the Fall of 2003 Hugh Fisher at the Australian National University, John Zelle at Wartburg College, and I contributed to a new stereo capability of VPython. By adding a single statement to an existing VPython program, animations can be viewed in true stereo 3D. One can choose several modes: active shutter glasses, passive polarized glasses, or colored glasses (e.g. red-cyan). The talk will demonstrate the new stereo capability and discuss the pros and cons of various schemes for display of stereo 3D for a large audience. Supported in part by NSF grant DUE-0237132.
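
    The 'single statement' referred to is, as best recalled from the classic VPython (Visual module) documentation, an assignment to the display's stereo attribute; the attribute name, mode strings and API in the sketch below should therefore be treated as assumptions.

        from visual import sphere, vector, rate, scene     # classic VPython (Visual module)

        scene.stereo = 'redcyan'        # assumed modes also include e.g. 'passive' and 'active'

        ball = sphere(pos=vector(-5, 0, 0), radius=0.5)
        while ball.pos.x < 5:
            rate(60)                    # cap the animation at 60 frames per second
            ball.pos.x += 0.05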

  5. A Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    Science.gov (United States)

    Buyens, Wim; van Dijk, Bas; Wouters, Jan; Moonen, Marc

    2015-10-01

    Listening to music is still one of the more challenging aspects of using a cochlear implant (CI) for most users. Simple musical structures, a clear rhythm/beat, and lyrics that are easy to follow are among the top factors contributing to music appreciation for CI users. Modifying the audio mix of complex music potentially improves music enjoyment in CI users. A stereo music preprocessing scheme is described in which vocals, drums, and bass are emphasized based on the representation of the harmonic and the percussive components in the input spectrogram, combined with the spatial allocation of instruments in typical stereo recordings. The scheme is assessed with postlingually deafened CI subjects (N = 7) using pop/rock music excerpts with different complexity levels. The scheme is capable of modifying relative instrument level settings, with the aim of improving music appreciation in CI users, and allows individual preference adjustments. The assessment with CI subjects confirms the preference for more emphasis on vocals, drums, and bass as offered by the preprocessing scheme, especially for songs with higher complexity. The stereo music preprocessing scheme has the potential to improve music enjoyment in CI users by modifying the audio mix in widespread (stereo) music recordings. Since music enjoyment in CI users is generally poor, this scheme can assist the music listening experience of CI users as a training or rehabilitation tool.
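
    For illustration only, a simplified librosa-based sketch in the same spirit: boost the centre channel, where vocals and bass usually sit in stereo recordings, and its percussive part. This is not the authors' preprocessing scheme, and the file path and gain values are arbitrary assumptions.

        import numpy as np
        import librosa

        def emphasize_vocals_drums(path, vocal_gain=1.5, drum_gain=1.5, side_gain=0.6):
            """Re-weight a stereo recording towards the centre channel and its percussive part."""
            y, sr = librosa.load(path, sr=None, mono=False)     # stereo file, two rows
            left, right = y[0], y[1]
            mid = 0.5 * (left + right)     # centre: vocals and bass usually dominate here
            side = 0.5 * (left - right)    # sides: wide accompaniment and reverb
            harmonic, percussive = librosa.effects.hpss(mid)    # harmonic/percussive split
            new_mid = vocal_gain * harmonic + drum_gain * percussive
            out = np.stack([new_mid + side_gain * side, new_mid - side_gain * side])
            return out, sr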

  6. Characterising atmospheric optical turbulence using stereo-SCIDAR

    Science.gov (United States)

    Osborn, James; Butterley, Tim; Föhring, Dora; Wilson, Richard

    2015-04-01

    Stereo-SCIDAR (SCIntillation Detection and Ranging) is a development to the well known SCIDAR method for characterisation of the Earth's atmospheric optical turbulence. Here we present some interesting capabilities, comparisons and results from a recent campaign on the 2.5 m Isaac Newton Telescope on La Palma.

  7. Solving the uncalibrated photometric stereo problem using total variation

    DEFF Research Database (Denmark)

    Quéau, Yvain; Lauze, Francois Bernard; Durou, Jean-Denis

    2013-01-01

    In this paper we propose a new method to solve the problem of uncalibrated photometric stereo, making very weak assumptions on the properties of the scene to be reconstructed. Our goal is to solve the generalized bas-relief ambiguity (GBR) by performing a total variation regularization of both...

  8. Utility of Digital Stereo Images for Optic Disc Evaluation

    Science.gov (United States)

    Ying, Gui-shuang; Pearson, Denise J.; Bansal, Mayank; Puri, Manika; Miller, Eydie; Alexander, Judith; Piltz-Seymour, Jody; Nyberg, William; Maguire, Maureen G.; Eledath, Jayan; Sawhney, Harpreet

    2010-01-01

    Purpose. To assess the suitability of digital stereo images for optic disc evaluations in glaucoma. Methods. Stereo color optic disc images in both digital and 35-mm slide film formats were acquired contemporaneously from 29 subjects with various cup-to-disc ratios (range, 0.26–0.76; median, 0.475). Using a grading scale designed to assess image quality, the ease of visualizing optic disc features important for glaucoma diagnosis, and the comparative diameters of the optic disc cup, experienced observers separately compared the primary digital stereo images to each subject's 35-mm slides, to scanned images of the same 35-mm slides, and to grayscale conversions of the digital images. Statistical analysis accounted for multiple gradings and comparisons and also assessed image formats under monoscopic viewing. Results. Overall, the quality of primary digital color images was judged superior to that of 35-mm slides, and the digital color images were mostly equivalent to the scanned digitized images of the same slides. Color seemingly added little to grayscale optic disc images, except that peripapillary atrophy was best seen in color. The advantage of digital over film images was maintained under monoscopic viewing conditions. Conclusions. Digital stereo optic disc images are useful for evaluating the optic disc in glaucoma and allow the application of advanced image processing applications. Grayscale images, by providing luminance distinct from color, may be informative for assessing certain features. PMID:20505199

  9. Review of literature on hearing damage by personal stereo

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez, Rodrigo Pizarro

    2006-01-01

    In the 1980s and 1990s there was a general concern for the high levels that personal stereo systems were capable of producing. At that time no standardized method for the determination of exposure levels existed, which could have contributed to overly conservative conclusions. With the publicatio...

  10. Loss of Binocular Vision in Monocularly Blind Patients Causes Selective Degeneration of the Superior Lateral Occipital Cortices

    NARCIS (Netherlands)

    Prins, Doety D; Jansonius, Nomdo M.; Cornelissen, Frans W.

    2017-01-01

    PURPOSE. Chronic ocular pathology, such as glaucoma and macular degeneration, is associated with neuroanatomic changes in the visual pathways. It is a challenge to determine the mechanism responsible for these changes. This could be functional deprivation or transsynaptic degeneration. Acquired

  11. Disparity-driven vs blur-driven models of accommodation and convergence in binocular vision and intermittent strabismus.

    Science.gov (United States)

    Horwood, Anna M; Riddell, Patricia M

    2014-12-01

    To propose an alternative and practical model to conceptualize clinical patterns of concomitant intermittent strabismus, heterophoria, and convergence and accommodation anomalies. Despite identical ratios, there can be a disparity- or blur-biased "style" in three hypothetical scenarios: normal; high ratio of accommodative convergence to accommodation (AC/A) and low ratio of convergence accommodation to convergence (CA/C); low AC/A and high CA/C. We calculated disparity bias indices (DBI) to reflect these biases and provide early objective data from small illustrative clinical groups that fit these styles. Normal adults (n = 56) and children (n = 24) showed disparity bias (adult DBI 0.43 [95% CI, 0.50-0.36], child DBI 0.20 [95% CI, 0.31-0.07]; P = 0.001). Accommodative esotropia (n = 3) showed less disparity-bias (DBI 0.03). In the high AC/A-low CA/C scenario, early presbyopia (n = 22) showed mean DBI of 0.17 (95% CI, 0.28-0.06), compared to DBI of -0.31 in convergence excess esotropia (n=8). In the low AC/A-high CA/C scenario near exotropia (n = 17) showed mean DBI of 0.27. DBI ranged between 1.25 and -1.67. Establishing disparity or blur bias adds to AC/A and CA/C ratios to explain clinical patterns. Excessive bias or inflexibility in near-cue use increases risk of clinical problems. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  12. A child's vision.

    Science.gov (United States)

    Nye, Christina

    2014-06-01

    Implementing standard vision screening techniques in the primary care practice is the most effective means to detect children with potential vision problems at an age when the vision loss may be treatable. A critical period of vision development occurs in the first few weeks of life; thus, it is imperative that serious problems are detected at this time. Although it is not possible to quantitate an infant's vision, evaluating ocular health appropriately can mean the difference between sight and blindness and, in the case of retinoblastoma, life or death. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Three-dimensional vision enhances task performance independently of the surgical method.

    Science.gov (United States)

    Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A

    2012-10-01

    Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach. Task performance was better with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.

  14. VisGraB: A Benchmark for Vision-Based Grasping. Paladyn Journal of Behavioral Robotics

    DEFF Research Database (Denmark)

    Kootstra, Gert; Popovic, Mila; Jørgensen, Jimmy Alison

    2012-01-01

    that a large number of grasps can be executed and evaluated while dealing with dynamics and the noise and uncertainty present in the real world images. VisGraB enables a fair comparison among different grasping methods. The user furthermore does not need to deal with robot hardware, focusing on the vision......We present a database and a software tool, VisGraB, for benchmarking of methods for vision-based grasping of unknown objects with no prior object knowledge. The benchmark is a combined real-world and simulated experimental setup. Stereo images of real scenes containing several objects in different...

  15. Vision-based vehicle detection and tracking algorithm design

    Science.gov (United States)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

    The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.

  16. A Hybrid Vision-Map Method for Urban Road Detection

    Directory of Open Access Journals (Sweden)

    Carlos Fernández

    2017-01-01

    Full Text Available A hybrid vision-map system is presented to solve the road detection problem in urban scenarios. The standardized use of machine learning techniques in classification problems has been merged with digital navigation map information to increase system robustness. The objective of this paper is to create a new environment perception method to detect the road in urban environments, fusing stereo vision with digital maps by detecting road appearance and road limits such as lane markings or curbs. Deep learning approaches make the system hard-coupled to the training set. Even though our approach is based on machine learning techniques, the features are calculated from different sources (GPS, map, curbs, etc.), making our system less dependent on the training set.

  17. Joint depth map and color consistency estimation for stereo images with different illuminations and cameras.

    Science.gov (United States)

    Heo, Yong Seok; Lee, Kyoung Mu; Lee, Sang Uk

    2013-05-01

    In this paper, we propose a method that infers both accurate depth maps and color-consistent stereo images for radiometrically varying stereo images. In general, stereo matching and performing color consistency between stereo images are a chicken-and-egg problem, since it is not a trivial task to simultaneously achieve both goals. Hence, we have developed an iterative framework in which these two processes can boost each other. First, we transform the input color images to log-chromaticity color space, from which a linear relationship can be established during construction of a joint pdf of the transformed left and right color images. From this joint pdf, we can estimate a linear function that relates the corresponding pixels in stereo images. Based on this linear property, we present a new stereo matching cost by combining Mutual Information (MI), the SIFT descriptor, and segment-based plane-fitting to robustly find correspondence for stereo image pairs which undergo radiometric variations. Meanwhile, we devise a Stereo Color Histogram Equalization (SCHE) method to produce color-consistent stereo image pairs, which conversely boost the disparity map estimation. Experimental results show that our method produces both accurate depth maps and color-consistent stereo images, even for stereo images with severe radiometric differences.
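
    A small Python sketch of one common log-chromaticity mapping (normalization by the per-pixel geometric mean); the paper's exact definition may differ, so treat this as an assumption-laden illustration.

        import numpy as np

        def log_chromaticity(rgb, eps=1e-6):
            """rgb: float array of shape (H, W, 3) with values in (0, 1]."""
            r, g, b = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
            gm = (r * g * b) ** (1.0 / 3.0)                # per-pixel geometric mean
            return np.stack([np.log(r / gm), np.log(g / gm), np.log(b / gm)], axis=-1)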

  18. Large Binocular Telescope Observations of Europa Occulting Io's Volcanoes at 4.8 μm

    Science.gov (United States)

    Skrutskie, Michael F.; Conrad, Albert; Resnick, Aaron; Leisenring, Jarron; Hinz, Phil; de Pater, Imke; de Kleer, Katherine; Spencer, John; Skemer, Andrew; Woodward, Charles E.; Davies, Ashley Gerard; Defrére, Denis

    2015-11-01

    On 8 March 2015 Europa passed nearly centrally in front of Io. The Large Binocular Telescope observed this event in dual-aperture AO-corrected Fizeau interferometric imaging mode using the mid-infrared imager LMIRcam operating behind the Large Binocular Telescope Interferometer (LBTI) at a broadband wavelength of 4.8 μm (M-band). Occultation light curves generated from frames recorded every 123 milliseconds show that both Loki and Pele/Pillan were well resolved. Europa's center shifted by 2 kilometers relative to Io from frame-to-frame. The derived light curve for Loki is consistent with the double-lobed structure reported by Conrad et al. (2015) using direct interferometric imaging with LBTI.

  19. The influence of chromatic context on binocular color rivalry: Perception and neural representation

    Science.gov (United States)

    Hong, Sang Wook; Shevell, Steven K.

    2008-01-01

    The predominance of rivalrous targets is affected by surrounding context when stimuli rival in orientation, motion or color. This study investigated the influence of chromatic context on binocular color rivalry. The predominance of rivalrous chromatic targets was measured in various surrounding contexts. The first experiment showed that a chromatic surround's influence was stronger when the surround was uniform or a grating with luminance contrast (chromatic/black grating) compared to an equiluminant grating (chromatic/white). The second experiment revealed virtually no effect of the orientation of the surrounding chromatic context, using chromatically rivalrous vertical gratings. These results are consistent with a chromatic representation of the context by a non-oriented, chromatically selective and spatially antagonistic receptive field. Neither a double-opponent receptive field nor a receptive field without spatial antagonism accounts for the influence of context on binocular color rivalry. PMID:18331750

  20. An iPod treatment of amblyopia: an updated binocular approach.

    Science.gov (United States)

    Hess, Robert F; Thompson, B; Black, J M; Machara, G; Zhang, P; Bobier, W R; Cooperstock, J

    2012-02-15

    We describe the successful translation of computerized and space-consuming laboratory equipment for the treatment of suppression to a small handheld iPod device (Apple iPod; Apple Inc., Cupertino, California). A portable and easily obtainable Apple iPod display, using current video technology offers an ideal solution for the clinical treatment of suppression. The following is a description of the iPod device and illustrates how a video game has been adapted to provide the appropriate stimulation to implement our recent antisuppression treatment protocol. One to 2 hours per day of video game playing under controlled conditions for 1 to 3 weeks can improve acuity and restore binocular function, including stereopsis in adults, well beyond the age at which traditional patching is used. This handheld platform provides a convenient and effective platform for implementing the newly proposed binocular treatment of amblyopia in the clinic, home, or elsewhere. American Optometric Association.

  1. Vision Health-Related Quality of Life in Chinese Glaucoma Patients

    Directory of Open Access Journals (Sweden)

    Lei Zuo

    2015-01-01

    Full Text Available This cross-sectional study evaluated VRQOL in Chinese glaucoma patients and the potential factors influencing VRQOL. The VRQOL was assessed using the Chinese-version low vision quality of life questionnaire. Visual field loss was classified by the Hodapp-Parrish-Anderson method. The correlations of VRQOL to the best corrected visual acuity and the VF loss were investigated. The potential impact factors to VRQOL of glaucoma patients were screened by single factor analysis and were further analyzed by multiple regression analysis. There were significant differences in VRQOL scores between mild VF loss group and moderate VF loss group, moderate VF loss group and severe VF loss group, and mild VF loss group and severe VF loss group according to the better eye. In multiple linear regression, the binocular weighted average BCVA significantly affected the VRQOL scores. Binocular MD was the second influencing factor. In logistic regression, binocular severe VF loss and stroke were significantly associated with abnormal VRQOL. Education was the next influencing factor. This study showed that visual acuity correlated linearly with VRQOL, and VF loss might reach a certain level, correlating with abnormal VRQOL scores. Stroke was significantly associated with abnormal VRQOL.

  2. Vision Assessment and Prescription of Low Vision Devices

    OpenAIRE

    Keeffe, Jill

    2004-01-01

    Assessment of vision and prescription of low vision devices are part of a comprehensive low vision service. Other components of the service include training the person affected by low vision in use of vision and other senses, mobility, activities of daily living, and support for education, employment or leisure activities. Specialist vision rehabilitation agencies have services to provide access to information (libraries) and activity centres for groups of people with impaired vision.

  3. Quantitative measurement of binocular color fusion limit for non-spectral colors.

    Science.gov (United States)

    Jung, Yong Ju; Sohn, Hosik; Lee, Seong-il; Ro, Yong Man; Park, Hyun Wook

    2011-04-11

    Human perception becomes difficult in the event of binocular color fusion when the color difference presented to the left and right eyes exceeds a certain threshold value, known as the binocular color fusion limit. This paper discusses the binocular color fusion limit for non-spectral colors within the color gamut of a conventional LCD 3DTV. We performed experiments to measure the color fusion limit for eight chromaticity points sampled from the CIE 1976 chromaticity diagram. A total of 2480 trials were recorded for a single observer. By analyzing the results, the color fusion limit was quantified by ellipses in the chromaticity diagram. The semi-minor axis of the ellipses ranges from 0.0415 to 0.0923 in terms of the Euclidean distance in the u′v′ chromaticity diagram, and the semi-major axis ranges from 0.0640 to 0.1560. These eight ellipses are drawn on the chromaticity diagram. © 2011 Optical Society of America

  4. Calculation method of CGH for Binocular Eyepiece-Type Electro Holography

    International Nuclear Information System (INIS)

    Yang, Chanyoung; Yoneyama, Takuo; Sakamoto, Yuji; Okuyama, Fumio

    2013-01-01

    We have previously studied eyepiece-type electro holography to display 3-D images of larger objects at a wider angle, and enlarged the visual field, taking the depth of the object into account, with a Fourier optical system using two lenses. In this paper, we extend the system to binocular viewing. In the binocular system, we use a different hologram for each eye: the 3-D image for the left eye should be observed as the real object would be seen by the left eye, and likewise for the right eye. We therefore propose a method of computer-generated hologram (CGH) calculation that transforms the coordinate system of the model data to produce the two holograms for binocular eyepiece-type electro holography. The coordinate system of the original model data is called the world coordinate system; the left and right coordinate systems are transformed from it. We also propose a method for correcting the installation error that occurs when placing the electronic and optical devices: the error is calculated, and the model data are corrected, using the distance between the measured and setup positions of the reconstructed image. Optical reconstruction experiments were carried out to verify the proposed method.

  5. FPGA Vision Data Architecture

    Science.gov (United States)

    Morfopoulos, Arin C.; Pham, Thang D.

    2013-01-01

    JPL has produced a series of FPGA (field programmable gate array) vision algorithms that were written with custom interfaces to get data in and out of each vision module. Each module has unique requirements on the data interface, and further vision modules are continually being developed, each with their own custom interfaces. Each memory module had also been designed for direct access to memory or to another memory module.

  6. Corneal Transplantation in Disease Affecting Only One Eye: Does It Make a Difference to Habitual Binocular Viewing?

    Directory of Open Access Journals (Sweden)

    Praveen K Bandela

    Full Text Available Clarity of the transplanted tissue and restoration of visual acuity are the two primary metrics for evaluating the success of corneal transplantation. Participation of the transplanted eye in habitual binocular viewing is seldom evaluated post-operatively. In unilateral corneal disease, the transplanted eye may remain functionally inactive during binocular viewing due to its suboptimal visual acuity and poor image quality, vis-à-vis the healthy fellow eye.This study prospectively quantified the contribution of the transplanted eye towards habitual binocular viewing in 25 cases with unilateral transplants [40 yrs (IQR: 32-42 yrs and 25 age-matched controls [30 yrs (25-37 yrs]. Binocular functions including visual field extent, high-contrast logMAR acuity, suppression threshold and stereoacuity were assessed using standard psychophysical paradigms. Optical quality of all eyes was determined from wavefront aberrometry measurements. Binocular visual field expanded by a median 21% (IQR: 18-29% compared to the monocular field of cases and controls (p = 0.63. Binocular logMAR acuity [0.0 (0.0-0.0] almost always followed the fellow eye's acuity [0.00 (0.00 --0.02] (r = 0.82, independent of the transplanted eye's acuity [0.34 (0.2-0.5] (r = 0.04. Suppression threshold and stereoacuity were poorer in cases [30.1% (13.5-44.3%; 620.8 arc sec (370.3-988.2 arc sec] than in controls [79% (63.5-100%; 16.3 arc sec (10.6-25.5 arc sec] (p<0.001. Higher-order wavefront aberrations of the transplanted eye [0.34 μ (0.21-0.51 μ] were higher than the fellow eye [0.07 μ (0.05-0.11 μ] (p<0.001 and their reduction with RGP contact lenses [0.09 μ (0.08-0.12 μ] significantly improved the suppression threshold [65% (50-72%] and stereoacuity [56.6 arc sec (47.7-181.6 arc sec] (p<0.001.In unilateral corneal disease, the transplanted eye does participate in gross binocular viewing but offers limited support to fine levels of binocularity. Improvement in the transplanted

  7. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  8. Field study of sound exposure by personal stereo

    DEFF Research Database (Denmark)

    Ordoñez, Rodrigo Pizarro; Reuter, Karen; Hammershøi, Dorte

    2006-01-01

    A number of large-scale studies suggest that the exposure levels used with personal stereo systems should raise concern. High levels can be produced by most commercially available mp3 players, and they are generally used in high background noise levels (i.e., while in a bus or train). A field study on young people's habitual sound exposure to personal stereos has been carried out using a measurement method according to the principles of ISO 11904-2:2004. Additionally, the state of their hearing has been assessed. This presentation deals with the methodological aspects relating to the quantification of habitual use, the estimation of listening levels and exposure levels, and the assessment of the state of hearing, by either threshold determination or OAE measurement, with a special view to the general validity of the results (uncertainty factors and their magnitude).
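
    For context on how listening levels are typically converted into daily exposure figures, the sketch below normalizes a measured equivalent level to an 8-hour exposure level using the standard relation L_EX,8h = L_Aeq,T + 10·log10(T/8 h); the study's own estimation procedure may differ.

```python
import math

def lex_8h(laeq_db, exposure_hours):
    """Normalize a measured equivalent listening level to an 8-hour
    noise exposure level: L_EX,8h = L_Aeq,T + 10*log10(T / 8 h).

    A standard occupational-noise normalization used for comparing daily
    exposures; the field study's exposure estimation is not reproduced here."""
    return laeq_db + 10.0 * math.log10(exposure_hours / 8.0)

# Example: 85 dB(A) listening level for 2 hours a day -> ~79 dB(A) LEX,8h.
print(round(lex_8h(85.0, 2.0), 1))
```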

  9. Critical factors in SEM 3D stereo microscopy

    DEFF Research Database (Denmark)

    Marinello, F.; Bariano, P.; Savio, E.

    2008-01-01

    This work addresses dimensional measurements performed with the scanning electron microscope (SEM) using 3D reconstruction of surface topography through stereo-photogrammetry. The paper presents both theoretical and experimental investigations of the effects of instrumental variables and measurement conditions. Two main critical factors are recognized: the first is related to the measurement operation and the instrument set-up; the second concerns the quality of the scanned images and represents the major criticality in the application of SEMs for 3D characterizations.
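
    In SEM stereo-photogrammetry, heights are commonly recovered from the parallax between two images taken at different eucentric tilt angles. The sketch below applies the standard relation h = p / (2·sin(α/2)); it illustrates the principle only and is not the uncertainty analysis carried out in the paper.

```python
import math

def height_from_parallax(parallax_px, pixel_size_um, tilt_angle_deg):
    """Estimate the height difference between two surface points from the
    lateral parallax measured in an SEM eucentric-tilt stereo pair.

    Uses the standard stereo-photogrammetry relation h = p / (2*sin(alpha/2)),
    where p is the parallax expressed in specimen-plane units and alpha is the
    total tilt between the two images. A generic sketch, not the paper's
    calibration or uncertainty model."""
    p = parallax_px * pixel_size_um          # parallax in micrometres
    alpha = math.radians(tilt_angle_deg)
    return p / (2.0 * math.sin(alpha / 2.0))

# Example: 12 px parallax, 0.05 um/px, images taken 6 degrees apart in total.
print(height_from_parallax(12, 0.05, 6.0))   # ~5.7 um
```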

  10. Track fitting in the opal vertex detector with stereo wires

    Energy Technology Data Exchange (ETDEWEB)

    Shally, R; Hemingway, R J; McPherson, A C

    1987-10-01

    The geometry of the vertex chamber for the OPAL detector at LEP is reviewed and expressions for the coordinates of the hits are given in terms of the measured drift distance and z-coordinate. The tracks are fitted by a procedure based on the Lagrange multipliers method. The increase in the accuracy of the fit due to the use of the stereo wires is discussed.

  11. A stereo-controlled route to conjugated E-enediynes

    Institute of Scientific and Technical Information of China (English)

    ZHOU Lei; JIANG Huanfeng

    2007-01-01

    3-Ene-1,5-diynes are important components of many enediyne antitumor agents and luminescent materials. A stereo-controlled approach to the synthesis of E-enediynes was developed, and it consists of the following two steps: (1) a mild and economical synthesis of dihalo vinyl derivatives via addition of CuBr2 to alkynes; (2) the Sonogashira coupling reaction of the dihalo vinyl derivatives with terminal alkynes to form conjugated enediynes.

  12. Cellular neural networks for the stereo matching problem

    International Nuclear Information System (INIS)

    Taraglio, S.; Zanela, A.

    1997-03-01

    The applicability of the Cellular Neural Network (CNN) paradigm to the problem of recovering information on the three-dimensional structure of the environment is investigated. The approach proposed is the stereo matching of video images. The starting point of this work is the Zhou-Chellappa neural network implementation for the same problem. The CNN-based system presented here yields the same results as the previous approach, but without many of the drawbacks of the earlier implementation.
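
    To make the underlying problem concrete, the sketch below computes a dense disparity map for a rectified stereo pair with plain sum-of-absolute-differences block matching; it is a generic baseline for the stereo matching problem, not the cellular-neural-network formulation investigated in the record.

```python
import numpy as np

def disparity_sad(left, right, max_disp=32, window=5):
    """Dense disparity map for a rectified grayscale stereo pair using
    sum-of-absolute-differences (SAD) block matching.

    Generic baseline illustration of stereo matching; not the CNN approach."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            # Search candidate disparities along the same scanline.
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch.astype(np.int32) - cand.astype(np.int32)).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```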

  13. Track fitting in the opal vertex detector with stereo wires

    International Nuclear Information System (INIS)

    Shally, R.; Hemingway, R.J.; McPherson, A.C.

    1987-01-01

    The geometry of the vertex chamber for the OPAL detector at LEP is reviewed and expressions for the coordinates of the hits are given in terms of the measured drift distance and z-coordinate. The tracks are fitted by a procedure based on the Lagrange multipliers method. The increase in the accuracy of the fit due to the use of the stereo wires is discussed. (orig.)

  14. A flexible calibration method for laser displacement sensors based on a stereo-target

    International Nuclear Information System (INIS)

    Zhang, Jie; Sun, Junhua; Liu, Zhen; Zhang, Guangjun

    2014-01-01

    Laser displacement sensors (LDSs) are widely used in online measurement owing to their non-contact operation, high measurement speed, and other advantages. However, existing calibration methods for LDSs based on the traditional triangulation measurement model are time-consuming and tedious to operate. In this paper, a calibration method for LDSs based on a vision measurement model of the LDS is presented. According to the constraint relationships of the model parameters, the calibration is implemented by freely moving a stereo-target at least twice in the field of view of the LDS. Both simulation analyses and real experiments were conducted. Experimental results demonstrate that the calibration method achieves an accuracy of 0.044 mm within a measurement range of about 150 mm. Compared with traditional calibration methods, the proposed method places no special limitation on the relative position of the LDS and the target. The linearity approximation of the measurement model is not needed in the calibration, so the measurement range is not restricted to the linear range. The calibration of the LDS is easy and quick to implement, and the method can be applied in a wider range of fields. (paper)
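
    For readers unfamiliar with the triangulation principle the record refers to, the sketch below computes depth for an idealized parallel-axis triangulation sensor from the laser spot's position on the image sensor; it is a textbook illustration, not the vision measurement model proposed in the paper.

```python
def triangulation_depth(spot_px, pixel_size_mm, focal_mm, baseline_mm):
    """Depth of a laser spot for an idealized parallel-axis triangulation
    sensor: the laser beam is parallel to the camera's optical axis and
    offset by a baseline b, so the spot's image offset u satisfies
    u = f * b / z, hence z = f * b / u.

    A textbook illustration of the triangulation principle, not the
    paper's measurement model; all parameter values below are examples."""
    u = spot_px * pixel_size_mm              # spot offset on the sensor (mm)
    if u <= 0:
        raise ValueError("spot must be displaced from the principal point")
    return focal_mm * baseline_mm / u

# Example: 25 mm lens, 50 mm baseline, spot imaged 200 px (at 5 um/px) off-axis.
print(triangulation_depth(200, 0.005, 25.0, 50.0))   # 1250 mm
```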

  15. Dust Devil in Spirit's View Ahead on Sol 1854 (Stereo)

    Science.gov (United States)

    2009-01-01

    Left-eye and right-eye views of the color stereo pair for PIA11960 (figures not reproduced here; see the original site). NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,854th Martian day, or sol, of Spirit's surface mission (March 21, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 13.79 meters (45 feet) westward earlier on Sol 1854. West is at the center, where a dust devil is visible in the distance. North is on the right, where Husband Hill dominates the horizon; Spirit was on top of Husband Hill in September and October 2005. South is on the left, where lighter-toned rock lines the edge of the low plateau called 'Home Plate.' This view is presented as a cylindrical-perspective projection with geometric seam correction.

  16. View Ahead After Spirit's Sol 1861 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    Left-eye and right-eye views of the color stereo pair for PIA11977 (figures not reproduced here; see the original site). NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this stereo, 210-degree view of the rover's surroundings during the 1,861st to 1,863rd Martian days, or sols, of Spirit's surface mission (March 28 to 30, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the scene is toward the south-southwest. East is on the left. West-northwest is on the right. The rover had driven 22.7 meters (74 feet) southwestward on Sol 1861 before beginning to take the frames in this view. The drive brought Spirit past the northwestern corner of Home Plate. In this view, the western edge of Home Plate is on the portion of the horizon farthest to the left. A mound in middle distance near the center of the view is called 'Tsiolkovsky' and is about 40 meters (about 130 feet) from the rover's position. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  17. Time for a Change; Spirit's View on Sol 1843 (Stereo)

    Science.gov (United States)

    2009-01-01

    Left-eye and right-eye views of the color stereo pair for PIA11973 (figures not reproduced here; see the original site). NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, full-circle view of the rover's surroundings during the 1,843rd Martian day, or sol, of Spirit's surface mission (March 10, 2009). South is in the middle. North is at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 36 centimeters downhill earlier on Sol 1843, but had not been able to get free of ruts in soft material that had become an obstacle to getting around the northeastern corner of the low plateau called 'Home Plate.' The Sol 1843 drive, following two others in the preceding four sols that also achieved little progress in the soft ground, prompted the rover team to switch to a plan of getting around Home Plate counterclockwise instead of clockwise. The drive direction in subsequent sols was westward past the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  18. Parallel Computer System for 3D Visualization Stereo on GPU

    Science.gov (United States)

    Al-Oraiqat, Anas M.; Zori, Sergii A.

    2018-03-01

    This paper proposes the organization of a parallel computer system based on Graphics Processing Units (GPUs) for 3D stereo image synthesis. The development is based on the modified ray tracing method developed by the authors for fast search of tracing-ray intersections with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. A generalized procedure for 3D stereo image synthesis on the Graphics Processing Unit/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed GPU implementation is compared with single-threaded and multithreaded implementations on the CPU. The average acceleration achieved over the single-threaded and multithreaded CPU implementations on the test hardware is about 7.5 and 1.6 times, respectively. Studying the influence of the size and configuration of the computational Compute Unified Device Architecture (CUDA) network on the computational speed shows the importance of their correct selection. The reported experimental estimates can be significantly improved by newer GPUs with a larger number of processing cores and multiprocessors, as well as an optimized configuration of the computing CUDA network.
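
    As a minimal illustration of how a ray tracer produces a stereo pair, the sketch below offsets the primary-ray origins for each eye by half the eye separation along the camera's x axis; it shows only the common stereo camera setup and not the authors' modified ray tracing method or GPU/CUDA organization.

```python
import numpy as np

def stereo_eye_rays(pixel_dir_cam, eye_sep=0.065, eye="left"):
    """Primary-ray origins and directions for one eye of a parallel stereo
    camera pair: each eye's camera is the cyclopean camera shifted by half
    the eye separation along the camera x axis.

    A minimal sketch of common stereo ray generation; eye_sep is an assumed
    parameter, and no acceleration structure is involved."""
    offset = np.array([-eye_sep / 2.0, 0.0, 0.0]) if eye == "left" \
        else np.array([eye_sep / 2.0, 0.0, 0.0])
    origins = np.broadcast_to(offset, pixel_dir_cam.shape)
    directions = pixel_dir_cam / np.linalg.norm(pixel_dir_cam, axis=-1, keepdims=True)
    return origins, directions

# Example: per-pixel ray directions for a 2x2 image patch in camera space (z forward).
dirs = np.array([[[0.1, 0.1, 1.0], [-0.1, 0.1, 1.0]],
                 [[0.1, -0.1, 1.0], [-0.1, -0.1, 1.0]]])
left_o, left_d = stereo_eye_rays(dirs, eye="left")
right_o, right_d = stereo_eye_rays(dirs, eye="right")
```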

  19. Three-dimensional image reconstruction from stereo DSA

    International Nuclear Information System (INIS)

    Sakamoto, Kiyoshi; Kotoura, Noriko; Umehara, Takayoshi; Yamada, Eiji; Inaba, Tomohiro; Itou, Hiroshi

    1999-01-01

    The technique of interventional radiology has spread rapidly in recent years, and three-dimensional information from blood vessel images is being sought to enhance examinations. Stereo digital subtraction angiography (DSA) and rotational DSA were developed for that purpose. However, it is difficult with stereo DSA to observe the image pair during the examination and to obtain positional information on blood vessels. Furthermore, the exposure dose is increased in rotational DSA when many mask images need to be collected, and the patient is required to hold his or her breath for a long duration. We therefore devised a technique to construct three-dimensional blood vessel images by employing geometric information extracted from the right and left images of stereo DSA pairs. To determine the three-dimensional coordinates of a blood vessel, the same vessel had to be identified in the right and left images; for this we used a judgment method based on the correlation coefficient. The reconstructed three-dimensional blood vessels were projected from various angles, again by using a virtual focus, and new images were created. These image groups were displayed as rotational images by the animation display function incorporated in the DSA device. This system can observe blood vessel images of the same phase at any angle, although the image quality is inferior to that of rotational DSA. In addition, because the collection of mask images is reduced, the exposure dose can be decreased. Further, the system offers enhanced safety because no mechanical movement of the imaging system is involved. (author)
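
    Once a vessel point has been matched in the left and right images, its three-dimensional coordinates follow from stereo triangulation. The sketch below uses generic linear (DLT) triangulation with known 3x4 projection matrices; it illustrates the geometric step only and is not the correlation-coefficient matching or virtual-focus projection described in the record.

```python
import numpy as np

def triangulate_point(P_left, P_right, x_left, x_right):
    """Recover a 3D point from a matched pair of image points via linear
    (DLT) triangulation, given 3x4 projection matrices for both views.

    A generic stereo triangulation sketch; the matching step itself
    (correlation-based in the record) is not shown here."""
    def rows(P, xy):
        u, v = xy
        # Each image point contributes two linear constraints on the 3D point.
        return np.array([u * P[2] - P[0], v * P[2] - P[1]])
    A = np.vstack([rows(P_left, x_left), rows(P_right, x_right)])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean coordinates
```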

  20. Does functional vision behave differently in low-vision patients with diabetic retinopathy?--A case-matched study.

    Science.gov (United States)

    Ahmadian, Lohrasb; Massof, Robert

    2008-09-01

    A retrospective case-matched study designed to compare patients with diabetic retinopathy (DR) with patients with other ocular diseases managed in a low-vision clinic across four domains of functional vision. Reading, mobility, visual motor, and visual information processing abilities were measured in the DR patients (n = 114) and compared with those in patients with other ocular diseases (n = 114) matched for sex, visual acuity (VA), general health status, and age, using the Activity Inventory as a Rasch-scaled measurement tool. Binocular distance visual acuity was categorized as normal (20/12.5-20/25), near normal (20/32-20/63), moderate (20/80-20/160), severe (20/200-20/400), profound (20/500-20/1000), and total blindness (20/1250 to no light perception). Both the Wilcoxon matched-pairs signed-rank test and the sign test for matched pairs were used to compare estimated functional vision measures between DR cases and controls. Cases ranged in age from 19 to 90 years (mean age, 67.5), and 59% were women. The mean visual acuity (logMAR scale) was 0.7. Based on the Wilcoxon signed-rank test analyses, and after adjusting the probability for multiple comparisons, there was no statistically significant difference (P > 0.05) between patients with DR and control subjects in any of the four functional vision domains. Furthermore, diabetic retinopathy patients did not differ (P > 0.05) from their matched counterparts in goal-level vision-related functional ability or total visual ability. Visual impairment in patients with DR appears to be a generic, non-disease-specific outcome that is explained mainly by the end impact of the disease on the patients' daily lives rather than by the particular disease process that causes the visual impairment.