WorldWideScience

Sample records for binocular stereo vision

  1. Study on flexible calibration method for binocular stereo vision system

    Science.gov (United States)

    Wang, Peng; Sun, Huashu; Sun, Changku

    2008-12-01

    When a binocular stereo vision system is used for 3D coordinate measurement, system calibration is an important factor for measurement precision. In this paper we present a flexible calibration method for a binocular stereo system that estimates the intrinsic and extrinsic parameters of each camera and the exterior orientation of the axis of a turntable installed in front of the binocular stereo vision system to increase the measurement range. Using a new flexible planar pattern with four large circles and an array of small circles as reference points, binocular stereo calibration is performed with Zhang's plane-based calibration method without specialized knowledge of 3D geometry. A standard ball is placed in front of the binocular stereo vision system, and image sequences are captured simultaneously by both cameras at several rotation angles of the turntable. Using space intersection of two straight lines, the reference points for axis calibration, namely the ball centers at each turntable rotation angle, are computed. Because of the rotation of the turntable, the trace of the ball center is a circle whose center lies on the turntable's axis, and all rotated ball centers lie in a plane perpendicular to the axis. The exterior orientation of the turntable axis is then calibrated according to this model. A measurement on a column bearing is performed in the experiment, with a final measurement precision better than 0.02 mm.
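
    As a rough illustration of the axis-calibration geometry described above, the sketch below fits a plane and a circle to the reconstructed ball centers and returns a point on the turntable axis together with its direction. The function name and the least-squares (Kasa) circle fit are my own choices, not the paper's code.

```python
import numpy as np

def turntable_axis_from_ball_centers(centers):
    """Estimate the turntable axis from 3D ball-center positions measured
    at several turntable angles (hypothetical helper, not the paper's code).

    centers : (N, 3) array of ball centers in the stereo-rig coordinate frame.
    Returns (point_on_axis, axis_direction).
    """
    centers = np.asarray(centers, dtype=float)
    centroid = centers.mean(axis=0)

    # Fit the plane containing the circular trace: its normal is the axis direction.
    _, _, vt = np.linalg.svd(centers - centroid)
    normal = vt[-1]                      # smallest singular vector = plane normal

    # Project the centers onto the plane and fit a 2D circle (Kasa fit).
    u, v = vt[0], vt[1]                  # orthonormal in-plane basis
    p2d = np.column_stack(((centers - centroid) @ u, (centers - centroid) @ v))
    A = np.column_stack((2 * p2d, np.ones(len(p2d))))
    b = (p2d ** 2).sum(axis=1)
    cx, cy, _ = np.linalg.lstsq(A, b, rcond=None)[0]

    # Circle centre back in 3D: a point on the rotation axis.
    point_on_axis = centroid + cx * u + cy * v
    return point_on_axis, normal
```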

  2. Railway clearance intrusion detection method with binocular stereo vision

    Science.gov (United States)

    Zhou, Xingfang; Guo, Baoqing; Wei, Wei

    2018-03-01

    During railway construction and operation, objects intruding into the railway clearance greatly threaten the safety of railway operation, so real-time intrusion detection is of great importance. To address the depth insensitivity and shadow interference of single-image methods, an intrusion detection method based on binocular stereo vision is proposed that reconstructs the 3D scene to locate objects and judge clearance intrusion. The binocular cameras are calibrated with Zhang Zhengyou's method. In order to improve the 3D reconstruction speed, a suspicious region is first determined by a background-difference method applied to a single camera's image sequence; image rectification, stereo matching and 3D reconstruction are executed only when a suspicious region exists. A transformation matrix from the Camera Coordinate System (CCS) to the Track Coordinate System (TCS) is computed using the gauge constant and used to transfer the 3D point clouds into the TCS, where they are used to calculate the object position and intrusion. Experiments in a railway scene show that the position precision is better than 10 mm. The method is an effective way to detect clearance intrusion and can satisfy the requirements of railway applications.
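
    A minimal sketch of the coordinate transfer and clearance test described above, assuming a known 4x4 CCS-to-TCS matrix; the box-shaped clearance envelope, its dimensions and the axis convention are placeholders, not the railway gauge actually used.

```python
import numpy as np

def camera_to_track(points_ccs, T_ccs_to_tcs):
    """Transform an (N, 3) point cloud from camera coordinates (CCS)
    to track coordinates (TCS) with a 4x4 homogeneous matrix."""
    pts_h = np.hstack([points_ccs, np.ones((len(points_ccs), 1))])
    return (pts_h @ T_ccs_to_tcs.T)[:, :3]

def intrudes_clearance(points_tcs, half_width=1.7, height=5.0):
    """Placeholder clearance test: flag points inside a simple box-shaped
    envelope around the track (real railway gauges are more complex).
    Assumes x = lateral offset from track centre, z = height above rail."""
    x, _, z = points_tcs[:, 0], points_tcs[:, 1], points_tcs[:, 2]
    return (np.abs(x) < half_width) & (0.0 < z) & (z < height)
```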

  3. Optimization of Stereo Matching in 3D Reconstruction Based on Binocular Vision

    Science.gov (United States)

    Gai, Qiyang

    2018-01-01

    Stereo matching is one of the key steps in 3D reconstruction based on binocular vision. In order to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision, this paper combines the epipolar constraint with an ant colony algorithm. The epipolar line constraint is used to reduce the search range, and an ant colony algorithm then optimizes the stereo matching feature search function within that reduced range. Through an analysis model of the ant colony stereo matching optimization process, a globally optimized solution of stereo matching for binocular 3D reconstruction is obtained. Simulation results show that, by combining the advantages of the epipolar constraint and the ant colony algorithm, the stereo matching search range is simplified and the convergence speed and accuracy of the matching process are improved.
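
    The following sketch shows only the search-range reduction step: for rectified images the epipolar constraint confines candidate matches to the same image row within a maximum disparity. The ant colony optimization over these candidates is not shown, and all names, window sizes and disparity limits are illustrative.

```python
import numpy as np

def epipolar_candidates(left, right, x, y, max_disp=64, win=5):
    """For a feature at (x, y) in the rectified left image, return SAD costs
    of the candidates on the same row of the right image within max_disp.
    A downstream optimizer (e.g. an ant colony search) would pick among these."""
    h = win // 2
    patch_l = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.float32)
    costs = []
    for d in range(max_disp + 1):
        xr = x - d
        if xr - h < 0:
            break
        patch_r = right[y - h:y + h + 1, xr - h:xr + h + 1].astype(np.float32)
        costs.append((d, float(np.abs(patch_l - patch_r).sum())))
    return costs
```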

  4. Bubble behavior characteristics based on virtual binocular stereo vision

    Science.gov (United States)

    Xue, Ting; Xu, Ling-shuang; Zhang, Shang-zhen

    2018-01-01

    The three-dimensional (3D) behavior characteristics of bubbles rising in gas-liquid two-phase flow are of great importance for studying bubbly flow mechanisms and guiding engineering practice. Based on dual-perspective imaging with virtual binocular stereo vision, the 3D behavior characteristics of bubbles in gas-liquid two-phase flow are studied in detail, which effectively increases the projection information of the bubbles and yields more accurate behavior features. In this paper, the variations of bubble equivalent diameter, volume, velocity and trajectory during the rising process are estimated, and the factors affecting bubble behavior characteristics are analyzed. It is shown that the method is real-time and valid, that the equivalent diameter of a bubble rising in stagnant water changes periodically, and that the crests and troughs in the equivalent-diameter curve appear alternately. The bubble behavior characteristics, as well as the spiral amplitude, are affected by the orifice diameter and the gas volume flow.
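
    The equivalent diameter mentioned above is conventionally the diameter of the sphere with the same volume as the reconstructed bubble; a one-line helper is given below, assuming that is the definition used.

```python
import numpy as np

def equivalent_diameter(volume):
    """Diameter of the sphere whose volume equals the reconstructed bubble
    volume: d_eq = (6 V / pi) ** (1/3)."""
    return (6.0 * volume / np.pi) ** (1.0 / 3.0)

# Example: a reconstructed bubble volume of 65 mm^3 gives d_eq of about 5.0 mm.
```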

  5. Research situation and development trend of the binocular stereo vision system

    Science.gov (United States)

    Wang, Tonghao; Liu, Bingqi; Wang, Ying; Chen, Yichao

    2017-05-01

    Since the beginning of the 21st century, with the development of computer and signal processing technology, a new interdisciplinary field called computer vision has emerged. Computer vision draws on a wide range of knowledge, including physics, mathematics, biology, computer technology and other subjects. It has become more and more powerful: it can not only reproduce the "seeing" function of the human eye, but also accomplish tasks that human eyes cannot. In recent years, binocular stereo vision, a main branch of computer vision, has become a focus of research in the field. In this paper, the binocular stereo vision system and the present state of its development and application at home and abroad are summarized. The current problems of binocular stereo vision systems are discussed and the authors' own opinions are given, together with a prospective view of the future application and development of this technology.

  6. An assembly system based on industrial robot with binocular stereo vision

    Science.gov (United States)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. First, binocular stereo vision with a visual attention mechanism model is used to quickly obtain the image regions that contain the electronic parts and components. Second, a deep neural network is adopted to recognize the features of the electronic parts and components. Third, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.

  7. The wavelet transform and the suppression theory of binocular vision for stereo image compression

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, W.D. Jr [Argonne National Lab., IL (United States); Kenyon, R.V. [Illinois Univ., Chicago, IL (United States)

    1996-08-01

    In this paper a method for the compression of stereo images is presented. The proposed scheme is a frequency-domain approach based on the suppression theory of binocular vision. By using information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, whose subbands convey the necessary frequency-domain information.
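
    A minimal sketch of the multiresolution analysis the scheme relies on, using PyWavelets; the suppression-theory bit allocation between the two views (e.g. coding one view's subbands more coarsely) is not shown, and the wavelet choice and level count are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def stereo_subbands(left, right, wavelet="db4", levels=3):
    """Multiresolution decomposition of both views of a stereo pair.
    Returns the wavelet coefficient pyramids that a suppression-theory
    coder could quantize asymmetrically between the two views."""
    coeffs_l = pywt.wavedec2(left.astype(np.float32), wavelet, level=levels)
    coeffs_r = pywt.wavedec2(right.astype(np.float32), wavelet, level=levels)
    return coeffs_l, coeffs_r
```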

  8. A Real-Time Range Finding System with Binocular Stereo Vision

    Directory of Open Access Journals (Sweden)

    Xiao-Bo Lai

    2012-05-01

    Full Text Available To acquire range information for mobile robots, a TMS320DM642 DSP-based range finding system with binocular stereo vision is proposed. Firstly, paired images of the target are captured and a Gaussian filter, as well as improved Sobel kernels, is applied. Secondly, a feature-based local stereo matching algorithm is performed so that the spatial location of the target can be determined. Finally, in order to improve the reliability and robustness of the stereo matching algorithm under complex conditions, a confidence filter and a left-right consistency filter are investigated to eliminate mismatched points. In addition, the range finding algorithm is implemented in the DSP/BIOS operating system to achieve real-time control. Experimental results show that the average range finding accuracy exceeds 99% for single-point distances of 120 cm in a simple scenario, and that the algorithm takes about 39 ms per ranging operation in a complex scenario. The effectiveness and feasibility of the proposed range finding system are verified.
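
    A sketch of the left-right consistency filter mentioned above, assuming integer disparity maps computed independently for each view; the tolerance and the sentinel value for rejected pixels are arbitrary choices.

```python
import numpy as np

def left_right_consistency(disp_l, disp_r, tol=1):
    """Reject left-image disparities not confirmed by the right-image map:
    keep (x, y) only if |d_L(x, y) - d_R(x - d_L(x, y), y)| <= tol."""
    h, w = disp_l.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xr = np.clip(xs - disp_l.astype(int), 0, w - 1)
    consistent = np.abs(disp_l - disp_r[ys, xr]) <= tol
    return np.where(consistent, disp_l, -1)   # -1 marks rejected pixels
```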

  9. Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision.

    Science.gov (United States)

    Tu, Junchao; Zhang, Liyan

    2018-01-12

    A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in closed form by using an extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared to traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it requires much less training time and provides a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. A calibration experiment, a projection experiment and a 3D reconstruction experiment are conducted to test the proposed method, and good results are obtained.
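
    A compact illustration of the ELM step described above: the hidden-layer weights of the SLFN are random and fixed, and only the output weights are solved in closed form by a pseudo-inverse. Layer sizes, the activation function and the helper names are illustrative, not taken from the paper.

```python
import numpy as np

def train_elm(X, T, n_hidden=200, seed=0):
    """Extreme learning machine for an SLFN: random, fixed input weights and
    closed-form (least-squares) output weights.
    X : (N, d_in) control signals, T : (N, d_out) target laser-beam vectors."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                   # hidden-layer outputs
    beta = np.linalg.pinv(H) @ T             # output weights in closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```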

  10. Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision

    Directory of Open Access Journals (Sweden)

    Junchao Tu

    2018-01-01

    Full Text Available A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in closed form by using an extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared to traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it requires much less training time and provides a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. A calibration experiment, a projection experiment and a 3D reconstruction experiment are conducted to test the proposed method, and good results are obtained.

  11. A Technique for Binocular Stereo Vision System Calibration by the Nonlinear Optimization and Calibration Points with Accurate Coordinates

    International Nuclear Information System (INIS)

    Chen, H; Ye, D; Che, R S; Chen, G

    2006-01-01

    With the increasing need for higher-accuracy measurement in computer vision, the precision of camera calibration becomes a more important factor. The objective of stereo camera calibration is to estimate the intrinsic and extrinsic parameters of each camera. We present a high-accuracy technique for calibrating a binocular stereo vision system after it has been mounted in its working location and attitude, realized by combining a nonlinear optimization method with calibration points of accurately known coordinates. The calibration points were generated by an infrared LED moved by a three-dimensional coordinate measuring machine, which ensures a measurement uncertainty of 1/30000. By using a bilinear-interpolation square-gray weighted centroid location algorithm, the imaging centers of the calibration points can be accurately determined. The accuracy of the calibration is measured in terms of the accuracy of reconstructing the calibration points through triangulation; the mean distance between the reconstructed points and the given calibration points is 0.039 mm. The technique can satisfy the goals of measurement and accurate camera calibration.
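
    The reconstruction-through-triangulation check can be sketched with the textbook linear (DLT) triangulation below, given the two calibrated projection matrices; this is a standard formulation and not necessarily the authors' exact solver.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one calibration point.
    P1, P2 : 3x4 camera projection matrices; x1, x2 : (u, v) image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                      # Euclidean 3D point

# Per-point reconstruction error: np.linalg.norm(triangulate(P1, P2, x1, x2) - X_given)
```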

  12. Obstacle Detection using Binocular Stereo Vision in Trajectory Planning for Quadcopter Navigation

    Science.gov (United States)

    Bugayong, Albert; Ramos, Manuel, Jr.

    2018-02-01

    Quadcopters are among the most versatile unmanned aerial vehicles due to their vertical take-off and landing as well as hovering capabilities. This research uses the Sum of Absolute Differences (SAD) block matching algorithm for stereo vision. A complementary filter was used in sensor fusion to combine quadcopter orientation data obtained from the accelerometer and the gyroscope. PID control was implemented for motor control and the VFH+ algorithm was implemented for trajectory planning. Results show that the quadcopter was able to consistently actuate itself in the roll, yaw and z axes during obstacle avoidance, but was found to be inconsistent in the pitch axis during forward and backward maneuvers due to the significant noise present in the pitch-axis angle outputs compared to the roll and yaw axes.
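
    A compact dense SAD block-matching sketch in the spirit of the algorithm named above; the window size, disparity range and use of scipy's uniform filter are my choices, and a production system would more likely use an optimized implementation such as OpenCV's StereoBM.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=48, win=9):
    """Dense disparity map by Sum of Absolute Differences block matching,
    assuming rectified grayscale images of equal size."""
    h, w = left.shape
    L, R = left.astype(np.float32), right.astype(np.float32)
    cost = np.full((h, w, max_disp + 1), np.inf, dtype=np.float32)
    for d in range(max_disp + 1):
        diff = np.abs(L[:, d:] - R[:, :w - d])
        # mean absolute difference over the window is proportional to the SAD
        cost[:, d:, d] = uniform_filter(diff, size=win)
    return cost.argmin(axis=2)               # disparity with minimum cost
```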

  13. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    Science.gov (United States)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find the correspondences of points representing dominant features in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges oriented along the vertical, horizontal, and two diagonal directions; it incorrectly detected points on edges that do not lie along these four main directions. The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on edges to exclude simple edges and keep interesting points. This modification speeds up the extraction process by approximately 5 times. The Gradient Angle Variance (GAV), an operator which calculates the variance of the gradient angle in a window around the point under consideration, is then applied to the interesting points to exclude redundant ones and keep the actual dominant ones. The matching phase is performed after the extraction of dominant points in both stereo images. The matching starts with dominant points in the left image and performs a local search for corresponding dominant points in the right image. The search is geometrically constrained by the epipolar line of the parallel-axes stereo geometry and the maximum disparity of the application environment. If exactly one dominant point in the right image lies in the search area, it is taken as the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area. The correlation is used as
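
    One plausible reading of the GAV operator described above is sketched below: the spread of gradient directions inside a window around the candidate point. Using Prewitt gradients and a circular variance is my interpretation; the paper's exact definition may differ.

```python
import numpy as np
from scipy.ndimage import prewitt

def gradient_angle_variance(image, y, x, win=7):
    """Spread of the gradient direction in a window around (y, x).
    An interpretation of the GAV operator; the original definition
    (plain vs. circular variance) is assumed here."""
    img = image.astype(np.float32)
    gy = prewitt(img, axis=0)
    gx = prewitt(img, axis=1)
    h = win // 2
    ang = np.arctan2(gy[y - h:y + h + 1, x - h:x + h + 1],
                     gx[y - h:y + h + 1, x - h:x + h + 1])
    # circular variance: 1 - mean resultant length, in [0, 1]
    return 1.0 - np.hypot(np.cos(ang).mean(), np.sin(ang).mean())
```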

  14. Amblyopia and Binocular Vision

    Science.gov (United States)

    Birch, Eileen E.

    2012-01-01

    Amblyopia is the most common cause of monocular visual loss in children, affecting 1.3% to 3.6% of children. Current treatments are effective in reducing the visual acuity deficit, but many amblyopic individuals are left with residual visual acuity deficits, ocular motor abnormalities, deficient fine motor skills, and risk for recurrent amblyopia. Using a combination of psychophysical, electrophysiological, imaging, risk-factor, and fine motor skill assessments, we have identified the primary role of binocular dysfunction in the genesis of amblyopia and of the constellation of visual and motor deficits that accompany the visual acuity deficit. These findings motivated us to evaluate a new, binocular approach to amblyopia treatment with the goals of reducing or eliminating residual and recurrent amblyopia and of improving the deficient ocular motor function and fine motor skills that accompany amblyopia. PMID:23201436

  15. Assessing the binocular advantage in aided vision.

    Science.gov (United States)

    Harrington, Lawrence K; McIntire, John P; Hopper, Darrel G

    2014-09-01

    Advances in microsensors, microprocessors, and microdisplays are creating new opportunities for improving vision in degraded environments through the use of head-mounted displays. Initially, the cutting-edge technology used in these new displays will be expensive. Inevitably, the cost of providing the additional sensor and processing required to support binocularity brings the value of binocularity into question. Several assessments comparing binocular, biocular, and monocular head-mounted displays for aided vision have concluded that the additional performance, if any, provided by binocular head-mounted displays does not justify the cost. The selection of a biocular [corrected] display for use in the F-35 is a current example of this recurring decision process. It is possible that the human binocularity advantage does not carry over to the aided vision application, but more likely the experimental approaches used in the past have been too coarse to measure its subtle but important benefits. Evaluating the value of binocularity in aided vision applications requires an understanding of the characteristics of both human vision and head-mounted displays. With this understanding, the value of binocularity in aided vision can be estimated and experimental evidence can be collected to confirm or reject the presumed binocular advantage, enabling improved decisions in aided vision system design. This paper describes four computational approaches that may be useful in quantifying the advantage of binocularity in aided vision: geometry of stereopsis, modulation transfer function area for stereopsis, probability summation, and binocular summation.
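
    Two of the four approaches named above have simple textbook forms, shown below as a hedged illustration; these are the classical formulas, not necessarily the models developed in the paper.

```python
import numpy as np

def probability_summation(p_left, p_right):
    """Probability that at least one eye/channel detects the target,
    assuming independent monocular detections."""
    return 1.0 - (1.0 - p_left) * (1.0 - p_right)

def quadratic_binocular_summation(s_left, s_right):
    """Classical quadratic-summation estimate of binocular contrast
    sensitivity from the two monocular sensitivities."""
    return np.hypot(s_left, s_right)

# e.g. two eyes each detecting with p = 0.6 combine to p = 0.84, and equal
# monocular sensitivities gain a factor of sqrt(2) (about 1.41) binocularly.
```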

  16. The research of binocular vision ranging system based on LabVIEW

    Science.gov (United States)

    Li, Shikuan; Yang, Xu

    2017-10-01

    Based on the principle of binocular parallax ranging, a binocular vision ranging system is designed and built. The stereo matching algorithm is implemented in LabVIEW software, and camera calibration and distance measurement are completed. Error analysis shows that the system is fast and effective and can be used in corresponding industrial settings.
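
    For a rectified camera pair, the binocular parallax ranging principle behind such a system reduces to Z = f·B/d; a minimal helper is shown below (the LabVIEW implementation itself is not reproduced, and the example numbers are illustrative).

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Range from binocular parallax for a rectified camera pair:
    Z = f * B / d  (f and d in pixels, B and Z in metres)."""
    return focal_px * baseline_m / disparity_px

# Example: f = 1200 px, B = 0.12 m, d = 96 px  ->  Z = 1.5 m.
```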

  17. Colour-grapheme synaesthesia affects binocular vision

    Directory of Open Access Journals (Sweden)

    Chris L.E. Paffen

    2011-11-01

    Full Text Available In colour-grapheme synaesthesia, non-coloured graphemes are perceived as being inherently coloured. In recent years, it has become evident that synaesthesia-inducing graphemes can affect visual processing in a manner comparable to real, physical colours. Here, we exploit the phenomenon of binocular rivalry in which incompatible images presented dichoptically compete for conscious expression. Importantly, the competition only arises if the two images are sufficiently different; if the difference between the images is small, the images will fuse into a single mixed percept. We show that achromatic graphemes that induce synaesthetic colour percepts evoke binocular rivalry, while without the synaesthetic percept, they do not. That is, compared to achromatically perceived graphemes, synaesthesia-inducing graphemes increase the predominance of binocular rivalry over binocular fusion. This finding shows that the synaesthetic colour experience can provide the conditions for evoking binocular rivalry, much like stimulus features that induce rivalry in normal vision.

  18. Obstacle detection by stereo vision of fast correlation matching

    International Nuclear Information System (INIS)

    Jeon, Seung Hoon; Kim, Byung Kook

    1997-01-01

    Mobile robot navigation requires acquiring the positions of obstacles in real time. A common method for this sensing is stereo vision. In this paper, indoor images containing various shapes of obstacles are acquired by binocular vision. To obtain distances to obstacles from these stereo image data, we must solve the correspondence problem, i.e., find the region in the other image corresponding to the projection of the same surface region. We present an improved correlation matching method that enhances the speed of arbitrary obstacle detection. The result is faster and simpler matching, robustness to noise, and improved precision. Experimental results under actual surroundings are presented to demonstrate the performance. (author)
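
    A sketch of window correlation matching along the epipolar line using zero-mean normalized cross-correlation; this illustrates the general correlation-matching idea rather than the paper's specific speed improvements, and the window size and disparity limit are placeholders.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
    return float((a * b).sum() / denom)

def correlation_match(left, right, x, y, max_disp=64, win=11):
    """Best disparity for the left-image point (x, y) by maximizing NCC
    along the corresponding row of the rectified right image."""
    h = win // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.float32)
    best_d, best_score = -1, -1.0
    for d in range(max_disp + 1):
        xr = x - d
        if xr - h < 0:
            break
        cand = right[y - h:y + h + 1, xr - h:xr + h + 1].astype(np.float32)
        score = ncc(ref, cand)
        if score > best_score:
            best_d, best_score = d, score
    return best_d, best_score
```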

  19. Binocular Vision and the Stroop Test.

    Science.gov (United States)

    Daniel, François; Kapoula, Zoï

    2016-02-01

    Recent studies report a link between optometric results, learning disabilities, and problems in reading. This study examines the correlations between optometric tests of binocular vision, namely, of vergence and accommodation, reading speed, and cognitive executive functions as measured by the Stroop test. Fifty-one students (mean age, 20.43 ± 1.25 years) were given a complete eye examination. They then performed the reading test L'Alouette and the Stroop interference test at their usual reading distance. Criteria for selection were the absence of significant refractive uncorrected error, strabismus, amblyopia, color vision defects, and other neurologic findings. The results show a correlation between positive fusional vergences (PFVs) at near distance and the interference effect (IE) in the Stroop test: the higher the PFV value is, the less the IE. Furthermore, the subgroup of 11 students presenting convergence insufficiency, according to Scheiman and Wick criteria (2002), showed a significantly higher IE during the Stroop test than the other students (N = 18) who had normal binocular vision without symptoms at near. Importantly, there is no correlation between reading speed and PFV either for the entire sample or for the subgroups. These results suggest for the first time a link between convergence capacity and the interference score in the Stroop test. Such a link is attributable to the fact that vergence control and cognitive functions mobilize the same cortical areas, for example, parietofrontal areas. The results are in favor of our hypothesis that vergence is a vector of attentional and cognitive functions.

  20. Surrounding Moving Obstacle Detection for Autonomous Driving Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Hao Sun

    2013-06-01

    Full Text Available Detecting and tracking surrounding moving obstacles such as vehicles and pedestrians is crucial for the safety of mobile robots and autonomous vehicles, especially in urban driving scenarios. This paper presents a novel framework for surrounding moving obstacle detection using binocular stereo vision. The contributions of our work are threefold. Firstly, a multiview feature matching scheme is presented for simultaneous stereo correspondence and motion correspondence searching. Secondly, the multiview geometry constraint derived from the relative camera positions in pairs of consecutive stereo views is exploited for surrounding moving obstacle detection. Thirdly, an adaptive particle filter is proposed for tracking multiple moving obstacles in surrounding areas. Experimental results from real-world driving sequences demonstrate the effectiveness and robustness of the proposed framework.

  1. Interaction of algorithm and implementation for analog VLSI stereo vision

    Science.gov (United States)

    Hakkarainen, J. M.; Little, James J.; Lee, Hae-Seung; Wyatt, John L., Jr.

    1991-07-01

    Design of a high-speed stereo vision system in analog VLSI technology is reported. The goal is to determine how the advantages of analog VLSI--small area, high speed, and low power-- can be exploited, and how the effects of its principal disadvantages--limited accuracy, inflexibility, and lack of storage capacity--can be minimized. Three stereo algorithms are considered, and a simulation study is presented to examine details of the interaction between algorithm and analog VLSI implementation. The Marr-Poggio-Drumheller algorithm is shown to be best suited for analog VLSI implementation. A CCD/CMOS stereo system implementation is proposed, capable of operation at 6000 image frame pairs per second for 48 X 48 images, and faster than frame rate operation on 256 X 256 binocular image pairs.

  2. Symptomatology associated with accommodative and binocular vision anomalies

    Directory of Open Access Journals (Sweden)

    Ángel García-Muñoz

    2014-10-01

    Conclusions: There is a wide disparity of symptoms related to accommodative and binocular dysfunctions in the scientific literature, most of which are associated with near vision and binocular dysfunctions. The only psychometrically validated questionnaires that we found (n=3) were related to convergence insufficiency and to visual dysfunctions in general; there are no specific questionnaires for other anomalies.

  3. Distance Measurement Based on Stereo Vision (Pengukuran Jarak Berbasiskan Stereo Vision)

    Directory of Open Access Journals (Sweden)

    Iman Herwidiana Kartowisastro

    2010-12-01

    Full Text Available Measuring the distance to an object can be done in a variety of ways, including the use of distance sensors such as ultrasonic sensors, or a vision-based approach. The latter has advantages in terms of flexibility, since a monitored object has essentially no restrictions on its material characteristics, but at the same time it presents difficulties associated with object orientation and the state of the room where the object is located. To overcome this problem, this study examines the use of stereo vision to measure the distance to an object. The system was developed starting from image extraction and extraction of the characteristics of the objects contained in the image, through to the visual distance measurement process, with two separate cameras placed 70 cm apart. Objects can be measured in the range of 50 cm to 130 cm with a percentage error of 5.53%. Lighting conditions (homogeneity and intensity) have a great influence on the accuracy of the measurement results.

  4. Stereo vision with distance and gradient recognition

    Science.gov (United States)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition and movement to a target spot. With sensors that use infrared rays or ultrasound, a robot can cope with urgent or dangerous situations, but stereo vision of three-dimensional space gives a robot much more powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot is confronted with an inclined plane or steps, particular algorithms are needed to proceed without failure. This study developed an algorithm for recognizing the distance and gradient of the environment through a stereo matching process.

  5. GPU-based real-time trinocular stereo vision

    Science.gov (United States)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereo vision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereo vision with a three-camera array has been shown to provide higher accuracy in stereo matching, which benefits applications such as distance finding, object recognition, and detection. This paper presents a real-time stereo vision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereo camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image processing steps, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed on a GPGPU using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.

  6. Ptolemy's contributions to the geometry of binocular vision.

    Science.gov (United States)

    Howard, I P; Wade, N J

    1996-01-01

    Ptolemy's Optics which was written in about the year 150 AD contains an account of the geometry of binocular vision which has been almost totally neglected in the vision literature. An English translation of the relevant passages from the Latin text in Lejeune (1956) is presented together with commentaries and a brief introduction.

  7. Loewald's "binocular vision" and the art of analysis.

    Science.gov (United States)

    Miller, J David

    2008-12-01

    In his 1988 monograph on sublimation, Hans Loewald describes the process as a transformation of the drives aimed at re-creating what he presumes to be the subjective experience of infantile attachment. To describe this experience, he invokes a state of mind that he calls "binocular vision." He maintains that this mental state may arise not only in activities usually associated with sublimation, such as the creation and enjoyment of art, but in all forms of sublimation, including effective psychoanalysis. Because Loewald's discussion is largely theoretical, it does not convey how the concept of binocular vision may inform clinical technique and interdisciplinary study. A comparative application of his theory to psychoanalytic process and to the viewer's response to art enables one to grasp binocular vision as a common aspect of both. It emerges as a useful model with which to conceptualize and integrate the ambiguous reality of analytic experience.

  8. Refractive and binocular vision status of optometry students, Ghana ...

    African Journals Online (AJOL)

    To investigate the refractive and non-strabismic binocular vision status of Optometry students in University of Cape Coast, Ghana and to establish any associations between these conditions. A cross sectional study of 105 Optometry students were taken through a comprehensive optometric examination to investigate the ...

  9. [On a binocular vision testing in concomitant strabismus].

    Science.gov (United States)

    Seleznev, A V; Vakurin, E A; Kashchenko, T P

    2011-01-01

    The character of vision in 105 children with strabismus (with regular eye position) was tested using the four-dot test at different distances (5.0, 2.5, 1.0 m) and with the "Phorbis" device, comprising a phoropter and a set of light filters, which allows examination under conditions of colour, polaroid and bitmapped division of the visual fields. Stereoscopic vision was examined using an original method based on anaglyph haploscopy. Binocular vision in strabismus was found to become more frequent as the distance and the dissociating effect of the light filters decrease, and was maximal in near testing under conditions of bitmapped haploscopy. Stereoscopic visual acuity in children with concomitant strabismus, even after reaching regular eye position and binocular vision, is significantly lower than in healthy children of the same age.

  10. Application of binocular vision system in nuclear power plant

    International Nuclear Information System (INIS)

    Chen Yulong; He Xuhong; Zhao Bingquan

    2002-01-01

    Based on stereo disparity, a vision system for locating three-dimensional positions is described. The input device of the vision system is a digital camera, and special targets are used to improve the efficiency and accuracy of computer analysis. It provides a reliable and practical computer locating system for equipment maintenance in nuclear power plants.

  11. Symptomatology associated with accommodative and binocular vision anomalies.

    Science.gov (United States)

    García-Muñoz, Ángel; Carbonell-Bonete, Stela; Cacho-Martínez, Pilar

    2014-01-01

    To determine the symptoms associated with accommodative and non-strabismic binocular dysfunctions and to assess the methods used to obtain the subjects' symptoms. We conducted a scoping review of articles published between 1988 and 2012 that analysed any aspect of the symptomatology associated with accommodative and non-strabismic binocular dysfunctions. The literature search was performed in Medline (PubMed), CINAHL, PsycINFO and FRANCIS. A total of 657 articles were identified, and 56 met the inclusion criteria. We found 267 different ways of naming the symptoms related to these anomalies, which we grouped into 34 symptom categories. Of the 56 studies, 35 employed questionnaires and 21 obtained the symptoms from clinical histories. We found 11 questionnaires, of which only 3 had been validated: the convergence insufficiency symptom survey (CISS V-15) and CIRS parent version, both specific for convergence insufficiency, and the Conlon survey, developed for visual anomalies in general. The most widely used questionnaire (21 studies) was the CISS V-15. Of the 34 categories of symptoms, the most frequently mentioned were: headache, blurred vision, diplopia, visual fatigue, and movement or flicker of words at near vision, which were fundamentally related to near vision and binocular anomalies. There is a wide disparity of symptoms related to accommodative and binocular dysfunctions in the scientific literature, most of which are associated with near vision and binocular dysfunctions. The only psychometrically validated questionnaires that we found (n=3) were related to convergence insufficiency and to visual dysfunctions in general, and there are no specific questionnaires for other anomalies. Copyright © 2014. Published by Elsevier Espana.

  12. Restoration of degraded images using stereo vision

    Science.gov (United States)

    Hernández-Beltrán, José Enrique; Díaz-Ramírez, Victor H.; Juarez-Salazar, Rigoberto

    2017-08-01

    Image restoration consists in retrieving an original image by processing captured images of a scene that are degraded by noise, blurring or optical scattering. Commonly, restoration algorithms utilize a single monocular image of the observed scene and assume a known degradation model; in this approach, valuable information about the three-dimensional scene is discarded. This work presents a locally adaptive algorithm for image restoration employing stereo vision. The proposed algorithm utilizes information about the three-dimensional scene as well as local image statistics to improve the quality of a single restored image by processing pairs of stereo images. Computer simulation results obtained with the proposed algorithm are analyzed and discussed in terms of objective metrics by processing stereo images degraded by optical scattering.

  13. A Novel Binocular Vision System for Wearable Devices.

    Science.gov (United States)

    Zhai, Haitian; Li, Hui; Bai, Yicheng; Jia, Wenyan; Sun, Mingui

    2014-04-25

    We present a novel binocular imaging system for wearable devices that incorporates biological knowledge of the human eyes. Unlike the camera systems in smartphones, two fish-eye lenses with a larger angle of view are used; the visual field of the new system is larger, and the central resolution of the output images is higher. This design leads to more effective image acquisition, facilitating computer vision tasks such as target recognition, navigation and object tracking.

  14. The contribution of stereo vision to the control of braking.

    Science.gov (United States)

    Tijtgat, Pieter; Mazyn, Liesbeth; De Laey, Christophe; Lenoir, Matthieu

    2008-03-01

    In this study the contribution of stereo vision to the control of braking in front of a stationary target vehicle was investigated. Participants with normal (StereoN) and weak (StereoW) stereo vision drove a go-cart along a linear track towards a stationary vehicle. They could start braking from a distance of 4, 7, or 10m from the vehicle. Deceleration patterns were measured by means of a laser. A lack of stereo vision was associated with an earlier onset of braking, but the duration of the braking manoeuvre was similar. During the deceleration, the time of peak deceleration occurred earlier in drivers with weak stereo vision. Stopping distance was greater in those lacking in stereo vision. A lack of stereo vision was associated with a more prudent brake behaviour, in which the driver took into account a larger safety margin. This compensation might be caused either by an unconscious adaptation of the human perceptuo-motor system, or by a systematic underestimation of distance remaining due to the lack of stereo vision. In general, a lack of stereo vision did not seem to increase the risk of rear-end collisions.

  15. Efficacy of vision therapy in children with learning disability and associated binocular vision anomalies

    Directory of Open Access Journals (Sweden)

    Jameel Rizwana Hussaindeen

    2018-01-01

    Conclusion: Children with specific learning disorders have a high frequency of binocular vision disorders and vision therapy plays a significant role in improving the BV parameters. Children with SLD should be screened for BV anomalies as it could potentially be an added hindrance to the reading difficulty in this special population.

  16. Computer-enhanced stereoscopic vision in a head-mounted operating binocular

    International Nuclear Information System (INIS)

    Birkfellner, Wolfgang; Figl, Michael; Matula, Christian; Hummel, Johann; Hanel, Rudolf; Imhof, Herwig; Wanschitz, Felix; Wagner, Arne; Watzinger, Franz; Bergmann, Helmar

    2003-01-01

    Based on the Varioscope, a commercially available head-mounted operating binocular, we have developed the Varioscope AR, a see through head-mounted display (HMD) for augmented reality visualization that seamlessly fits into the infrastructure of a surgical navigation system. We have assessed the extent to which stereoscopic visualization improves target localization in computer-aided surgery in a phantom study. In order to quantify the depth perception of a user aiming at a given target, we have designed a phantom simulating typical clinical situations in skull base surgery. Sixteen steel spheres were fixed at the base of a bony skull, and several typical craniotomies were applied. After having taken CT scans, the skull was filled with opaque jelly in order to simulate brain tissue. The positions of the spheres were registered using VISIT, a system for computer-aided surgical navigation. Then attempts were made to locate the steel spheres with a bayonet probe through the craniotomies using VISIT and the Varioscope AR as a stereoscopic display device. Localization of targets 4 mm in diameter using stereoscopic vision and additional visual cues indicating target proximity had a success rate (defined as a first-trial hit rate) of 87.5%. Using monoscopic vision and target proximity indication, the success rate was found to be 66.6%. Omission of visual hints on reaching a target yielded a success rate of 79.2% in the stereo case and 56.25% with monoscopic vision. Time requirements for localizing all 16 targets ranged from 7.5 min (stereo, with proximity cues) to 10 min (mono, without proximity cues). Navigation error is primarily governed by the accuracy of registration in the navigation system, whereas the HMD does not appear to influence localization significantly. We conclude that stereo vision is a valuable tool in augmented reality guided interventions. (note)

  17. Efficacy of vision therapy in children with learning disability and associated binocular vision anomalies.

    Science.gov (United States)

    Hussaindeen, Jameel Rizwana; Shah, Prerana; Ramani, Krishna Kumar; Ramanujan, Lalitha

    To report the frequency of binocular vision (BV) anomalies in children with specific learning disorders (SLD) and to assess the efficacy of vision therapy (VT) in children with a non-strabismic binocular vision anomaly (NSBVA). The study was carried out at a centre for learning disability (LD). Comprehensive eye examination and binocular vision assessment was carried out for 94 children (mean (SD) age: 15 (2.2) years) diagnosed with specific learning disorder. BV assessment was done for children with best corrected visual acuity of ≥6/9 - N6, cooperative for examination and free from any ocular pathology. For children with a diagnosis of NSBVA (n=46), 24 children were randomized to VT and no intervention was provided to the other 22 children who served as experimental controls. At the end of 10 sessions of vision therapy, BV assessment was performed for both the intervention and non-intervention groups. Binocular vision anomalies were found in 59 children (62.8%), among which 22% (n=13) had strabismic binocular vision anomalies (SBVA) and 78% (n=46) had a NSBVA. Accommodative infacility (AIF) was the commonest of the NSBVA and found in 67%, followed by convergence insufficiency (CI) in 25%. Post-vision therapy, the intervention group showed significant improvement in all the BV parameters (Wilcoxon signed rank test). Children with specific learning disorders have a high frequency of binocular vision disorders and vision therapy plays a significant role in improving the BV parameters. Children with SLD should be screened for BV anomalies as it could potentially be an added hindrance to the reading difficulty in this special population. Copyright © 2017 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  18. Stereo vision based automated grasp planning

    International Nuclear Information System (INIS)

    Wilhelmsen, K.; Huber, L.; Silva, D.; Grasz, E.; Cadapan, L.

    1995-02-01

    The Department of Energy has a need to treat existing nuclear waste. Hazardous waste stored in old warehouses needs to be sorted and treated to meet environmental regulations. Lawrence Livermore National Laboratory is currently experimenting with automated manipulation of unknown objects for sorting, treating, and detailed inspection. To accomplish these tasks, three existing technologies were extended to meet the increasing requirements. First, a binocular vision range sensor was combined with a surface modeling system to make virtual images of unknown objects. Then, using the surface model information, stable grasps of the unknown-shaped objects were planned algorithmically using a limited set of robotic grippers. This paper is an expansion of previous work and discusses the grasp planning algorithm.

  19. Clustered features for use in stereo vision SLAM

    CSIR Research Space (South Africa)

    Joubert, D

    2010-07-01

    Full Text Available SLAM, or simultaneous localization and mapping, is a key component in the development of truly independent robots. Vision-based SLAM utilising stereo vision is a promising approach to SLAM but it is computationally expensive and difficult...

  20. Precision calibration method for binocular vision measurement systems based on arbitrary translations and 3D-connection information

    International Nuclear Information System (INIS)

    Yang, Jinghao; Jia, Zhenyuan; Liu, Wei; Fan, Chaonan; Xu, Pengtao; Wang, Fuji; Liu, Yang

    2016-01-01

    Binocular vision systems play an important role in computer vision, and high-precision system calibration is a necessary and indispensable process. In this paper, an improved calibration method for binocular stereo vision measurement systems based on arbitrary translations and 3D-connection information is proposed. First, a new method for calibrating the intrinsic parameters of binocular vision system based on two translations with an arbitrary angle difference is presented, which reduces the effect of the deviation of the motion actuator on calibration accuracy. This method is simpler and more accurate than existing active-vision calibration methods and can provide a better initial value for the determination of extrinsic parameters. Second, a 3D-connection calibration and optimization method is developed that links the information of the calibration target in different positions, further improving the accuracy of the system calibration. Calibration experiments show that the calibration error can be reduced to 0.09%, outperforming traditional methods for the experiments of this study. (paper)

  1. Object recognition with stereo vision and geometric hashing

    NARCIS (Netherlands)

    van Dijck, H.A.L.; van der Heijden, Ferdinand

    In this paper we demonstrate a method to recognize 3D objects and to estimate their pose. For that purpose we use a combination of stereo vision and geometric hashing. Stereo vision is used to generate a large number of 3D low level features, of which many are spurious because at that stage of the

  2. Cooperative and asynchronous stereo vision for dynamic vision sensors

    Science.gov (United States)

    Piatkowska, E.; Belbachir, A. N.; Gelautz, M.

    2014-05-01

    Dynamic vision sensors (DVSs) encode visual input as a stream of events generated upon relative light intensity changes in the scene. These sensors have the advantage of allowing simultaneously high temporal resolution (better than 10 µs) and wide dynamic range (>120 dB) at sparse data representation, which is not possible with clocked vision sensors. In this paper, we focus on the task of stereo reconstruction. The spatiotemporal and asynchronous aspects of data provided by the sensor impose a different stereo reconstruction approach from the one applied for synchronous frame-based cameras. We propose to model the event-driven stereo matching by a cooperative network (Marr and Poggio 1976 Science 194 283-7). The history of the recent activity in the scene is stored in the network, which serves as spatiotemporal context used in disparity calculation for each incoming event. The network constantly evolves in time, as events are generated. In our work, not only the spatiotemporal aspect of the data is preserved but also the matching is performed asynchronously. The results of the experiments prove that the proposed approach is well adapted for DVS data and can be successfully used for disparity calculation.

  3. The Effects of Sports Vision Training on Binocular Vision Function in Female University Athletes.

    Science.gov (United States)

    Zwierko, Teresa; Puchalska-Niedbał, Lidia; Krzepota, Justyna; Markiewicz, Mikołaj; Woźniak, Jarosław; Lubiński, Wojciech

    2015-12-22

    Binocular vision is the most important visual cue for spatial orientation in many sports. In this study, we investigated how binocular vision was influenced by an eye training program that may be used to improve an individual's oculomotor function. The experiment involved twenty-four female student athletes from team ball sports (soccer, basketball, handball). After an initial testing session, 12 participants were randomly allocated to the experimental group. Optometric investigation, which included synoptophore testing and a test of dissociated horizontal phoria based on the Maddox method, was performed three times: before the experiment, after eight weeks of eye training (3 times a week for 20 minutes), and four weeks after the experiment was terminated. Eye exercise methodology was based on orthoptic, sport and psychological aspects of performance. The phoria screening examination showed that exophoria was the most frequent disorder of binocular vision. A low fusional vergence range was also observed. Following the training period, 3 of the 6 oculomotor variables improved. The greatest effect was observed in near dissociated phoria (χ² = 14.56, p = 0.001 for the right eye; χ² = 14.757, p = 0.001 for the left eye) and fusional convergence (χ² = 8.522, p = 0.014). The results of the retention test conducted four weeks after the experiment confirmed the effectiveness of the vision training program. The results of the study suggest that binocular functions are trainable and can be improved by means of appropriate visual training.

  4. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    OpenAIRE

    Flavio Roberti; Juan Marcos Toibero; Carlos Soria; Raquel Frizera Vassallo; Ricardo Carelli

    2009-01-01

    This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras) for the autonomous navigation of a wheeled robot team. A triangulation-based method is proposed for computing the 3D posture of an unknown object using the collaborative hybrid stereo vision system, and in this way steering the robot team to a desired position relative to the object while maintaining a desired robot formation. Experimental results with real mobile robots are included to validate the proposed vision system.

  5. Maximum reading speed and binocular summation in patients with central vision loss.

    Science.gov (United States)

    Tarita-Nistor, Luminita; Brent, Michael H; Markowitz, Samuel N; Steinbach, Martin J; González, Esther G

    2013-10-01

    Visual acuity is a poor predictor of the maximum reading speed of patients with central vision loss. This study examines the effects of binocular summation of acuity on the maximum reading speed of these patients. Prospective, observational case series. Twenty patients with central vision loss participated. Maximum reading speed was measured binocularly using the MNREAD acuity charts. Monocular and binocular acuities were measured with the Early Treatment Diabetic Retinopathy Study (ETDRS) chart. Binocular summation was evaluated with a binocular ratio (BR) calculated as the ratio of the acuity of the better eye to binocular acuity. Fixation stability and preferred retinal locus (PRL) distance from the former fovea were evaluated with the MP-1 microperimetre. Six patients experienced acuity summation (BR > 1.05), 5 experienced acuity inhibition, and the remainder showed binocular equality. Maximum reading speed was significantly slower in the inhibition group, and BR correlated with maximum reading speed for the overall sample (r[18] = 0.49, p = 0.03). BR together with PRL distance from the former fovea in the better eye explained 45% of the variance in maximum reading speed. Binocular summation of acuity rather than visual acuity alone affects maximum reading speed of patients with central vision loss. Patients with binocular inhibition read significantly slower than those with binocular summation or equality. Assessment of binocular summation is important when devising reading rehabilitation techniques. © 2013 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.

  6. Origins of strabismus and loss of binocular vision

    Science.gov (United States)

    Bui Quoc, Emmanuel; Milleret, Chantal

    2014-01-01

    Strabismus is a frequent ocular disorder that develops early in life in humans. As a general rule, it is characterized by a misalignment of the visual axes which most often appears during the critical period of visual development. However other characteristics of strabismus may vary greatly among subjects, for example, being convergent or divergent, horizontal or vertical, with variable angles of deviation. Binocular vision may also vary greatly. Our main goal here is to develop the idea that such “polymorphy” reflects a wide variety in the possible origins of strabismus. We propose that strabismus must be considered as possibly resulting from abnormal genetic and/or acquired factors, anatomical and/or functional abnormalities, in the sensory and/or the motor systems, both peripherally and/or in the brain itself. We shall particularly develop the possible “central” origins of strabismus. Indeed, we are convinced that it is time now to open this “black box” in order to move forward. All of this will be developed on the basis of both presently available data in literature (including most recent data) and our own experience. Both data in biology and medicine will be referred to. Our conclusions will hopefully help ophthalmologists to better understand strabismus and to develop new therapeutic strategies in the future. Presently, physicians eliminate or limit the negative effects of such pathology both on the development of the visual system and visual perception through the use of optical correction and, in some cases, extraocular muscle surgery. To better circumscribe the problem of the origins of strabismus, including at a cerebral level, may improve its management, in particular with respect to binocular vision, through innovating tools by treating the pathology at the source. PMID:25309358

  7. Origins of strabismus and loss of binocular vision

    Directory of Open Access Journals (Sweden)

    Emmanuel eBui Quoc

    2014-09-01

    Full Text Available Strabismus is a frequent ocular disorder that develops early in life in humans. As a general rule, it is characterized by a misalignment of the visual axes which most often appears during the critical period of visual development. However other characteristics of strabismus may vary greatly among subjects, for example, being convergent or divergent, horizontal or vertical, with variable angles of deviation. Binocular vision may also vary greatly. Our main goal here is to develop the idea that such polymorphy reflects a wide variety in the possible origins of strabismus. We propose that strabismus must be considered as possibly resulting from abnormal genetic and/or acquired factors, anatomical and/or functional abnormalities, in the sensory and/or the motor systems, both peripherally and/or in the brain itself. We shall particularly develop the possible central origins of strabismus. Indeed, we are convinced that it is time now to open this black box in order to move forward. All of this will be developed on the basis of both presently available data in literature (including most recent data and our own experience. Both data in medicine and biology will be referred to. Our conclusions will hopefully help ophthalmologists to better understand strabismus and to develop new therapeutic strategies in the future. Presently, physicians eliminate or limit the negative effects of such pathology both on the development of the visual system and visual perception through the use of optical correction and, in some cases, extraocular muscle surgery. To better circumscribe the problem of the origins of strabismus, including at a cerebral level, may improve its management, in particular with respect to binocular vision, through innovating tools by treating the pathology at the source.

  8. An obstacle detection system using binocular stereo fisheye lenses for planetary rover navigation

    Science.gov (United States)

    Liu, L.; Jia, J.; Li, L.

    In this paper we present an implementation of an obstacle detection system using binocular stereo fisheye lenses for planetary rover navigation. The fisheye lenses can improve image acquisition efficiency and handle the minimal-clearance recovery problem because they provide a large field of view. However, the fisheye lens introduces significant distortion in the image, and this makes it much more difficult to find a one-to-one correspondence. In addition, we have to improve the system accuracy and efficiency for robot navigation. To compute dense depth maps accurately in real time, the following five key issues are considered: (1) using lookup tables for a tradeoff between time and space in fisheye distortion correction and correspondence matching; (2) using an improved incremental calculation scheme for algorithmic optimization; (3) multimedia instruction set (MMX) implementation; (4) a consistency check to remove wrong stereo matches suffering from occlusions or mismatches; (5) constraints on the recovery space. To realize obstacle detection robustly, we use the following three steps: (1) extracting the ground plane parameters using the Randomized Hough Transform; (2) filtering the ground and background; (3) locating the obstacles by using connected region detection. Experimental results show the system can run at 3.2 fps on a 2.0 GHz PC with 640 × 480 pixels.
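
    The first of the five issues above, trading memory for speed with lookup tables in distortion correction, is illustrated by the sketch below. It is not the paper's code: the intrinsics K, the fisheye distortion coefficients D and the image size are made-up placeholder values, and OpenCV's fisheye model stands in for whatever camera model the authors used.

```python
# Minimal sketch of lookup-table-based fisheye undistortion (not the paper's code).
# K, D and the image size are placeholder values.
import cv2
import numpy as np

K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])                      # assumed intrinsics
D = np.array([[0.05], [-0.01], [0.0], [0.0]])        # assumed fisheye distortion
size = (640, 480)

# Precompute the remap tables once: the time/space trade-off mentioned above.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, size, cv2.CV_16SC2)

def undistort(frame):
    # Per-frame correction then reduces to a cheap table lookup.
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```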

  9. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    Directory of Open Access Journals (Sweden)

    Flavio Roberti

    2010-02-01

    Full Text Available This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras) for the autonomous navigation of a wheeled robot team. A triangulation-based method is proposed for computing the 3D posture of an unknown object with the collaborative hybrid stereo vision system, and in this way the robot team is steered to a desired position relative to that object while maintaining a desired robot formation. Experimental results with real mobile robots are included to validate the proposed vision system.

  10. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    Directory of Open Access Journals (Sweden)

    Flavio Roberti

    2009-12-01

    Full Text Available This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras) for the autonomous navigation of a wheeled robot team. A triangulation-based method is proposed for computing the 3D posture of an unknown object with the collaborative hybrid stereo vision system, and in this way the robot team is steered to a desired position relative to that object while maintaining a desired robot formation. Experimental results with real mobile robots are included to validate the proposed vision system.

  11. Viewing geometry determines the contribution of binocular vision to the online control of grasping.

    Science.gov (United States)

    Keefe, Bruce D; Watt, Simon J

    2017-12-01

    Binocular vision is often assumed to make a specific, critical contribution to online visual control of grasping by providing precise information about the separation between digits and object. This account overlooks the 'viewing geometry' typically encountered in grasping, however. Separation of hand and object is rarely aligned precisely with the line of sight (the visual depth dimension), and analysis of the raw signals suggests that, for most other viewing angles, binocular feedback is less precise than monocular feedback. Thus, online grasp control relying selectively on binocular feedback would not be robust to natural changes in viewing geometry. Alternatively, sensory integration theory suggests that different signals contribute according to their relative precision, in which case the role of binocular feedback should depend on viewing geometry, rather than being 'hard-wired'. We manipulated viewing geometry, and assessed the role of binocular feedback by measuring the effects on grasping of occluding one eye at movement onset. Loss of binocular feedback resulted in a significantly less extended final slow-movement phase when hand and object were separated primarily in the frontoparallel plane (where binocular information is relatively imprecise), compared to when they were separated primarily along the line of sight (where binocular information is relatively precise). Consistent with sensory integration theory, this suggests the role of binocular (and monocular) vision in online grasp control is not a fixed, 'architectural' property of the visuo-motor system, but arises instead from the interaction of viewer and situation, allowing robust online control across natural variations in viewing geometry.

  12. Development of 3D online contact measurement system for intelligent manufacturing based on stereo vision

    Science.gov (United States)

    Li, Peng; Chong, Wenyan; Ma, Yongjun

    2017-10-01

    In order to avoid the shortcomings of low efficiency and restricted measuring range existing in traditional 3D on-line contact measurement methods for workpiece size, the development of a novel 3D contact measurement system is introduced, which is designed for intelligent manufacturing based on stereo vision. The developed contact measurement system is characterized by the integrated use of a handy probe, a binocular stereo vision system, and advanced measurement software. The handy probe consists of six track markers, a touch probe and the associated electronics. In the process of contact measurement, the handy probe can be located by the use of the stereo vision system and track markers, and the 3D coordinates of a space point on the workpiece can be measured by calculating the tip position of the touch probe. With the flexibility of the handy probe, the orientation, range and density of the 3D contact measurement can be adapted to different needs. Applications of the developed contact measurement system to high-precision measurement and rapid surface digitization are experimentally demonstrated.
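
    The core measurement step described above, recovering the 3D coordinates of a tracked marker from a calibrated stereo pair, can be sketched with a simple two-view triangulation. This is only an illustration of the principle, not the developed system: the projection matrices, baseline and pixel coordinates below are invented values chosen to be mutually consistent.

```python
# Illustrative two-view triangulation of one probe marker (values are invented but
# consistent with a point at (0, 0, 1) m in front of the left camera).
import cv2
import numpy as np

K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])                                   # assumed intrinsics
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.12], [0.0], [0.0]])])  # 12 cm baseline

x_left = np.array([[320.0], [240.0]])     # marker pixel position in the left image
x_right = np.array([[284.0], [240.0]])    # and in the right image

X_h = cv2.triangulatePoints(P_left, P_right, x_left, x_right)    # 4x1 homogeneous
X = (X_h[:3] / X_h[3]).ravel()
print("triangulated marker position (m):", X)                    # approx. [0, 0, 1]
```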

  13. Pediatric vision screening using binocular retinal birefringence scanning

    Science.gov (United States)

    Nassif, Deborah S.; Gramatikov, Boris; Guyton, David L.; Hunter, David G.

    2003-07-01

    Amblyopia, a leading cause of vision loss in childhood, is responsive to treatment if detected early in life. Risk factors for amblyopia, such as refractive error and strabismus, may be difficult to identify clinically in young children. Our laboratory has developed retinal birefringence scanning (RBS), in which a small spot of polarized light is scanned in a circle on the retina, and the returning light is measured for changes in polarization caused by the pattern of birefringent fibers that comprise the fovea. Binocular RBS (BRBS) detects the fixation of both eyes simultaneously and thus screens for strabismus, one of the risk factors of amblyopia. We have also developed a technique to automatically detect when the eye is in focus without measuring refractive error. This focus detection system utilizes a bull's eye photodetector optically conjugate to a point fixation source. Reflected light is focused back to the point source by the optical system of the eye, and if the subject focuses on the fixation source, the returning light will be focused on the detector. We have constructed a hand-held prototype combining BRBS and focus detection measurements in one quick (< 0.5 second) and accurate (theoretically detecting +/-1 of misalignment) measurement. This approach has the potential to reliably identify children at risk for amblyopia.

  14. Monocular and binocular development in children with albinism, infantile nystagmus syndrome, and normal vision.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke

    2013-12-01

    To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Interocular acuity differences and binocular summation ratios were compared between groups. Crowding ratios were calculated by dividing the single Landolt C decimal acuity by the crowded Landolt C decimal acuity, mono- and binocularly. A linear regression analysis was conducted to investigate the contribution of 5 predictors to the monocular and binocular crowding ratio: nystagmus amplitude, nystagmus frequency, strabismus, astigmatism, and anisometropia. Crowding ratios were higher under mono- and binocular viewing conditions for children with infantile nystagmus syndrome than for children with normal vision. Children with albinism showed higher crowding ratios in their poorer eye and under binocular viewing conditions than children with normal vision. Children with albinism and children with infantile nystagmus syndrome showed larger interocular acuity differences than children with normal vision (0.1 logMAR in our clinical groups and 0.0 logMAR in children with normal vision). Binocular summation ratios did not differ between groups. Strabismus and nystagmus amplitude predicted the crowding ratio in the poorer eye (p = 0.015 and p = 0.005, respectively). The crowding ratio in the better eye showed a marginally significant relation with nystagmus frequency and depth of anisometropia (p = 0.082 and p = 0.070, respectively). The binocular crowding ratio was not predicted by any of the variables. Children with albinism and children with infantile nystagmus syndrome show larger interocular acuity differences than children with normal vision. Strabismus and nystagmus amplitude are significant predictors of the crowding ratio in the poorer eye.
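
    The crowding ratio defined above is a simple quotient; the minimal sketch below, with made-up acuity values, is included purely to make the formula concrete.

```python
# The crowding ratio as defined above: single Landolt C decimal acuity divided by
# crowded Landolt C decimal acuity. The acuity values are made up.
def crowding_ratio(single_acuity: float, crowded_acuity: float) -> float:
    return single_acuity / crowded_acuity

print(crowding_ratio(0.8, 0.5))   # 1.6: performance is clearly worse under crowding
```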

  15. Calibration of Binocular Vision Sensors Based on Unknown-Sized Elliptical Stripe Images

    Directory of Open Access Journals (Sweden)

    Zhen Liu

    2017-12-01

    Full Text Available Most of the existing calibration methods for a binocular stereo vision sensor (BSVS) depend on a high-accuracy target with feature points that are difficult and costly to manufacture. In complex light conditions, optical filters are used for BSVS, but they affect imaging quality. Hence, the use of a high-accuracy target with certain-sized feature points for calibration is not feasible under such complex conditions. To solve these problems, a calibration method based on unknown-sized elliptical stripe images is proposed. With known intrinsic parameters, the proposed method adopts the elliptical stripes located on the parallel planes as a medium to calibrate BSVS online. In comparison with the common calibration methods, the proposed method avoids utilizing a high-accuracy target with certain-sized feature points. Therefore, the proposed method is not only easy to implement but is a realistic method for the calibration of BSVS with an optical filter. Changing the size of elliptical curves projected on the target solves the difficulty of applying the proposed method in different fields of view and distances. Simulative and physical experiments are conducted to validate the efficiency of the proposed method. When the field of view is approximately 400 mm × 300 mm, the proposed method can reach a calibration accuracy of 0.03 mm, which is comparable with that of Zhang’s method.

  16. Passive Night Vision Sensor Comparison for Unmanned Ground Vehicle Stereo Vision Navigation

    Science.gov (United States)

    Owens, Ken; Matthies, Larry

    2000-01-01

    One goal of the "Demo III" unmanned ground vehicle program is to enable autonomous nighttime navigation at speeds of up to 10 m.p.h. To perform obstacle detection at night with stereo vision will require night vision cameras that produce adequate image quality for the driving speeds, vehicle dynamics, obstacle sizes, and scene conditions that will be encountered. This paper analyzes the suitability of four classes of night vision cameras (3-5 micrometer cooled FLIR, 8-12 micrometer cooled FLIR, 8-12 micrometer uncooled FLIR, and image intensifiers) for night stereo vision, using criteria based on stereo matching quality, image signal to noise ratio, motion blur and synchronization capability. We find that only cooled FLIRs will enable stereo vision performance that meets the goals of the Demo III program for nighttime autonomous mobility.

  17. Adaptive control of camera position for stereo vision

    Science.gov (United States)

    Crisman, Jill D.; Cleary, Michael E.

    1994-03-01

    A major problem in using two-camera stereo machine vision to perform real-world tasks, such as visual object tracking, is deciding where to position the cameras. Humans accomplish the analogous task by positioning their heads and eyes for optimal stereo effects. This paper describes recent work toward developing automated control strategies for camera motion in stereo machine vision systems for mobile robot navigation. Our goal is to achieve fast, reliable pursuit of a target while avoiding obstacles. Our strategy results in smooth, stable camera motion despite robot and target motion. Our algorithm has been shown to be successful at navigating a mobile robot, mediating visual target tracking and ultrasonic obstacle detection. The architecture, hardware, and simulation results are discussed.

  18. A comparative study of fast dense stereo vision algorithms

    NARCIS (Netherlands)

    Sunyoto, H.; Mark, W. van der; Gavrila, D.M.

    2004-01-01

    With recent hardware advances, real-time dense stereo vision becomes increasingly feasible for general-purpose processors. This has important benefits for the intelligent vehicles domain, alleviating object segmentation problems when sensing complex, cluttered traffic scenes. In this paper, we

  19. What is Binocular Disparity?

    Directory of Open Access Journals (Sweden)

    Joseph S Lappin

    2014-08-01

    Full Text Available What are the geometric primitives of binocular disparity? The Venetian blind effect and other converging lines of evidence indicate that stereoscopic depth perception derives from disparities of higher-order structure in images of surfaces. Image structure entails spatial variations of intensity, texture, and motion, jointly structured by observed surfaces. The spatial structure of binocular disparity corresponds to the spatial structure of surfaces. Independent spatial coordinates are not necessary for stereoscopic vision. Stereopsis is highly sensitive to structural disparities associated with local surface shape. Disparate positions on retinal anatomy are neither necessary nor sufficient for stereopsis.

  20. Research on 3D reconstruction measurement and parameter of cavitation bubble based on stereo vision

    Science.gov (United States)

    Li, Shengyong; Ai, Xiaochuan; Wu, Ronghua; Cao, Jing

    2017-02-01

    Cavitation bubbles cause many adverse effects on ship propellers and on hydraulic machinery and equipment. To study the production mechanism of cavitation bubbles under different conditions, fine measurement and analysis of the parameters of the cavitation bubble zone is indispensable. This paper adopts a non-contact optical measurement method and constructs a binocular stereo vision measurement system adapted to the characteristics of cavitation bubbles, whose texture features are unclear, transparent and difficult to obtain. 3D imaging measurement of the cavitation bubbles uses composite dynamic lighting, the cavitation bubble region is reconstructed in 3D, and more accurate characteristic parameters are obtained. Test results show that this fine-measurement technique can obtain and analyze the cavitation bubble region and its instability.

  1. A stereo vision method based on region segmentation

    International Nuclear Information System (INIS)

    Homma, K.; Fu, K.S.

    1984-01-01

    A stereo vision method based on segmented region information is presented in this paper. Regions that have uniform image properties are segmented on stereo images. The shapes of the regions are represented by chain codes. The weighted metrics between the region chain codes are calculated to explore the shape dissimilarities. From the minimum weight transformation of codes, partial shape matching can be found by adjusting the weights for code deletion, insertion and substitution. The partial shape matching gives stereo correspondences on the region contours even though the images have occlusion, segmentation noise and distortion. The depth interpolation is executed region by region by considering the occlusion. A depth image of a real indoor scene is extracted as an application example of this method
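
    The minimum-weight transformation between chain codes mentioned above is essentially a weighted edit distance. The sketch below shows a generic dynamic-programming version with placeholder unit weights; the paper's actual weighting scheme is not reproduced.

```python
# Generic weighted edit distance between two chain-code strings, in the spirit of the
# minimum-weight code transformation described above. The unit weights are assumptions.
def weighted_edit_distance(a, b, w_del=1.0, w_ins=1.0, w_sub=1.0):
    n, m = len(a), len(b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * w_del
    for j in range(1, m + 1):
        d[0][j] = j * w_ins
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else w_sub
            d[i][j] = min(d[i - 1][j] + w_del,       # delete a[i-1]
                          d[i][j - 1] + w_ins,       # insert b[j-1]
                          d[i - 1][j - 1] + sub)     # substitute or match
    return d[n][m]

# Toy 8-connectivity chain codes of two region contours.
print(weighted_edit_distance("00112233", "0011223"))   # 1.0: one deletion
```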

  2. ROS-based ground stereo vision detection: implementation and experiments.

    Science.gov (United States)

    Hu, Tianjiang; Zhao, Boxin; Tang, Dengqing; Zhang, Daibing; Kong, Weiwei; Shen, Lincheng

    This article concentrates on open-source implementation on flying object detection in cluttered scenes. It is of significance for ground stereo-aided autonomous landing of unmanned aerial vehicles. The ground stereo vision guidance system is presented with details on system architecture and workflow. The Chan-Vese detection algorithm is further considered and implemented in the robot operating systems (ROS) environment. A data-driven interactive scheme is developed to collect datasets for parameter tuning and performance evaluating. The flying vehicle outdoor experiments capture the stereo sequential images dataset and record the simultaneous data from pan-and-tilt unit, onboard sensors and differential GPS. Experimental results by using the collected dataset validate the effectiveness of the published ROS-based detection algorithm.
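
    As a rough, standalone illustration of the Chan-Vese segmentation step only (the ROS nodes, pan-and-tilt data and GPS logging are omitted), scikit-image's implementation can be applied to a single frame. The synthetic frame and default parameters below are assumptions, not the published configuration.

```python
# Standalone Chan-Vese segmentation of a synthetic grey-level frame containing a
# bright "flying object"; parameters are library defaults, not the paper's settings.
import numpy as np
from skimage.segmentation import chan_vese

frame = np.zeros((120, 160), dtype=float)
frame[50:70, 80:100] = 1.0                                   # the object
frame += 0.1 * np.random.default_rng(0).standard_normal(frame.shape)

mask = chan_vese(frame)                  # boolean segmentation mask
print("segmented pixels:", int(mask.sum()))
```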

  3. Monocular and binocular development in children with albinism, infantile nystagmus syndrome, and normal vision

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.

    2013-01-01

    Abstract Background/aims: To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Methods: Interocular acuity

  4. Monocular and binocular development in children with albinism, infantile nystagmus syndrome and normal vision

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.

    2013-01-01

    Background/aims: To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Methods: Interocular acuity differences and

  5. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area vision can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left eye images and the other for right eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. By using this system in the pilots' preflight preparation procedure, the aircrew can get more vivid information about the flight destination approach area. This system can improve the aviator's self-confidence before carrying out the flight mission and, accordingly, improve flight safety. This system is also useful for validating the visual flight procedure design, and it helps with flight procedure design.

  6. Modeling Visual Symptoms and Visual Skills to Measure Functional Binocular Vision

    Science.gov (United States)

    Powers, M. K.; Fisher, W. P., Jr.; Massof, R. W.

    2016-11-01

    Obtaining a clear image of the world depends on good eye coordination (“binocular vision”). Yet no standard exists by which to determine a threshold for good vs poor binocular vision, as exists for the eye chart and visual acuity. We asked whether data on the signs and symptoms related to binocular vision are sufficiently consistent with children's self-reported visual symptoms to substantiate a construct model of Functional Binocular Vision (FBV), and then whether that model can be used to aggregate clinical and survey observations into a meaningful diagnostic measure. Data on visual symptoms from 1,100 children attending school in Los Angeles were obtained using the Convergence Insufficiency Symptom Survey (CISS); and for more than 300 students in that sample, 35 additional measures were taken, including acuity, cover test near and far, near point of convergence, near point of accommodation, accommodative facility, vergence ranges, tracking ability, and oral reading fluency. A preliminary analysis of data from the 15-item, 5-category CISS and 15 clinical variables from 103 grade school students who reported convergence problems (CISS scores of 16 or higher) suggests that the clinical and survey observations will be optimally combined in a multidimensional model.

  7. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

    A novel visual navigation method based on grating projection stereo vision for mobile robot in dark environment is proposed. This method is combining with grating projection profilometry of plane structured light and stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping) and vision odometry for mobile robot navigation in dark environment without the image match in stereo vision technology and without phase unwrapping in the grating projection profilometry. First, we research the new vision sensor theoretical, and build geometric and mathematical model of the grating projection stereo vision system. Second, the computational method of 3D coordinates of space obstacle in the robot's visual field is studied, and then the obstacles in the field is located accurately. The result of simulation experiment and analysis shows that this research is useful to break the current autonomous navigation problem of mobile robot in dark environment, and to provide the theoretical basis and exploration direction for further study on navigation of space exploring robot in the dark and without GPS environment.

  8. Multiple Moving Obstacles Avoidance of Service Robot using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Achmad Jazidie

    2011-12-01

    Full Text Available In this paper, we propose a multiple moving obstacle avoidance method using stereo vision for service robots in indoor environments. We assume that this model of service robot is used to deliver a cup to a recognized customer from the starting point to the destination. The contribution of this research is a new method for multiple moving obstacle avoidance with a Bayesian approach using a stereo camera. We have developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles and to maneuver the robot. A group of walking people is tracked as a multiple moving obstacle, and the speed, direction and distance of the moving obstacles are estimated by the stereo camera so that the robot can maneuver to avoid collisions. To overcome the inaccuracies of the vision sensor, the Bayesian approach is used to estimate the absence and direction of obstacles. We present the results of an experiment with the service robot, called Srikandi III, which uses our proposed method, and we also evaluate its performance. The experiments show that our proposed method works well, and that the Bayesian approach improves the estimation performance for the absence and direction of moving obstacles.

  9. Dynamic Trajectory Extraction from Stereo Vision Using Fuzzy Clustering

    Science.gov (United States)

    Onishi, Masaki; Yoda, Ikushi

    In recent years, many human tracking studies have been proposed in order to analyze human dynamic trajectories. These studies provide general technology applicable to various fields, such as customer purchase analysis in a shopping environment and safety control at a (railroad) crossing. In this paper, we present a new approach for tracking human positions from stereo images. We use a two-step clustering framework with the k-means method and fuzzy clustering to detect human regions. In the initial clustering, the k-means method quickly forms middle clusters from objective features extracted by stereo vision. In the final clustering, fuzzy c-means groups the middle clusters into human regions based on their attributes. Our proposed method clusters correctly even when many people are close to each other, because fuzzy clustering expresses the ambiguity. The validity of our technique was evaluated in an experiment extracting the trajectories of doctors and nurses in a hospital emergency room.
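
    A simplified sketch of the two-step idea, fast k-means to form middle clusters followed by soft (fuzzy) memberships, is given below. It is not the authors' implementation: the synthetic 3D points stand in for stereo-derived features, and a basic fuzzy-membership formula replaces the full fuzzy c-means iteration.

```python
# Two-step clustering sketch: k-means "middle clusters", then soft memberships.
# The synthetic 3D points stand in for stereo-derived features of people's heads.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
points = np.vstack([rng.normal([0.0, 0.0, 1.7], 0.1, (50, 3)),
                    rng.normal([1.5, 0.2, 1.6], 0.1, (50, 3))])

# Step 1: many small middle clusters, computed quickly with k-means.
middle = KMeans(n_clusters=8, n_init=10, random_state=0).fit(points).cluster_centers_

# Step 2: fuzzy-style memberships of the middle clusters to the final human regions.
def fuzzy_memberships(x, centers, m=2.0):
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)      # rows sum to 1

final_centers = middle[:2]                            # crude two-person initialisation
print(np.round(fuzzy_memberships(middle, final_centers), 2))
```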

  10. Loss of binocular vision as direct cause for misrouting of temporal retinal fibers in albinism.

    Science.gov (United States)

    Banihani, Saleh M

    2015-10-01

    In humans, the nasal retina projects to the contralateral hemisphere, whereas the temporal retina projects ipsilaterally. The nasotemporal line that divides the retina into crossed and uncrossed parts coincides with the vertical meridian through the fovea. This normal projection of the retina is severely altered in albinism, in which the nasotemporal line is shifted into the temporal retina and temporal retinal fibers cross the midline at the optic chiasm. This study proposes the loss of binocular vision as a direct cause for the misrouting of temporal retinal fibers and the temporal shift of the nasotemporal line in albinism. It is supported by many observations that clearly indicate that loss of binocular vision causes uncrossed retinal fibers to cross the midline. This hypothesis may alert scientists and clinicians to find ways to prevent or minimize the loss of binocular vision that may occur in some diseases such as albinism and early squint. Hopefully, this will minimize the misrouting of temporal fibers and improve vision in such diseases. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Lateral geniculate lamination and the corticogeniculate projection: a potential role in binocular vision in the quadrants.

    Science.gov (United States)

    McIlwain, J T

    1995-02-21

    Students of vision have long speculated about the functions of the distinct lamination of the lateral geniculate nucleus and the massive return projection from visual cortex to this thalamic structure. This paper proposes that these features of the visual system reflect, in part at least, its solution to a geometric problem inherent in binocular vision. Points in the visual quadrants are imaged on geometrically non-corresponding retinal points. Two such retinal loci, optically conjugate with a given visual point at one fixation distance or angle, will correspond to no single visual point at other fixation distances or angles. This raises potential problems for visual cortical neurons sensitive to a narrow range of binocular disparities. If these neurons are to function optimally at a variety of fixation distances and angles, their disparity tuning must be variable. It is suggested here that such dynamic disparity tuning is effected by the corticogeniculate projection acting on the segregated ocular representations in the geniculate laminae.

  12. Robotic Arm Control Algorithm Based on Stereo Vision Using RoboRealm Vision

    Directory of Open Access Journals (Sweden)

    SZABO, R.

    2015-05-01

    Full Text Available The goal of this paper is to present a stereo computer vision algorithm intended to control a robotic arm. Specific points on the robot joints are marked and recognized in the software. Using a dedicated set of mathematic equations, the movement of the robot is continuously computed and monitored with webcams. Positioning error is finally analyzed.

  13. Avian binocular vision: It's not just about what birds can see, it's also about what they can't.

    Directory of Open Access Journals (Sweden)

    Luke P Tyrrell

    Full Text Available With the exception of primates, most vertebrates have laterally placed eyes. Binocular vision in vertebrates has been implicated in several functions, including depth perception, contrast discrimination, etc. However, the blind area in front of the head that is proximal to the binocular visual field is often neglected. This anterior blind area is important when discussing the evolution of binocular vision because its relative length is inversely correlated with the width of the binocular field. Therefore, species with wider binocular fields also have shorter anterior blind areas and objects along the mid-sagittal plane can be imaged at closer distances. Additionally, the anterior blind area is of functional significance for birds because the beak falls within this blind area. We tested for the first time some specific predictions about the functional role of the anterior blind area in birds controlling for phylogenetic effects. We used published data on visual field configuration in 40 species of birds and measured beak and skull parameters from museum specimens. We found that birds with proportionally longer beaks have longer anterior blind areas and thus narrower binocular fields. This result suggests that the anterior blind area and beak visibility do play a role in shaping binocular fields, and that binocular field width is not solely determined by the need for stereoscopic vision. In visually guided foragers, the ability to see the beak (and how much of the beak can be seen) varies predictably with foraging habits. For example, fish- and insect-eating specialists can see more of their own beak than birds eating immobile food can. But in non-visually guided foragers, there is no consistent relationship between the beak and anterior blind area. We discuss different strategies (wide binocular fields, large eye movements, and long beaks) that minimize the potential negative effects of the anterior blind area. Overall, we argue that there is more to

  14. Visual tracking in stereo. [by computer vision system

    Science.gov (United States)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.
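
    The model-correction step described above, mapping the 2D prediction error back to the internal 3D model through a generalized inverse Jacobian, reduces to a pseudo-inverse multiplication. The Jacobian and feature positions in the sketch below are made-up numbers used only to show the shape of the computation.

```python
# Bare-bones version of the correction step: the 2D prediction error is mapped back to
# a 3D state correction through a generalized (pseudo-) inverse of the Jacobian.
# The Jacobian and feature positions are made-up numbers.
import numpy as np

J = np.array([[400.0, 0.0, -50.0],      # d(u, v)/d(x, y, z), both cameras stacked
              [0.0, 400.0, -30.0],
              [400.0, 0.0, -70.0],
              [0.0, 400.0, -30.0]])

predicted = np.array([320.0, 240.0, 300.0, 240.0])   # feature positions from the model
observed = np.array([324.0, 238.0, 303.0, 238.0])    # feature positions actually seen

error_2d = observed - predicted
delta_state = np.linalg.pinv(J) @ error_2d           # correction to (x, y, z)
print("state correction:", delta_state)
```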

  15. A computer implementation of a theory of human stereo vision.

    Science.gov (United States)

    Grimson, W E

    1981-05-12

    Recently, Marr & Poggio (1979) presented a theory of human stereo vision. An implementation of that theory is presented, and consists of five steps. (i) The left and right images are each filtered with masks of four sizes that increase with eccentricity; the shape of these masks is given by ∇²G, the Laplacian of a Gaussian function. (ii) Zero crossings in the filtered images are found along horizontal scan lines. (iii) For each mask size, matching takes place between zero crossings of the same sign and roughly the same orientation in the two images, for a range of disparities up to about the width of the mask's central region. Within this disparity range, it can be shown that false targets pose only a simple problem. (iv) The output of the wide masks can control vergence movements, thus causing small masks to come into correspondence. In this way, the matching process gradually moves from dealing with large disparities at a low resolution to dealing with small disparities at a high resolution. (v) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2 1/2-dimensional sketch. To support the adequacy of the Marr-Poggio model of human stereo vision, the implementation was tested on a wide range of stereograms from the human stereopsis literature. The performance of the implementation is illustrated and compared with human perception. Also, statistical assumptions made by Marr & Poggio are supported by comparison with statistics found in practice. Finally, the process of implementing the theory has led to the clarification and refinement of a number of details within the theory; these are discussed in detail.
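
    Steps (i) and (ii) of the implementation can be approximated in a few lines with a Laplacian-of-Gaussian filter and a horizontal sign-change test, as sketched below. The single sigma value is a placeholder rather than one of the four eccentricity-dependent mask sizes, and the random image merely stands in for one eye's view.

```python
# Rough stand-in for steps (i) and (ii): Laplacian-of-Gaussian filtering followed by
# horizontal zero-crossing detection. The sigma and the random image are placeholders.
import numpy as np
from scipy.ndimage import gaussian_laplace

image = np.random.default_rng(0).random((64, 64))       # stands in for one eye's image

filtered = gaussian_laplace(image, sigma=2.0)            # the del-squared-G convolution
sign_change = np.signbit(filtered[:, :-1]) != np.signbit(filtered[:, 1:])
zero_crossings = np.zeros_like(filtered, dtype=bool)
zero_crossings[:, :-1] = sign_change                     # along horizontal scan lines
print("zero-crossing count:", int(zero_crossings.sum()))
```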

  16. A stereo vision-based obstacle detection system in vehicles

    Science.gov (United States)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes the tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane. Then, the position parameters of the obstacles and leading vehicles can be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
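
    The "feature matching plus epipolar constraint" idea for finding initial corresponding pairs can be sketched as below with ORB features filtered by a RANSAC fundamental-matrix estimate. This is an assumption-laden stand-in: the paper's own feature detector, aggregation step and tracking algorithm are not reproduced.

```python
# Sketch of "feature matching + epipolar constraint" for initial corresponding pairs:
# ORB matches filtered by a RANSAC fundamental-matrix estimate (not the paper's code).
import cv2
import numpy as np

def initial_pairs(img_left, img_right):
    orb = cv2.ORB_create(500)
    k1, d1 = orb.detectAndCompute(img_left, None)
    k2, d2 = orb.detectAndCompute(img_right, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Keep only the matches consistent with a single epipolar geometry.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    keep = inliers.ravel().astype(bool)
    return pts1[keep], pts2[keep]
```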

  17. Operational modal analysis on a VAWT in a large wind tunnel using stereo vision technique

    DEFF Research Database (Denmark)

    Najafi, Nadia; Schmidt Paulsen, Uwe

    2017-01-01

    2 x 105. VAWT dynamics were simulated using HAWC2. The stereo vision results and HAWC2 simulations agree within 4% except for mode 3 and 4. The high aerodynamic damping of one of the blades, in flatwise motion, would explain the gap between those two modes from simulation and stereo vision. A set...... in picking very closely spaced modes. Finally, the uncertainty of the 3D displacement measurement was evaluated by applying a generalized method based on the law of error propagation, for a linear camera model of the stereo vision system....

  18. Investigating the Importance of Stereo Displays for Helicopter Landing Simulation

    Science.gov (United States)

    2016-08-11

    Nvidia GeForce GTX 680 graphics card was used to administer the stereo acuity and fusion range tests. The tests were displayed on an Asus VG278HE 3D...a distance of 1 m on the Asus stereo monitor resulted in double vision (i.e., binocular fusion was broken) using the game controller as the circles

  19. The minimum test battery to screen for binocular vision anomalies: report 3 of the BAND study.

    Science.gov (United States)

    Hussaindeen, Jameel Rizwana; Rakshit, Archayeeta; Singh, Neeraj Kumar; Swaminathan, Meenakshi; George, Ronnie; Kapur, Suman; Scheiman, Mitchell; Ramani, Krishna Kumar

    2018-03-01

    This study aims to report the minimum test battery needed to screen non-strabismic binocular vision anomalies (NSBVAs) in a community set-up. When large numbers are to be screened we aim to identify the most useful test battery when there is no opportunity for a more comprehensive and time-consuming clinical examination. The prevalence estimates and normative data for binocular vision parameters were estimated from the Binocular Vision Anomalies and Normative Data (BAND) study, following which cut-off estimates and receiver operating characteristic curves to identify the minimum test battery have been plotted. In the receiver operating characteristic phase of the study, children between nine and 17 years of age were screened in two schools in the rural arm using the minimum test battery, and the prevalence estimates with the minimum test battery were found. Receiver operating characteristic analyses revealed that near point of convergence with penlight and red filter (> 7.5 cm), monocular accommodative facility ( 1.25 prism dioptres) were significant factors with cut-off values for best sensitivity and specificity. This minimum test battery was applied to a cohort of 305 children. The mean (standard deviation) age of the subjects was 12.7 (two) years with 121 males and 184 females. Using the minimum battery of tests obtained through the receiver operating characteristic analyses, the prevalence of NSBVAs was found to be 26 per cent. Near point of convergence with penlight and red filter > 10 cm was found to have the highest sensitivity (80 per cent) and specificity (73 per cent) for the diagnosis of convergence insufficiency. For the diagnosis of accommodative infacility, monocular accommodative facility with a cut-off of less than seven cycles per minute was the best predictor for screening (92 per cent sensitivity and 90 per cent specificity). The minimum test battery of near point of convergence with penlight and red filter, difference between distance and near

  20. Modelling biological depth perception in binocular vision: the local disparity estimation.

    Science.gov (United States)

    Lungeanu, D; Popa, C; Hotca, S; Macovievici, G

    1998-01-01

    This paper presents an approach to solving the correspondence problem in binocular vision and to computing the local horizontal disparity map using a biologically inspired algorithm. A computer application was developed as a tool for implementing, developing, and testing computational models for stereopsis, and also as a framework for integrating the disparity map with other perspective clues. Two models for stereopsis have been implemented. One of them is biologically inspired (it models the behaviour of simple and complex cells from the striate cortex) and the other is the 'classical' model of David Marr and Tomaso Poggio, implemented in order to have a comparison term for the simulation results. The paper details the results obtained on random-dot stereograms and on pairs of real images.

  1. Sensor Fusion - Sonar and Stereo Vision, Using Occupancy Grids and SIFT

    DEFF Research Database (Denmark)

    Plascencia, Alfredo; Bendtsen, Jan Dimon

    2006-01-01

    The main contribution of this paper is to present a sensor fusion approach to scene environment mapping as part of a SDF (Sensor Data Fusion) architecture. This approach involves combined sonar and stereo vision readings. Sonar readings are interpreted using probability density functions to the o...
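
    Occupancy-grid fusion of this kind is commonly done by accumulating log-odds evidence per cell; the minimal sketch below illustrates that mechanism only. The probabilities are invented, and the paper's SDF architecture and SIFT processing are not reproduced.

```python
# Minimal log-odds occupancy-grid fusion: sonar- and stereo-derived evidence are
# accumulated cell by cell. The probabilities are invented illustration values.
import numpy as np

def log_odds(p):
    return np.log(p / (1.0 - p))

grid = np.zeros((100, 100))                 # prior log-odds 0, i.e. p = 0.5 everywhere

def update(grid, cell, p_occupied):
    grid[cell] += log_odds(p_occupied)      # independent-evidence fusion

update(grid, (40, 55), 0.7)                 # sonar: probably occupied
update(grid, (40, 55), 0.8)                 # stereo vision agrees
p = 1.0 / (1.0 + np.exp(-grid[40, 55]))     # back to a probability
print(f"fused occupancy probability: {p:.2f}")
```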

  2. The implementation of depth measurement and related algorithms based on binocular vision in embedded AM5728

    Science.gov (United States)

    Deng, Zhiwei; Li, Xicai; Shi, Junsheng; Huang, Xiaoqiao; Li, Feiyan

    2018-01-01

    Depth measurement is the most basic measurement in machine vision applications such as automatic driving, unmanned aerial vehicles (UAVs) and robots, and it has a wide range of uses. With the development of image processing technology and improvements in hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 and an ordinary low-cost dual camera are used as the hardware platform. The related algorithms for dual camera calibration, image matching and depth calculation have been studied and implemented on this platform, and the hardware design and the soundness of the related algorithms have been tested. The experimental results show that the system can realize simultaneous acquisition of binocular images, switching of left and right video sources, and display of the depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system can reach 25 fps. The experimental results show that the optimal measurement range of the system is from 0.5 to 1.5 meters, and the relative error of the distance measurement is less than 5%. Compared with PC, ARM11 and DMCU hardware platforms, the embedded AM5728 hardware meets real-time depth measurement requirements while preserving image resolution.
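
    The depth calculation for a rectified binocular pair follows the standard relation Z = f·B/d. The focal length and baseline in the sketch below are assumptions chosen only so the numbers fall roughly in the 0.5-1.5 m range quoted above; they are not the AM5728 system's parameters.

```python
# Depth from disparity for a rectified pair: Z = f * B / d. The focal length and
# baseline are assumed values, not the AM5728 system's calibration.
def depth_from_disparity(disparity_px: float, focal_px: float = 700.0,
                         baseline_m: float = 0.06) -> float:
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

for d in (28, 42, 84):
    print(f"disparity {d:3d} px  ->  depth {depth_from_disparity(d):.2f} m")
```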

  3. Distribution of Binocular Vision Anomalies and Refractive Errors in Iranian Children With Learning Disabilities

    Directory of Open Access Journals (Sweden)

    Yekta

    2015-11-01

    Full Text Available Background Visual problems in children contribute to learning disorders, which are one of the most influential factors in learning. Objectives The aim of the present study was to determine the prevalence of refractive and binocular vision errors in children with learning disorders. Patients and Methods In this cross-sectional study, 406 children with learning disorders with a mean age of 8.56 ± 2.4 years were evaluated. Examinations included the determination of refractive errors with an auto-refractometer and static retinoscopy, measurement of visual acuity with a Snellen chart, evaluation of ocular deviation, and measurement of stereopsis, amplitude of accommodation, and near point of convergence. Results Of the 406 participants, 319 (78.6%) were emmetropic in the right eye, 14.5% had myopia, and 6.9% had hyperopia according to cycloplegic refraction. Astigmatism was detected in 75 (18.5%) children. In our study, 89.9% of the children had no deviation, 1.0% had esophoria, and 6.4% had exophoria. In addition, 2.2% of the children had suppression. The near point of convergence ranged from 3 to 18 cm, with a mean of 10.12 ± 3.274 cm. Moreover, the best corrected visual acuity was achieved in 98.5% and 98.0% of the children in the right and left eye, respectively. Conclusions The pattern of visual impairment in learning-impaired children is not much different from that in normal children; however, because these children may not be able to express themselves clearly, lack of correct diagnosis and appropriate treatment has resulted in a marked defect in recognizing visual disorders in these children. Therefore, gaining knowledge of the prevalence of refractive errors in children with learning disorders can be considered the first step in their treatment.

  4. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

    DEFF Research Database (Denmark)

    Kjær-Nielsen, Anders; Jensen, Lars Baunegaard With; Sørensen, Anders Stengaard

    2008-01-01

    In this paper a low level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system, that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400Mbit). The system is used...... extraction, and undistortion and rectification. The latency of the system when running at 2x15fps is 30ms....

  5. Stereo Vision and 3D Reconstruction on a Distributed Memory System

    NARCIS (Netherlands)

    Kuijpers, N.H.L.; Paar, G.; Lukkien, J.J.

    1996-01-01

    An important research topic in image processing is stereo vision. The objective is to compute a 3-dimensional representation of some scenery from two 2-dimensional digital images. Constructing a 3-dimensional representation involves finding pairs of pixels from the two images which correspond to the

  6. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC

    Directory of Open Access Journals (Sweden)

    Zhangwei Chen

    2013-03-01

    Full Text Available This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, which is a configurable soft IP core in charge of managing the image buffer and users’ configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled by the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images. The maximum image size is up to 512 K pixels. This machine is designed to focus on real time stereo vision applications. The stereo vision machine offers good performance and high efficiency in real time. Considering a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained in one second with a 5 × 5 matching window and a maximum of 64 disparity pixels.

  7. SAD-based stereo vision machine on a System-on-Programmable-Chip (SoPC).

    Science.gov (United States)

    Zhang, Xiang; Chen, Zhangwei

    2013-03-04

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, which is a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled by the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images. The maximum image size is up to 512 K pixels. This machine is designed to focus on real time stereo vision applications. The stereo vision machine offers good performance and high efficiency in real time. Considering a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained in one second with a 5 × 5 matching window and a maximum of 64 disparity pixels.
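
    A compact (and deliberately unoptimised) software reference for the SAD disparity search that the FPGA pipeline accelerates is sketched below; the 5 × 5 window and 64-pixel disparity range follow the figures quoted in the abstract, while everything else is an illustrative assumption.

```python
# Unoptimised software reference for SAD block matching (the operation the FPGA
# pipeline accelerates); 5x5 window and 64-pixel disparity range follow the abstract.
import numpy as np

def sad_disparity(left, right, max_disp=64, win=5):
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(max_disp):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(np.int32)
                cost = int(np.abs(patch - cand).sum())     # sum of absolute differences
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```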

  8. Development and Application of the Stereo Vision Tracking System with Virtual Reality

    Directory of Open Access Journals (Sweden)

    Chia-Sui Wang

    2015-01-01

    Full Text Available A virtual reality (VR) driver tracking verification system is created, and its application to stereo image tracking and positioning accuracy is researched in depth. In the research, the fact that the stereo vision system provides image depth is exploited to improve the error rate of image tracking and image measurement. In a VR scenario, the function of collecting the driver's behavioral data was tested. By means of VR, racing operation is simulated, and environmental variables (special weather such as rain and snow) and artificial variables (such as pedestrians suddenly crossing the road, vehicles appearing from blind spots, and roadblocks) are added as the basis for system implementation. In addition, the implementation takes into account human factors related to sudden conditions that can easily occur while driving. The experimental results show that the stereo vision system created in this research has an image depth recognition error rate within 0.011%, and the image tracking error rate may be smaller than 2.5%. In the research, the image recognition function of stereo vision is used to accomplish data collection for driver tracking detection. In addition, the environmental conditions of different simulated real scenarios may also be created through VR.

  9. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    Science.gov (United States)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch for the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, gripper and the bolts used to fix the drop switch. To solve it, we study the binocular vision system of the robot and the characteristics of dismounting and assembling the drop switch, and we propose a coarse-to-fine image registration algorithm based on image correlation, which can significantly improve the positioning precision of the manipulators and bolts. The algorithm performs the following three steps: firstly, the target points are marked in the right and left views, and the system judges whether the target point in the right view can satisfy the lowest registration accuracy by using the similarity of the target points' backgrounds in the right and left views; this is a typical coarse-to-fine strategy. Secondly, the system calculates the epipolar line, and a sequence of candidate regions containing matching points is generated from the neighbourhood of the epipolar line; the optimal matching image is confirmed by computing the similarity between the template image in the left view and each region in the sequence using correlation matching. Finally, the precise coordinates of the target points in the right and left views are calculated from the optimal matching image. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels, the positioning accuracy in the world coordinate system is within 3 mm, and the positioning accuracy of the binocular vision satisfies the requirements of dismounting and assembling the drop switch.
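
    The fine correlation-matching step can be sketched with normalized cross-correlation of a template against a band around the epipolar line. The function below is a simplification under assumed rectified-style geometry (a horizontal band stands in for the epipolar neighbourhood) and is not the authors' algorithm.

```python
# Simplified fine-matching step: correlate a template from the left image against a
# horizontal band of the right image standing in for the epipolar neighbourhood.
import cv2
import numpy as np

def match_along_epipolar(left, right, pt, half=15, band=10):
    x, y = pt
    template = left[y - half:y + half + 1, x - half:x + half + 1]
    strip = right[y - half - band:y + half + band + 1, :]
    scores = cv2.matchTemplate(strip, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)
    # Convert the best top-left match location back to full-image centre coordinates.
    return (best[0] + half, best[1] + (y - half - band) + half)
```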

  10. Spatial-frequency dependent binocular imbalance in amblyopia

    Science.gov (United States)

    Kwon, MiYoung; Wiecek, Emily; Dakin, Steven C.; Bex, Peter J.

    2015-01-01

    While amblyopia involves both binocular imbalance and deficits in processing high spatial frequency information, little is known about the spatial-frequency dependence of binocular imbalance. Here we examined binocular imbalance as a function of spatial frequency in amblyopia using a novel computer-based method. Binocular imbalance at four spatial frequencies was measured with a novel dichoptic letter chart in individuals with amblyopia, or normal vision. Our dichoptic letter chart was composed of band-pass filtered letters arranged in a layout similar to the ETDRS acuity chart. A different chart was presented to each eye of the observer via stereo-shutter glasses. The relative contrast of the corresponding letter in each eye was adjusted by a computer staircase to determine a binocular Balance Point at which the observer reports the letter presented to either eye with equal probability. Amblyopes showed pronounced binocular imbalance across all spatial frequencies, with greater imbalance at high compared to low spatial frequencies (an average increase of 19%, p amblyopia and as an outcome measure for recovery of binocular vision following therapy. PMID:26603125

  11. Near Point of Convergence Break for Different Age Groups in Turkish Population with Normal Binocular Vision: Normative Data

    Directory of Open Access Journals (Sweden)

    Nihat Sayın

    2013-12-01

    Full Text Available Purpose: The purpose of this study was to evaluate the near point of convergence break in the Turkish population with normal binocular vision and to obtain normative data for the near point of convergence break in different age groups. Such a database has not been previously reported. Material and Method: In this prospective study, 329 subjects with normal binocular vision (age range, 3-72 years) were evaluated. The near point of convergence break was measured 4 times repeatedly with an accommodative target. Mean values of near point of convergence break were provided for these age groups (≤10, 11-20, 21-30, 31-40, 41-50, 51-60, and >60 years old). A statistical comparison (one-way ANOVA and post-hoc test) of these values between age groups was performed. A correlation between the near point of convergence break and age was evaluated by Pearson’s correlation test. Results: The mean value for near point of convergence break was 2.46±1.88 (0.5-14 cm). Specifically, 95% of measurements in all subjects were 60 year-old age groups in the near point of convergence break values (p=0.0001, p=0.0001, p=0.006, p=0.001, p=0.004). A mild positive correlation was observed between the increase in near point of convergence break and increase of age (r=0.355, p<0.001). Discussion: The values derived from a relatively large study population to establish a normative database for the near point of convergence break in the Turkish population with normal binocular vision vary with age. This database has not been previously reported. (Turk J Ophthalmol 2013; 43: 402-6)

  12. Stereo-Vision-Based Relative Pose Estimation for the Rendezvous and Docking of Noncooperative Satellites

    Directory of Open Access Journals (Sweden)

    Feng Yu

    2014-01-01

    Full Text Available Autonomous on-orbit servicing is expected to play an important role in future space activities. Acquiring the relative pose information and inertial parameters of the target is one of the key technologies for autonomous capturing. In this paper, an estimation method of relative pose based on stereo vision is presented for the final phase of the rendezvous and docking of noncooperative satellites. The proposed estimation method utilizes the sparse stereo vision algorithm instead of the dense stereo algorithm. The method consists of three parts: (1) body frame reestablishment, which establishes the body-fixed frame for the target satellite using the natural features on the surface and measures the relative attitude based on TRIAD and QUEST; (2) translational parameter estimation, which designs a standard Kalman filter to estimate the translational states and the location of the mass center; (3) rotational parameter estimation, which designs an extended Kalman filter and an unscented Kalman filter, respectively, to estimate the rotational states and all the moment-of-inertia ratios. Compared to the dense stereo algorithm, the proposed method can avoid degeneracy when the target has a high degree of axial symmetry and reduce the number of sensors. The validity of the proposed method is verified by numerical simulations.

  13. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    Energy Technology Data Exchange (ETDEWEB)

    M. D. McKay; M. O. Anderson; R. A. Kinoshita; W. D. Willis

    1999-02-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  14. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Kinoshita, Robert Arthur; Anderson, Matthew Oley; Mckay, Mark D; Willis, Walter David

    1999-04-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off-the-shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  15. An Omnidirectional Stereo Vision-Based Smart Wheelchair

    Directory of Open Access Journals (Sweden)

    Yutaka Satoh

    2007-06-01

    Full Text Available To support safe self-movement of the disabled and the aged, we developed an electric wheelchair that realizes the functions of detecting both the potential hazards in a moving environment and the postures and gestures of a user by equipping an electric wheelchair with the stereo omnidirectional system (SOS), which is capable of acquiring omnidirectional color image sequences and range data simultaneously in real time. The first half of this paper introduces the SOS and the basic technology behind it. To use the multicamera SOS system on an electric wheelchair, we developed a high-speed, high-quality image synthesizing method; a method of recovering SOS attitude changes by using attitude sensors is also introduced. This method allows the SOS to be used without being affected by the mounting attitude of the SOS. The second half of this paper introduces the prototype electric wheelchair actually manufactured and experiments conducted using the prototype. The usability of the electric wheelchair is also discussed.

  16. Systematic construction and control of stereo nerve vision network in intelligent manufacturing

    Science.gov (United States)

    Liu, Hua; Wang, Helong; Guo, Chunjie; Ding, Quanxin; Zhou, Liwei

    2017-10-01

    A systematic method for constructing stereo vision with a neural network is proposed, together with the operation and control mechanism used in actual operation. This method makes effective use of the neural network's learning and memory functions after training with samples. Moreover, the neural network can learn the nonlinear relationship between the stereoscopic vision system and the interior and exterior orientation elements. Aspects worthy of attention include the limited constraints, the scientific selection of the critical group, the operating speed, and the operability in technical terms. The results support our theoretical forecast.

  17. TRISH: the Toronto-IRIS Stereo Head

    Science.gov (United States)

    Jenkin, Michael R. M.; Milios, Evangelos E.; Tsotsos, John K.

    1992-03-01

    This paper introduces and motivates the design of a controllable stereo vision head. The Toronto IRIS stereo head (TRISH) is a binocular camera mount consisting of two AGC, fixed focal length color cameras forming a verging stereo pair. TRISH is capable of version (rotation of the eyes about the vertical axis so as to maintain a constant disparity), vergence (rotation of the eyes about the vertical axis so as to change the disparity), pan (rotation of the entire head about the vertical axis), and tilt (rotation of the eyes about the horizontal axis). One novel characteristic of the design is that the two cameras can rotate about their own optical axes (torsion). Torsion movement makes it possible to minimize the vertical component of the two-dimensional search which is associated with stereo processing in verging stereo systems.
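
    A small geometric aside on the vergence such a head controls: for two symmetrically verging cameras fixating a point straight ahead, the total vergence angle follows from the baseline and fixation distance alone. The sketch below is a generic illustration with assumed parameters, not part of the TRISH software.

      import math

      def total_vergence(baseline_m, fixation_dist_m):
          # Angle between the two optical axes when both cameras fixate a point
          # on the midline at the given distance; each camera turns by half of it.
          return 2.0 * math.atan2(baseline_m / 2.0, fixation_dist_m)

      theta = total_vergence(0.20, 1.0)          # 20 cm baseline, target at 1 m
      print(math.degrees(theta))                 # ~11.4 degrees in total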

  18. Binocular Therapy for Childhood Amblyopia Improves Vision Without Breaking Interocular Suppression.

    Science.gov (United States)

    Bossi, Manuela; Tailor, Vijay K; Anderson, Elaine J; Bex, Peter J; Greenwood, John A; Dahlmann-Noor, Annegret; Dakin, Steven C

    2017-06-01

    Amblyopia is a common developmental visual impairment characterized by a substantial difference in acuity between the two eyes. Current monocular treatments, which promote use of the affected eye by occluding or blurring the fellow eye, improve acuity, but are hindered by poor compliance. Recently developed binocular treatments can produce rapid gains in visual function, thought to be as a result of reduced interocular suppression. We set out to develop an effective home-based binocular treatment system for amblyopia that would engage high levels of compliance but that would also allow us to assess the role of suppression in children's response to binocular treatment. Balanced binocular viewing therapy (BBV) involves daily viewing of dichoptic movies (with "visibility" matched across the two eyes) and gameplay (to monitor compliance and suppression). Twenty-two children (3-11 years) with anisometropic (n = 7; group 1) and strabismic or combined mechanism amblyopia (group 2; n = 6 and 9, respectively) completed the study. Groups 1 and 2 were treated for a maximum of 8 or 24 weeks, respectively. The treatment elicited high levels of compliance (on average, 89.4% ± 24.2% of daily dose in 68.23% ± 12.2% of days on treatment) and led to a mean improvement in acuity of 0.27 logMAR (SD 0.22) for the amblyopic eye. Importantly, acuity gains were not correlated with a reduction in suppression. BBV is a binocular treatment for amblyopia that can be self-administered at home (with remote monitoring), producing rapid and substantial benefits that cannot be solely mediated by a reduction in interocular suppression.

  19. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    Science.gov (United States)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  20. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study

    International Nuclear Information System (INIS)

    Suenaga, Hideyuki; Tran, Huy Hoang; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Takato, Tsuyoshi

    2015-01-01

    This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer was used to create an integral videography image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject’s anatomic site and its 3D-IV image were displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Accurate registration of the volunteer’s anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications. The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users

  1. On-line measurement of ski-jumper trajectory: combining stereo vision and shape description

    Science.gov (United States)

    Nunner, T.; Sidla, O.; Paar, G.; Nauschnegg, B.

    2010-01-01

    Ski jumping has continuously raised major public interest since the early 70s of the last century, mainly in Europe and Japan. The sport undergoes high-level analysis and development, among others, based on biodynamic measurements during the take-off and flight phase of the jumper. We report on a vision-based solution for such measurements that provides a full 3D trajectory of unique points on the jumper's shape. During the jump, synchronized stereo images are taken by a calibrated camera system at video rate. Using methods stemming from video surveillance, the jumper is detected and localized in the individual stereo images, and learning-based deformable shape analysis identifies the jumper's silhouette. The 3D reconstruction of the trajectory is based on standard stereo forward intersection of distinct shape points, such as the helmet top or heel. In the reported study, the measurements are verified by an independent GPS measurement mounted on top of the jumper's helmet, synchronized to the timing of camera exposures. Preliminary estimations report an accuracy of ±20 cm at a 30 Hz imaging frequency within a 40 m trajectory. The system is ready for fully automatic on-line application on ski-jumping sites that allow stereo camera views with an approximate base-distance ratio of 1:3 within the entire area of investigation.

  2. A Framework for Obstacles Avoidance of Humanoid Robot Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2013-04-01

    Full Text Available In this paper, we propose a framework for a multiple moving obstacles avoidance strategy using stereo vision for a humanoid robot in an indoor environment. We assume that this humanoid robot is used as a service robot to deliver a cup to a customer from a starting point to a destination point. We have successfully developed and introduced three main modules to recognize faces, to identify multiple moving obstacles and to initiate a maneuver. A group of people who are walking will be tracked as multiple moving obstacles. A predefined maneuver to avoid obstacles is applied to the robot because of the limited view angle of the stereo camera for detecting multiple obstacles. The contribution of this research is a new method for a multiple moving obstacles avoidance strategy with a Bayesian approach using stereo vision, based on the direction and speed of obstacles. Depth estimation is used to obtain the distance between obstacles and the robot. We present experimental results for the humanoid robot called Gatotkoco II, which used our proposed method, and evaluate its performance. The proposed moving obstacles avoidance strategy was tested empirically and proved effective for the humanoid robot.

  3. Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.

    Science.gov (United States)

    Rumei Zhang; Hao Liu; Jianda Han

    2017-07-01

    Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape with integral cumulative error. The proposed fusion is anticipated to compensate for their shortcomings and improve the tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphology operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into the distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy estimated by averaging the absolute positioning errors between shape sensing and stereo vision is 0.67±0.65 mm, 0.41±0.25 mm, 0.72±0.43 mm for x, y and z, respectively. Results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.

  4. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    Directory of Open Access Journals (Sweden)

    Chien-Lun Hou

    2011-02-01

    Full Text Available In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
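
    The two steps named above, an SAD search along the conjugate epipolar line of the rectified pair followed by stereo triangulation, might be sketched as follows; the patch size, disparity range and the pinhole relation Z = f·B/d are generic assumptions rather than the parameters of the system described.

      import numpy as np

      def sad_disparity(left, right, row, col, patch=7, max_disp=64):
          # Search along the same row of the rectified right image for the shift
          # that minimizes the sum of absolute differences with the left patch.
          h = patch // 2
          tpl = left[row - h:row + h + 1, col - h:col + h + 1].astype(np.float32)
          best_d, best_cost = 0, np.inf
          for d in range(max_disp + 1):
              c = col - d
              if c - h < 0:
                  break
              cand = right[row - h:row + h + 1, c - h:c + h + 1].astype(np.float32)
              cost = np.abs(tpl - cand).sum()
              if cost < best_cost:
                  best_cost, best_d = cost, d
          return best_d

      def depth_from_disparity(disp_px, focal_px, baseline_m):
          # Triangulation for a rectified pair: Z = f * B / d.
          return focal_px * baseline_m / disp_px if disp_px > 0 else float("inf")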

  5. Influence of stereopsis and abnormal binocular vision on ocular and systemic discomfort while watching 3D television.

    Science.gov (United States)

    Kim, S-H; Suh, Y-W; Yun, C; Yoo, E-J; Yeom, J-H; Cho, Y A

    2013-11-01

    To evaluate the degree of three-dimensional (3D) perception and ocular and systemic discomfort in patients with abnormal binocular vision (ABV), and their relationship to stereoacuity while watching a 3D television (TV). Patients with strabismus, amblyopia, or anisometropia older than 9 years were recruited for the ABV group (98 subjects). Normal volunteers were enrolled in the control group (32 subjects). Best-corrected visual acuity, refractive errors, angle of strabismus, and stereoacuity were measured. After watching 3D TV for 20 min, a survey was conducted to evaluate the degree of 3D perception, and ocular and systemic discomfort while watching 3D TV. One hundred and thirty subjects were enrolled in this study. The ABV group included 49 patients with strabismus, 22 with amblyopia, and 27 with anisometropia. The ABV group showed worse stereoacuity at near and distant fixation (Pwatching 3D TV. However, ocular and systemic discomfort was more closely related to better stereopsis.

  6. Binocular astronomy

    CERN Document Server

    Tonkin, Stephen F

    2007-01-01

    This book contains everything an astronomer needs to know about binocular observing. The book takes an in-depth look at the instruments themselves. It has sections on evaluating and buying binoculars and binocular telescopes, their care, mounting, and accessories.

  7. Efficient Optical Flow and Stereo Vision for Velocity Estimation and Obstacle Avoidance on an Autonomous Pocket Drone

    OpenAIRE

    McGuire, K.N.; de Croon, G.C.H.E.; de Wagter, C.; Tuyls, Karl; Kappen, Hilbert

    2017-01-01

    Miniature Micro Aerial Vehicles (MAV) are very suitable for flying in indoor environments, but autonomous navigation is challenging due to their strict hardware limitations. This paper presents a highly efficient computer vision algorithm called Edge-FS for the determination of velocity and depth. It runs at 20 Hz on a 4 g stereo camera with an embedded STM32F4 microprocessor (168 MHz, 192 kB) and uses feature histograms to calculate optical flow and stereo disparity. The stereo-based distanc...

  8. Stereo-vision and 3D reconstruction for nuclear mobile robots

    International Nuclear Information System (INIS)

    Lecoeur-Taibi, I.; Vacherand, F.; Rivallin, P.

    1991-01-01

    In order to perceive the geometric structure of the surrounding environment of a mobile robot, a 3D reconstruction system has been developed. Its main purpose is to provide geometric information to an operator who has to telepilot the vehicle in a nuclear power plant. The perception system is split into two parts: the vision part and the map building part. Vision is enhanced with a fusion process that rejects bad samples over space and time. The vision is based on trinocular stereo-vision which provides a range image of the image contours. It performs line contour correlation on horizontal image pairs and vertical image pairs. The results are then spatially fused in order to have one distance image, with a quality independent of the orientation of the contour. The 3D reconstruction is based on grid-based sensor fusion. As the robot moves and perceives its environment, distance data is accumulated onto a regular square grid, taking into account the uncertainty of the sensor through a sensor measurement statistical model. This approach allows both spatial and temporal fusion. Uncertainty due to sensor position and robot position is also integrated into the absolute local map. This system is modular and generic and can integrate a 2D laser range finder and active vision. (author)

  9. Flexible calibration method for line-structured light based on binocular vision

    Science.gov (United States)

    Zhu, Ye; Wang, Lianpo; Gu, Yonggang; Zhai, Chao; Jin, Yi

    2017-10-01

    A new calibration technique for line-structured light scanning systems is proposed in this study. Compared with existing methods, this technique is more flexible and practical. Complicated operations, a precision calibration target and positioning devices are all unnecessary. Only a blank planar board, which is placed at several (at least two) arbitrary orientations, and an additional camera that is calibrated under the global coordinate system are required. Control points are obtained through an improved binocular intersection algorithm that avoids corresponding-point matching and are then used to calculate the light stripe plane through least-squares fitting. Experimental results indicate that the system calibrated by this technique is able to conduct surface measurement, offering an accuracy better than 32 μm (RMS).
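
    The final fitting step, estimating the light-stripe plane from the triangulated control points by least squares, can be sketched with an SVD-based fit; this is one common formulation with illustrative names, not necessarily the paper's implementation.

      import numpy as np

      def fit_plane(points):
          # Least-squares plane through an Nx3 array of control points.
          # Returns (n, d) with the plane defined by n . X + d = 0.
          centroid = points.mean(axis=0)
          # The right singular vector with the smallest singular value of the
          # centered points is the direction of least variance: the plane normal.
          _, _, vt = np.linalg.svd(points - centroid)
          n = vt[-1]
          return n, -n.dot(centroid)

      # toy check with points scattered around the plane z = 0.5*x + 2
      rng = np.random.default_rng(0)
      xy = rng.uniform(-1, 1, size=(200, 2))
      z = 0.5 * xy[:, 0] + 2 + rng.normal(0, 1e-3, 200)
      n, d = fit_plane(np.column_stack((xy, z)))
      print(n / n[2], d / n[2])        # ~[-0.5, 0, 1] and ~-2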

  10. Performance Evaluation and Software Design for EVA Robotic Assistant Stereo Vision Heads

    Science.gov (United States)

    DiPaolo, Daniel

    2003-01-01

    The purpose of this project was to aid the EVA Robotic Assistant project by evaluating and designing the necessary interfaces for two stereo vision heads - the TracLabs Biclops pan-tilt-verge head, and the Helpmate Zebra pan-tilt-verge head. The first half of the project consisted of designing the necessary software interface so that the other modules of the EVA Robotic Assistant had proper access to all of the functionalities offered by each of the stereo vision heads. This half took most of the project time, due to a lack of ready-made CORBA drivers for either of the heads. Once this was overcome, the evaluation stage of the project began. The second half of the project was to take these interfaces and to evaluate each of the stereo vision heads in terms of usefulness to the project. In the key project areas such as stability and reliability, the Zebra pan-tilt-verge head came out on top. However, the Biclops did have many more advantages over the Zebra, such as: lower power consumption, faster communications, and a simpler, cleaner API. Overall, the Biclops pan-tilt-verge head outperformed the Zebra pan-tilt-verge head.

  11. Creation Greenhouse Environment Map Using Localization of Edge of Cultivation Platforms Based on Stereo Vision

    Directory of Open Access Journals (Sweden)

    A Nasiri

    2017-10-01

    Full Text Available Introduction Stereo vision means the capability of extracting depth based on analysis of two images taken from different angles of one scene. The result of stereo vision is a collection of three-dimensional points which describes the details of the scene in proportion to the resolution of the obtained images. Vehicle automatic steering and crop growth monitoring are two important operations in precision agriculture. The essential aspects of automated steering are the position and orientation of the agricultural equipment in relation to the crop row, detection of obstacles and design of path planning between the crop rows. The developed map can provide this information in real time. Machine vision has the capability to perform these tasks in order to execute operations such as cultivation, spraying and harvesting. In a greenhouse environment, it is possible to develop a map and perform automatic control by detecting and localizing the cultivation platforms as the main moving obstacle. The current work was performed to develop a method based on stereo vision for detecting and localizing platforms, and then providing a two-dimensional map of the cultivation platforms in the greenhouse environment. Materials and Methods In this research, two webcams, made by Microsoft Corporation with a resolution of 960×544, are connected to the computer via USB2 in order to produce a parallel stereo camera. Due to the structure of the cultivation platforms, the number of points in the point cloud is decreased by extracting only the upper and lower edges of the platform. The proposed method in this work aims at extracting the edges based on depth-discontinuity features in the region of the platform edge. By obtaining the disparity image of the platform edges from the rectified stereo images and translating its data to 3D space, the point cloud model of the environment is constructed. Then by projecting the points to the XZ plane and putting local maps together
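
    The map-building steps described, back-projecting the disparity image of the platform edges into 3D points and projecting them onto the XZ plane, might look roughly like the sketch below, assuming a rectified parallel pair with focal length f (pixels), baseline B (meters) and principal point (cx, cy); all names are illustrative.

      import numpy as np

      def disparity_to_points(disp, f_px, baseline_m, cx, cy):
          # Back-project pixels with a valid disparity into the left camera frame
          # (X right, Y down, Z forward) using Z = f*B/d.
          v, u = np.nonzero(disp > 0)
          Z = f_px * baseline_m / disp[v, u]
          X = (u - cx) * Z / f_px
          Y = (v - cy) * Z / f_px
          return np.column_stack((X, Y, Z))

      def xz_occupancy(points, cell_m=0.05, x_range=(-2.0, 2.0), z_range=(0.0, 5.0)):
          # Drop the height coordinate and mark occupied cells of a 2D XZ grid.
          nx = int((x_range[1] - x_range[0]) / cell_m)
          nz = int((z_range[1] - z_range[0]) / cell_m)
          grid = np.zeros((nz, nx), dtype=bool)
          ix = ((points[:, 0] - x_range[0]) / cell_m).astype(int)
          iz = ((points[:, 2] - z_range[0]) / cell_m).astype(int)
          ok = (ix >= 0) & (ix < nx) & (iz >= 0) & (iz < nz)
          grid[iz[ok], ix[ok]] = True
          return grid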

  12. Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation

    Science.gov (United States)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-01-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
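
    A toy illustration of merging single-frame maps into a world map with temporal filtering is given below: per-cell elevation is fused by confidence weighting and obstacle labels accumulate. This is a generic sketch, not JPL's actual map representation.

      import numpy as np

      class TerrainGrid:
          def __init__(self, shape):
              self.elev = np.zeros(shape)          # fused elevation estimate (m)
              self.conf = np.zeros(shape)          # accumulated confidence weight
              self.no_go = np.zeros(shape, bool)   # true once a cell holds an obstacle

          def update(self, rows, cols, elev, conf, obstacle):
              # Fuse one frame's local map: confidence-weighted elevation average;
              # obstacle labels are sticky across frames.
              w_old = self.conf[rows, cols]
              self.elev[rows, cols] = (w_old * self.elev[rows, cols] + conf * elev) / (w_old + conf)
              self.conf[rows, cols] = w_old + conf
              self.no_go[rows, cols] |= obstacle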

  13. Stereo Vision Guiding for the Autonomous Landing of Fixed-Wing UAVs: A Saliency-Inspired Approach

    Directory of Open Access Journals (Sweden)

    Zhaowei Ma

    2016-03-01

    Full Text Available It is an important criterion for unmanned aerial vehicles (UAVs) to land on the runway safely. This paper concentrates on stereo vision localization of a fixed-wing UAV's autonomous landing within global navigation satellite system (GNSS) denied environments. A ground stereo vision guidance system imitating the human visual system (HVS) is presented for the autonomous landing of fixed-wing UAVs. A saliency-inspired algorithm is presented and developed to detect flying UAV targets in captured sequential images. Furthermore, an extended Kalman filter (EKF) based state estimation is employed to reduce localization errors caused by measurement errors of object detection and pan-tilt unit (PTU) attitudes. Finally, stereo-vision-dataset-based experiments are conducted to verify the effectiveness of the proposed visual detection method and error correction algorithm. The compared results between the visual guidance approach and the differential GPS-based approach indicate that the stereo vision system and detection method can achieve a better guiding effect.

  14. Error analysis in a stereo vision-based pedestrian detection sensor for collision avoidance applications.

    Science.gov (United States)

    Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance.
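
    The stereo quantization error analyzed here follows, to first order, from the rectified pinhole relation Z = f·B/d, so a disparity error of Δd pixels gives ΔZ ≈ Z²·Δd/(f·B). A small numeric sketch with assumed (not the paper's) parameters:

      def depth_error(Z_m, focal_px, baseline_m, disp_err_px=0.5):
          # First-order depth error for a rectified pair: dZ ~ Z^2 * dd / (f * B).
          return Z_m ** 2 * disp_err_px / (focal_px * baseline_m)

      for Z in (10.0, 20.0):                              # f = 800 px, B = 0.30 m
          print(Z, round(depth_error(Z, 800, 0.30), 2))   # ~0.21 m and ~0.83 m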

  15. Motorcycles that See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles

    Directory of Open Access Journals (Sweden)

    Gustavo Gil

    2018-01-01

    Full Text Available Advanced driver assistance systems, ADAS, have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists, in fact ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensors for advanced motorcycle safety applications.

  16. Motorcycles that See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles

    Science.gov (United States)

    2018-01-01

    Advanced driver assistance systems, ADAS, have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists, in fact ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensors for advanced motorcycle safety applications. PMID:29351267

  17. Motorcycles that See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles.

    Science.gov (United States)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-01-19

    Advanced driver assistance systems, ADAS, have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists, in fact ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensors for advanced motorcycle safety applications.

  18. Taxi drivers' accidents: how binocular vision problems are related to their rate and severity in terms of the number of victims.

    Science.gov (United States)

    Maag, U; Vanasse, C; Dionne, G; Laberge-Nadeau, C

    1997-03-01

    Recent studies do not agree on the possible relationship between medical conditions and traffic safety; most of them do not control for exposure factors. In this study, we estimate the effect of binocular vision problems on taxi drivers' distributions of crashes (frequency). Moreover, given a crash, we estimate the effect of binocular vision problems on the distributions of the number of victims per crash (dead or injured). Our data and models permit the simultaneous consideration of many variables: age, medical condition, exposure factors measured by distance driven and time behind the wheel, qualitative risk factors, other characteristics of the driver, and crash circumstances in the models for the number of victims. Results show that taxi drivers have a large average number of crashes per year, larger for those with binocular vision problems compared with healthy ones, but not more severe in terms of the number of victims. The driver's past record (number of crashes and demerit points in the previous year) is a significant predictor of the number of crashes. Age is associated significantly with the number and the severity of crashes with older drivers having a better record than the youngest group (30 years old or less).

  19. Stereo vision for fully automatic volumetric flow measurement in urban drainage structures

    Science.gov (United States)

    Sirazitdinova, Ekaterina; Pesic, Igor; Schwehn, Patrick; Song, Hyuk; Satzger, Matthias; Weingärtner, Dorothea; Sattler, Marcus; Deserno, Thomas M.

    2017-06-01

    Overflows in urban drainage structures, or sewers, must be prevented in time to avoid their undesirable consequences. An effective monitoring system able to measure volumetric flow in sewers is needed. Existing state-of-the-art technologies are not robust against harsh sewer conditions and, therefore, cause high maintenance expenses. With the goal of fully automatic, robust and non-contact volumetric flow measurement in sewers, we came up with an original and innovative idea of a vision-based system for volumetric flow monitoring. In contrast to existing video-based monitoring systems, we introduce a second camera to the setup and exploit stereo vision, aiming at automatic calibration to the real world. Depth of the flow is estimated as the difference between the distances from the camera to the water surface and from the camera to the canal's bottom. The camera-to-water distance is recovered automatically using large-scale stereo matching, while the distance to the canal's bottom is measured once upon installation. Surface velocity is calculated using cross-correlation template matching. Individual natural particles in the flow are detected and tracked throughout the sequence of images recorded over a fixed time interval. Having the water level and the surface velocity estimated, and knowing the geometry of the canal, we calculate the discharge. The preliminary evaluation has shown that the average error of depth computation was 3 cm, while the average error of surface velocity was 5 cm/s. Due to the experimental design, these errors are rough estimates: at each acquisition session the reference depth value was measured only once, although the variation in volumetric flow and the gradual transitions between the automatically detected values indicated that the actual depth level varied. We will address this issue in the next experimental session.
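
    The final discharge computation can be sketched for a simple rectangular canal: flow depth is the difference between the once-measured camera-to-bottom distance and the stereo-measured camera-to-water distance, and discharge is cross-section area times mean velocity. The canal width, the surface-to-mean velocity factor k and all names below are assumptions for illustration, not values from the paper.

      def flow_depth(cam_to_bottom_m, cam_to_water_m):
          # Water depth as the difference of the two distances measured from the camera.
          return cam_to_bottom_m - cam_to_water_m

      def discharge_rectangular(depth_m, width_m, surface_velocity_mps, k=0.85):
          # Q = A * v_mean, with the mean velocity approximated as k times the
          # surface velocity obtained from cross-correlation template matching.
          return depth_m * width_m * k * surface_velocity_mps

      q = discharge_rectangular(flow_depth(2.00, 1.70), width_m=0.8,
                                surface_velocity_mps=0.45)
      print(q)                                     # ~0.092 m^3/s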

  20. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    Science.gov (United States)

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  1. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    Directory of Open Access Journals (Sweden)

    Basam Musleh

    2016-09-01

    Full Text Available Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  2. Binocular vision, the optic chiasm, and their associations with vertebrate motor behavior

    Directory of Open Access Journals (Sweden)

    Matz Lennart Larsson

    2015-07-01

    Full Text Available Ipsilateral retinal projections (IRP) in the optic chiasm (OC) vary considerably. Most animal groups possess laterally situated eyes and no or few IRP, but, e.g., cats and primates have frontal eyes and high proportions of IRP. The traditional hypothesis that bifocal vision developed to enable predation or to increase perception in restricted light conditions applies mainly to mammals. The eye-forelimb (EF) hypothesis presented here suggests that the reception of visual feedback of limb movements in the limb-steering cerebral hemisphere was the fundamental mechanism behind OC evolution. In other words, evolutionary change in the OC was necessary to preserve hemispheric autonomy. In the majority of vertebrates, motor processing, tactile, proprioceptive, and visual information involved in steering the hand (limb, paw, fin) is primarily received only in the contralateral hemisphere, while multisensory information from the ipsilateral limb is minimal. Since the involved motor nuclei, somatosensory areas, and vision neurons are situated in the same hemisphere, the neuronal pathways involved will be relatively short, optimizing the size of the brain. That would not have been possible without evolutionary modifications of IRP. Multiple axon-guidance genes, which determine whether axons will cross the midline or not, have shaped the OC anatomy. Evolutionary change in the OC seems to be key to preserving hemispheric autonomy when the body and eye evolve to fit new ecological niches. The EF hypothesis may explain the low proportion of IRP in birds, reptiles, and most fishes; the relatively high proportions of IRP in limbless vertebrates; high proportions of IRP in arboreal, in contrast to ground-dwelling, marsupials; the lack of IRP in dolphins; abundant IRP in primates and most predatory mammals, and why IRP emanate exclusively from the temporal retina. The EF hypothesis seems applicable to vertebrates in general and hence more parsimonious than

  3. Improved Binocular Outcomes Following Binocular Treatment for Childhood Amblyopia.

    Science.gov (United States)

    Kelly, Krista R; Jost, Reed M; Wang, Yi-Zhong; Dao, Lori; Beauchamp, Cynthia L; Leffler, Joel N; Birch, Eileen E

    2018-03-01

    Childhood amblyopia can be treated with binocular games or movies that rebalance contrast between the eyes, which is thought to reduce depth of interocular suppression so the child can experience binocular vision. While visual acuity gains have been reported following binocular treatment, studies rarely report gains in binocular outcomes (i.e., stereoacuity, suppression) in amblyopic children. Here, we evaluated binocular outcomes in children who had received binocular treatment for childhood amblyopia. Data for amblyopic children enrolled in two ongoing studies were pooled. The sample included 41 amblyopic children (6 strabismic, 21 anisometropic, 14 combined; age 4-10 years; ≤4 prism diopters [PD]) who received binocular treatment (20 game, 21 movies; prescribed 9-10 hours treatment). Amblyopic eye visual acuity and binocular outcomes (Randot Preschool Stereoacuity, extent of suppression, and depth of suppression) were assessed at baseline and at 2 weeks. Mean amblyopic eye visual acuity (P game adherence, 100% movie adherence). Depth of suppression was reduced more in children aged <8 years than in those aged ≥8 years (P = 0.004). Worse baseline depth of suppression was correlated with a larger depth of suppression reduction at 2 weeks (P = 0.001). After 2 weeks, binocular treatment in amblyopic children improved visual acuity and binocular outcomes, reducing the extent and depth of suppression and improving stereoacuity. Binocular treatments that rebalance contrast to overcome suppression are a promising additional option for treating amblyopia.

  4. SVMT: a MATLAB toolbox for stereo-vision motion tracking of motor reactivity.

    Science.gov (United States)

    Vousdoukas, M I; Perakakis, P; Idrissi, S; Vila, J

    2012-10-01

    This article presents a Matlab-based stereo-vision motion tracking system (SVMT) for the detection of human motor reactivity elicited by sensory stimulation. It is a low-cost, non-intrusive system supported by Graphical User Interface (GUI) software, and has been successfully tested and integrated in a broad array of physiological recording devices at the Human Physiology Laboratory in the University of Granada. The SVMT GUI software handles data in Matlab and ASCII formats. Internal functions perform lens distortion correction, camera geometry definition, feature matching, as well as data clustering and filtering to extract 3D motion paths of specific body areas. System validation showed geo-rectification errors below 0.5 mm, while feature matching and motion paths extraction procedures were successfully validated with manual tracking and RMS errors were typically below 2% of the movement range. The application of the system in a psychophysiological experiment designed to elicit a startle motor response by the presentation of intense and unexpected acoustic stimuli, provided reliable data probing dynamical features of motor responses and habituation to repeated stimulus presentations. The stereo-geolocation and motion tracking performance of the SVMT system were successfully validated through comparisons with surface EMG measurements of eyeblink startle, which clearly demonstrate the ability of SVMT to track subtle body movement, such as those induced by the presentation of intense acoustic stimuli. Finally, SVMT provides an efficient solution for the assessment of motor reactivity not only in controlled laboratory settings, but also in more open, ecological environments. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  5. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    Science.gov (United States)

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in micro-gripping system with stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, initial vision model and residual compensation model. First, the method of image distortion correction is proposed. Image data required by image distortion correction comes from stereo images of calibration sample. The geometric features of image distortions can be predicted though the shape deformation of lines constructed by grid points in stereo images. Linear and polynomial fitting methods are applied to correct image distortions. Second, shape deformation features of disparity distribution are discussed. The method of disparity distortion correction is proposed. Polynomial fitting method is applied to correct disparity distortion. Third, a microscopic vision model is derived, which consists of two models, i.e., initial vision model and residual compensation model. We derive initial vision model by the analysis of direct mapping relationship between object and image points. Residual compensation model is derived based on the residual analysis of initial vision model. The results show that with maximum reconstruction distance of 4.1mm in X direction, 2.9mm in Y direction and 2.25mm in Z direction, our model achieves a precision of 0.01mm in X and Y directions and 0.015mm in Z direction. Comparison of our model with traditional pinhole camera model shows that two kinds of models have a similar reconstruction precision of X coordinates. However, traditional pinhole camera model has a lower precision of Y and Z coordinates than our model. The method proposed in this paper is very helpful for the micro-gripping system based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.
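
    The linear and polynomial fitting used for the distortion corrections can be illustrated with a one-dimensional toy example: a polynomial is fitted that maps observed (distorted) grid-point coordinates back to their reference positions and is then applied as the correction. The distortion model and values below are invented for illustration and are not the paper's.

      import numpy as np

      def fit_correction(observed, reference, degree=3):
          # Polynomial mapping observed (distorted) coordinates to reference ones.
          return np.polynomial.polynomial.polyfit(observed, reference, degree)

      def correct(values, coeffs):
          return np.polynomial.polynomial.polyval(values, coeffs)

      ref = np.linspace(0.0, 100.0, 21)            # ideal grid-point positions
      obs = ref + 1e-5 * (ref - 50.0) ** 3         # synthetic mild cubic distortion
      coeffs = fit_correction(obs, ref)
      print(np.max(np.abs(correct(obs, coeffs) - ref)))   # residual well below the distortion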

  6. Surface modeling method for aircraft engine blades by using speckle patterns based on the virtual stereo vision system

    Science.gov (United States)

    Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang

    2018-03-01

    A blade is one of the most important components of an aircraft engine. Due to its high manufacturing costs, it is indispensable to come up with methods for repairing damaged blades. In order to obtain a surface model of the blades, this paper proposes a modeling method by using speckle patterns based on the virtual stereo vision system. Firstly, blades are sprayed evenly creating random speckle patterns and point clouds from blade surfaces can be calculated by using speckle patterns based on the virtual stereo vision system. Secondly, boundary points are obtained in the way of varied step lengths according to curvature and are fitted to get a blade surface envelope with a cubic B-spline curve. Finally, the surface model of blades is established with the envelope curves and the point clouds. Experimental results show that the surface model of aircraft engine blades is fair and accurate.

  7. Comparative morphometry of facial surface models obtained from a stereo vision system in a healthy population

    Science.gov (United States)

    López, Leticia; Gastélum, Alfonso; Chan, Yuk Hin; Delmas, Patrice; Escorcia, Lilia; Márquez, Jorge

    2014-11-01

    Our goal is to obtain three-dimensional measurements of craniofacial morphology in a healthy population, using standard landmarks established by a physical-anthropology specialist and picked from computer reconstructions of the face of each subject. To do this, we designed a multi-stereo vision system that will be used to create a data base of human faces surfaces from a healthy population, for eventual applications in medicine, forensic sciences and anthropology. The acquisition process consists of obtaining the depth map information from three points of views, each depth map is obtained from a calibrated pair of cameras. The depth maps are used to build a complete, frontal, triangular-surface representation of the subject face. The triangular surface is used to locate the landmarks and the measurements are analyzed with a MATLAB script. The classification of the subjects was done with the aid of a specialist anthropologist that defines specific subject indices, according to the lengths, areas, ratios, etc., of the different structures and the relationships among facial features. We studied a healthy population and the indices from this population will be used to obtain representative averages that later help with the study and classification of possible pathologies.

  8. The study of human health effect induced by depth information of stereo vision film

    Directory of Open Access Journals (Sweden)

    Xiao Wang

    2015-09-01

    Full Text Available The stereo vision results from the interaction between geometrical optics and visual psychology. Large depth brings discomfort as a result of ghosting and flicker. The relevance of the ratio of jumping-out depth (RJD) and electroencephalogram (EEG) gravity frequency (GF) was explored to reflect human health under different three-dimensional (3D) depth information (mainly the negative disparity) displayed on a three-dimensional television (3D-TV) with shutter glasses. EEG was obtained from 10 volunteers when they were watching 3D film segments with different negative disparities. The brain GF map shows that the depth information has a stronger influence on the frontal lobe than on the occipital lobe. For regression analysis, nonlinear curve fittings of GF to RJD in the Fp1, F3, O2 and T5 channels were mainly performed when RJD ranged from 0 to 3.4, while linear fittings were performed in some special RJD ranges. It also confirms that an RJD above 2.2 may lead to discomfort. Finally, a suitable RJD range for viewing is suggested on the basis of this objective method. The outcomes can be used as guidance to decrease human discomfort induced by 3D production.

  9. Adaptive optics binocular visual simulator to study stereopsis in the presence of aberrations.

    Science.gov (United States)

    Fernández, Enrique J; Prieto, Pedro M; Artal, Pablo

    2010-11-01

    A binocular adaptive optics visual simulator has been devised for the study of stereopsis and of binocular vision in general. The apparatus is capable of manipulating the aberrations of each eye separately while subjects perform visual tests. The correcting device is a liquid-crystal-on-silicon spatial light modulator permitting the control of aberrations in the two eyes of the observer simultaneously in open loop. The apparatus can be operated as an electro-optical binocular phoropter with two micro-displays projecting different scenes to each eye. Stereo-acuity tests (three-needle test and random-dot stereograms) have been programmed for exploring the performance of the instrument. As an example, stereo-acuity has been measured in two subjects in the presence of defocus and/or trefoil, showing a complex relationship between the eye's optical quality and stereopsis. This instrument might serve for a better understanding of the relationship of binocular vision and stereopsis performance and the eye's aberrations.

  10. Binocular astronomy

    CERN Document Server

    Tonkin, Stephen

    2014-01-01

    Binoculars have, for many, long been regarded as an “entry level” observational tool, and relatively few have used them as a serious observing instrument. This is changing! Many people appreciate the relative comfort of two-eyed observing, but those who use binoculars come to realize that they offer more than comfort. The view of the stars is more aesthetically pleasing and therefore binocular observers tend to observe more frequently and for longer periods. Binocular Astronomy, 2nd Edition, extends its coverage of small and medium binoculars to large and giant (i.e., up to 300mm aperture) binoculars and also binoviewers, which brings the work into the realm of serious observing instruments. Additionally, it goes far deeper into the varying optical characteristics of binoculars, giving newcomers and advanced astronomers the information needed to make informed choices on purchasing a pair. It also covers relevant aspects of the physiology of binocular (as in “both eyes”) observation. The first edition ...

  11. Global stereo matching algorithm based on disparity range estimation

    Science.gov (United States)

    Li, Jing; Zhao, Hong; Gu, Feifei

    2017-09-01

    The global stereo matching algorithms are of high accuracy for the estimation of the disparity map, but the time consumed in the optimization process remains a major obstacle, especially for image pairs with high resolution and a large baseline setting. To improve the computational efficiency of the global algorithms, a disparity range estimation scheme for global stereo matching is proposed to estimate the disparity map of rectified stereo images in this paper. The projective geometry of a parallel binocular stereo vision system is investigated to reveal a relationship between the two disparities at each pixel in the rectified stereo images with different baselines, which can be used to quickly obtain a predicted disparity map in a long baseline setting estimated from that in the small one. Then, the drastically reduced disparity ranges at each pixel under a long baseline setting can be determined by the predicted disparity map. Furthermore, the disparity range estimation scheme is introduced into the graph cuts with expansion moves to estimate the precise disparity map, which can greatly save the cost of computing without loss of accuracy in the stereo matching, especially for dense global stereo matching, compared to the traditional algorithm. Experimental results with the Middlebury stereo datasets are presented to demonstrate the validity and efficiency of the proposed algorithm.
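
    The geometric relationship exploited here follows from the rectified parallel model d = f·B/Z: at the same pixel (hence the same depth), the disparities under two baselines satisfy d_long = d_short·(B_long/B_short). A minimal sketch of predicting the long-baseline disparity map and the narrowed per-pixel search range (the margin is an assumed parameter):

      import numpy as np

      def predict_disparity(disp_short, b_short, b_long):
          # d = f*B/Z  =>  d_long = d_short * (B_long / B_short) at each pixel.
          return disp_short * (b_long / b_short)

      def search_range(disp_pred, margin_px=3, d_max=256):
          # Per-pixel [lo, hi] disparity bounds around the prediction, replacing
          # the full 0..d_max range in the global optimization.
          lo = np.clip(np.floor(disp_pred) - margin_px, 0, d_max).astype(int)
          hi = np.clip(np.ceil(disp_pred) + margin_px, 0, d_max).astype(int)
          return lo, hi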

  12. Three-dimensional sensing methodology combining stereo vision and phase-measuring profilometry based on dynamic programming

    Science.gov (United States)

    Lee, Hyunki; Kim, Min Young; Moon, Jeon Il

    2017-12-01

    Phase measuring profilometry and moiré methodology have been widely applied to the three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation known as the correspondence, or 2π-ambiguity, problem. Although a sensing method that combines well-known stereo vision and the phase measuring profilometry (PMP) technique has been developed to overcome this problem, it still requires definite improvement in sensing speed and measurement accuracy. We propose a dynamic programming-based stereo PMP method to acquire more reliable depth information in a relatively short time. The proposed method efficiently fuses information from two stereo sensors in terms of phase and intensity simultaneously, based on a newly defined cost function for dynamic programming. In addition, the important parameters are analyzed from the viewpoint of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of important hardware and software parameters related to the measurement performance and to verify its efficiency, accuracy, and sensing speed, a series of experimental tests were performed with various objects and sensor configurations.
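
    One way to picture a cost that fuses phase and intensity for dynamic programming, in the spirit of what is described here, is a per-pixel mix of absolute phase and intensity differences aggregated along a rectified scanline with a standard two-penalty recursion. The weights and penalties below are illustrative assumptions, not the cost function defined in the paper.

      import numpy as np

      def fused_cost(phase_l, phase_r, inten_l, inten_r, d_max, alpha=0.7):
          # Matching cost along one rectified scanline: weighted mix of absolute
          # phase difference and absolute intensity difference per disparity.
          w = phase_l.shape[0]
          d_max = min(d_max, w - 1)
          cost = np.full((w, d_max + 1), np.inf)
          for d in range(d_max + 1):
              cost[d:, d] = (alpha * np.abs(phase_l[d:] - phase_r[:w - d]) +
                             (1 - alpha) * np.abs(inten_l[d:] - inten_r[:w - d]))
          return cost

      def dp_disparity(cost, p1=0.5, p2=2.0):
          # Scanline dynamic programming: small penalty p1 for +/-1 disparity
          # changes between neighbouring pixels, larger penalty p2 for jumps.
          w, nd = cost.shape
          acc = cost.copy()
          for x in range(1, w):
              prev, m = acc[x - 1], acc[x - 1].min()
              for d in range(nd):
                  best = min(prev[d], m + p2)
                  if d > 0:
                      best = min(best, prev[d - 1] + p1)
                  if d < nd - 1:
                      best = min(best, prev[d + 1] + p1)
                  acc[x, d] = cost[x, d] + best - m
          return acc.argmin(axis=1)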

  13. Stereo and motion cues in preattentive vision processing--some experiments with random-dot stereographic image sequences.

    Science.gov (United States)

    Pong, T C; Kenner, M A; Otis, J

    1990-01-01

    Low-level preattentive vision processing is of special interest since it seems the logical starting point of all vision processing. Exploration of the human visual processing system at this level is, however, extremely difficult, but can be facilitated by the use of stroboscopic presentation of sequences of random-dot stereograms, which contain only local spatial and temporal information and therefore limit the processing of these images to the low level. Four experiments are described in which such sequences were used to explore the relationships between various cues (optical flow, stereo disparity, and accretion and deletion of image points) at the low level. To study these relationships in more depth, especially the resolution of conflicting information among the cues, some of the image sequences presented information not usually encountered in 'natural' scenes. The results indicate that the processing of these cues is undertaken as a set of cooperative processes.

  14. Application of stereo-imaging technology to medical field.

    Science.gov (United States)

    Nam, Kyoung Won; Park, Jeongyun; Kim, In Young; Kim, Kwang Gi

    2012-09-01

    There has been continuous development in the area of stereoscopic medical imaging devices, and many stereoscopic imaging devices have been realized and applied in the medical field. In this article, we review past and current trends pertaining to the application of stereo-imaging technologies in the medical field. We describe the basic principles of stereo vision and visual issues related to it, including visual discomfort, binocular disparities, vergence-accommodation mismatch, and visual fatigue. We also present a brief history of medical applications of stereo-imaging techniques, examples of recently developed stereoscopic medical devices, and patent application trends as they pertain to stereo-imaging medical devices. Three-dimensional (3D) stereo-imaging technology can provide more realistic depth perception to the viewer than conventional two-dimensional imaging technology. Therefore, it allows for a more accurate understanding and analysis of the morphology of an object. Based on these advantages, the significance of stereoscopic imaging in the medical field increases with the growing number of laparoscopic surgeries, and stereo-imaging technology plays a key role in the diagnosis of the detailed morphologies of small biological specimens. The application of 3D stereo-imaging technology to the medical field will help improve surgical accuracy, reduce operation times, and enhance patient safety. Therefore, it is important to develop more enhanced stereoscopic medical devices.

  15. Self calibration of the stereo vision system of the Chang'e-3 lunar rover based on the bundle block adjustment

    Science.gov (United States)

    Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan

    2017-06-01

    The Chang'e-3 was the first lunar soft-landing probe of China. It was composed of the lander and the lunar rover. The Chang'e-3 successfully landed in the northwest of the Mare Imbrium on December 14, 2013. The lunar rover carried out movement, imaging and geological survey tasks after landing. The lunar rover was equipped with a stereo vision system made up of the Navcam system, the mast mechanism and the inertial measurement unit (IMU). The Navcam system was composed of two cameras with fixed focal lengths. The mast mechanism was a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEM) of the surrounding region and plan the moving paths of the lunar rover. The stereo vision system must be calibrated before use. A control field can be built to calibrate the stereo vision system in a laboratory on Earth. However, the parameters of the stereo vision system would change after the launch, the orbital changes, the braking and the landing. Therefore, the stereo vision system should be self calibrated on the moon. An integrated self calibration method based on the bundle block adjustment is proposed in this paper. The bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. With the proposed method, the stereo vision system can be self calibrated under the unknown lunar environment and all parameters can be estimated simultaneously. The experiment was conducted in the ground lunar simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method and the weighted least-squares method. The analysis showed that the accuracy of the proposed method was superior to those of the other methods. Finally, the proposed method was put to practical use to self calibrate the stereo vision system of the lunar rover on the moon.
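
    At the core of any bundle (block) adjustment is a reprojection residual minimised jointly over camera and structure parameters. The sketch below refines the pose and focal length of a single pinhole camera against synthetic observations with scipy's least_squares; it is only a minimal stand-in for the full multi-station adjustment the record describes, and all models, names and values are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points3d, rvec, tvec, f, c):
    """Project 3D points with a simple pinhole model (no distortion)."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    p_cam = points3d @ R.T + tvec
    return f * p_cam[:, :2] / p_cam[:, 2:3] + c

def residuals(params, points3d, observed_uv):
    rvec, tvec = params[:3], params[3:6]
    f, c = params[6], params[7:9]
    return (project(points3d, rvec, tvec, f, c) - observed_uv).ravel()

# Synthetic data: refine one camera's pose and focal length from observations.
rng = np.random.default_rng(0)
pts = rng.uniform([-1, -1, 4], [1, 1, 6], size=(30, 3))
true = np.r_[0.05, -0.02, 0.01, 0.1, -0.1, 0.0, 800.0, 320.0, 240.0]
obs = project(pts, true[:3], true[3:6], true[6], true[7:9])
x0 = np.r_[0, 0, 0, 0, 0, 0, 750.0, 320.0, 240.0]   # deliberately perturbed guess
sol = least_squares(residuals, x0, args=(pts, obs))
print(sol.x.round(3))
```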

  16. A Novel Approach to Calibrating Multifunctional Binocular Stereovision Sensor

    International Nuclear Information System (INIS)

    Xue, T; Zhu, J G; Wu, B; Ye, S H

    2006-01-01

    We present a novel multifunctional binocular stereovision sensor for various three-dimensional (3D) inspection tasks. It not only avoids the so-called correspondence problem of passive stereo vision, but also possesses a unified mathematical model. We also propose a novel approach to estimating all the sensor parameters with a free-position planar reference object. In this technique, the planar pattern can be moved freely by hand. All the camera intrinsic and extrinsic parameters, together with the coefficients of lens radial and tangential distortion, are estimated, and the sensor parameters are calibrated based on the 3D measurement model and optimized with a feature point constraint algorithm using the same views as in the camera calibration stage. The proposed approach greatly reduces the cost of the calibration equipment, and it is flexible and practical for vision measurement. Experiments show that the method has high precision, with the sensor's relative error for measured spatial lengths better than 0.3%.
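
    A common way to realise this kind of free-position planar calibration is the Zhang-style pipeline available in OpenCV: detect the plane pattern in several hand-held poses, calibrate each camera, then estimate the stereo extrinsics. The sketch below follows that generic recipe with an assumed chessboard target and placeholder file names; the record's own pattern, model and optimization differ in detail.

```python
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners of the board (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 20.0  # 20 mm squares

obj_pts, img_pts_l, img_pts_r = [], [], []
for i in range(10):                                # ten free-hand poses of the plane
    img_l = cv2.imread(f"left_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
    img_r = cv2.imread(f"right_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
    ok_l, c_l = cv2.findChessboardCorners(img_l, pattern)
    ok_r, c_r = cv2.findChessboardCorners(img_r, pattern)
    if ok_l and ok_r:
        obj_pts.append(objp); img_pts_l.append(c_l); img_pts_r.append(c_r)

size = img_l.shape[::-1]
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r, size, None, None)
# Joint estimation of the rotation and translation between the two cameras
rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, img_pts_l, img_pts_r, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print("stereo RMS reprojection error:", rms)
```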

  17. Research on dimensional measurement method of mechanical parts based on stereo vision

    Science.gov (United States)

    Zhou, Zhuoyun; Zhang, Xuewu; Shen, Haodong; Zhang, Zhuo; Fan, Xinnan

    2015-10-01

    This paper investigates the key and difficult issues in stereo measurement, including camera calibration, feature extraction, stereo matching and depth computation, and then puts forward a novel matching method that combines seed region growing with SIFT feature matching. It first uses SIFT descriptors as the matching criterion for feature points, and then takes the matched feature points as seed points for region growing to obtain denser depth information. Experiments are conducted to validate the efficiency of the proposed method using standard matching test images, and the proposed method is then applied to the dimensional measurement of mechanical parts. The results show that the measurement error is less than 0.5mm for medium-sized mechanical parts, which can meet the demands of precision measurement.
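
    The seed stage of such a method can be sketched with OpenCV's SIFT detector and a ratio-test match; the subsequent region growing is only indicated by a comment, since its exact rules are not given in the record. Image file names are placeholders.

```python
import cv2

img_l = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img_r = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(img_l, None)
kp_r, des_r = sift.detectAndCompute(img_r, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_l, des_r, k=2)
# Lowe's ratio test keeps only distinctive correspondences
seeds = [m for m, n in matches if m.distance < 0.7 * n.distance]
seed_pairs = [(kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt) for m in seeds]
print(f"{len(seed_pairs)} seed correspondences for region growing")
# Region growing would start from seed_pairs and propagate disparities to
# neighbouring pixels whose photometric similarity stays above a threshold.
```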

  18. A Matlab-Based Testbed for Integration, Evaluation and Comparison of Heterogeneous Stereo Vision Matching Algorithms

    Directory of Open Access Journals (Sweden)

    Raul Correal

    2016-11-01

    Full Text Available Stereo matching is a heavily researched area with a prolific published literature and a broad spectrum of heterogeneous algorithms available in diverse programming languages. This paper presents a Matlab-based testbed that aims to centralize and standardize this variety of both current and prospective stereo matching approaches. The proposed testbed aims to facilitate the application of stereo-based methods to real situations. It allows for configuring and executing algorithms, as well as comparing results, in a fast, easy and friendly setting. Algorithms can be combined so that a series of processes can be chained and executed consecutively, using the output of a process as input for the next; some additional filtering and image processing techniques have been included within the testbed for this purpose. A use case is included to illustrate how these processes are sequenced and its effect on the results for real applications. The testbed has been conceived as a collaborative and incremental open-source project, where its code is accessible and modifiable, with the objective of receiving contributions and releasing future versions to include new algorithms and features. It is currently available online for the research community.

  19. Improved method for stereo vision-based human detection for a mobile robot following a target person

    Directory of Open Access Journals (Sweden)

    Ali, Badar

    2015-05-01

    Full Text Available Interaction between humans and robots is a fundamental need for assistive and service robots. Their ability to detect and track people is a basic requirement for interaction with human beings. This article presents a new approach to human detection and targeted person tracking by a mobile robot. Our work is based on earlier methods that used stereo vision-based tracking linked directly with Hu moment-based detection. The earlier technique assumed that only one person, the target person, is present in the environment, and it was not able to handle more than this one person. In our novel method, we solve this problem by using Haar-based human detection and including a target person selection step before initialising tracking. Furthermore, rather than linking the Kalman filter directly with human detection, we implement the tracking method before the Kalman filter-based estimation. We used the Pioneer 3AT robot, equipped with a stereo camera and sonars, as the test platform.
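
    A minimal stand-in for the detection-plus-filtering part of such a pipeline is shown below, using OpenCV's pretrained full-body Haar cascade and a constant-velocity Kalman filter on the detection centre. The stereo depth stage and the target selection logic of the record are omitted; the camera index and noise values are assumptions.

```python
import cv2
import numpy as np

# Pretrained full-body Haar cascade shipped with OpenCV (assumed adequate here;
# the original work may rely on a different detector or training set).
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_fullbody.xml")

kf = cv2.KalmanFilter(4, 2)                      # state [x, y, vx, vy], measure [x, y]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

cap = cv2.VideoCapture(0)                        # camera index assumed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    people = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    pred = kf.predict()                          # predicted target centre
    if len(people) > 0:
        x, y, w, h = people[0]                   # naive "target person" choice
        centre = np.array([[x + w / 2.0], [y + h / 2.0]], np.float32)
        kf.correct(centre)
    cv2.circle(frame, (int(pred[0, 0]), int(pred[1, 0])), 5, (0, 255, 0), -1)
    cv2.imshow("person tracking", frame)
    if cv2.waitKey(1) == 27:                     # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```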

  20. Generalization of Figure-Ground Segmentation from Binocular to Monocular Vision in an Embodied Biological Brain Model

    Science.gov (United States)

    2011-08-01

    figure and ground the luminance cue breaks down and gestalt contours can fail to pop out. In this case we rely on color, which, having weak stereopsis...have very poor visual acuity, while human vision is rather sharp. This is potentially due, in part, to the longer learning curve human vision...physically changed by conceptual knowledge, allowing us to make certain conceptual generalizations at the speed of visual object recognition [1]. The

  1. Interactive stereo games to improve vision in children with amblyopia using dichoptic stimulation

    Science.gov (United States)

    Herbison, Nicola; Ash, Isabel M.; MacKeith, Daisy; Vivian, Anthony; Purdy, Jonathan H.; Fakis, Apostolos; Cobb, Sue V.; Hepburn, Trish; Eastgate, Richard M.; Gregson, Richard M.; Foss, Alexander J. E.

    2015-03-01

    Amblyopia is a common condition affecting 2% of all children, and traditional treatment consists of either wearing a patch or penalisation. We have developed a treatment using stereo technology, not to provide a 3D image but to allow dichoptic stimulation. This involves presenting an image with the same background to both eyes, but with features of interest removed from the image presented to the normal eye, with the aim of preferentially stimulating visual development in the amblyopic, or lazy, eye. Our system, called I-BiT, can use either a game or a video (DVD) source as input. Pilot studies show that this treatment is effective with short treatment times, and it has proceeded to a randomised controlled clinical trial. The early indications are that the treatment has a high degree of acceptability and correspondingly good compliance.

  2. Acceleration of stereo-matching on multi-core CPU and GPU

    OpenAIRE

    Tian, Xu; Cockshott, Paul; Oehler, Susanne

    2014-01-01

    This paper presents an accelerated version of a dense stereo-correspondence algorithm for two different parallelism enabled architectures, multi-core CPU and GPU. The algorithm is part of the vision system developed for a binocular robot-head in the context of the CloPeMa 1 research project. This research project focuses on the conception of a new clothes folding robot with real-time and high resolution requirements for the vision system. The performance analysis shows th...

  3. Comparison on testability of visual acuity, stereo acuity and colour vision tests between children with learning disabilities and children without learning disabilities in government primary schools.

    Science.gov (United States)

    Abu Bakar, Nurul Farhana; Chen, Ai-Hong

    2014-02-01

    Children with learning disabilities might have difficulty communicating effectively and giving reliable responses as required in various visual function testing procedures. The purpose of this study was to compare the testability of visual acuity using the modified Early Treatment Diabetic Retinopathy Study (ETDRS) and Cambridge Crowding Cards, stereo acuity using the Lang Stereo test II and Butterfly stereo tests, and colour perception using the Colour Vision Test Made Easy (CVTME) and Ishihara's Test for Colour Deficiency (Ishihara Test) between children in mainstream classes and children with learning disabilities in special education classes in government primary schools. A total of 100 primary school children (50 children from mainstream classes and 50 children from special education classes) matched in age were recruited in this cross-sectional comparative study. Testability was determined by the percentage of children who were able to give a reliable response as required by the respective tests. 'Unable to test' was defined as an inappropriate response or lack of cooperation despite the best efforts of the screener. The testability of the modified ETDRS, Butterfly stereo test and Ishihara test for respective visual function tests were found lower among children in special education classes ( P learning disabilities. Modifications of vision testing procedures are essential for children with learning disabilities.

  4. Stereo-vision three-dimensional reconstruction of curvilinear structures imaged with a TEM.

    Science.gov (United States)

    Oveisi, Emad; Letouzey, Antoine; De Zanet, Sandro; Lucas, Guillaume; Cantoni, Marco; Fua, Pascal; Hébert, Cécile

    2018-01-01

    Deriving accurate three-dimensional (3-D) structural information of materials at the nanometre level is often crucial for understanding their properties. Tomography in transmission electron microscopy (TEM) is a powerful technique that provides such information. It is however demanding and sometimes inapplicable, as it requires the acquisition of multiple images within a large tilt arc and hence prolonged exposure to electrons. In some cases, prior knowledge about the structure can tremendously simplify the 3-D reconstruction if incorporated adequately. Here, a novel algorithm is presented that is able to produce a full 3-D reconstruction of curvilinear structures from a stereo pair of TEM images acquired within a small tilt range that spans from only a few to tens of degrees. Reliability of the algorithm is demonstrated through reconstruction of a model 3-D object from its simulated projections, and is compared with that of conventional tomography. This method is experimentally demonstrated for the 3-D visualization of dislocation arrangements in a deformed metallic micro-pillar. Copyright © 2017. Published by Elsevier B.V.

  5. USING STEREO VISION TO SUPPORT THE AUTOMATED ANALYSIS OF SURVEILLANCE VIDEOS

    Directory of Open Access Journals (Sweden)

    M. Menze

    2012-07-01

    Full Text Available Video surveillance systems are no longer a collection of independent cameras, manually controlled by human operators. Instead, smart sensor networks are developed, able to fulfil certain tasks on their own and thus supporting security personnel by automated analyses. One well-known task is the derivation of people’s positions on a given ground plane from monocular video footage. An improved accuracy for the ground position as well as a more detailed representation of single salient people can be expected from a stereoscopic processing of overlapping views. Related work mostly relies on dedicated stereo devices or camera pairs with a small baseline. While this set-up is helpful for the essential step of image matching, the high accuracy potential of a wide baseline and the correspondingly good intersection geometry is not utilised. In this paper we present a stereoscopic approach, working on overlapping views of standard pan-tilt-zoom cameras which can easily be generated for arbitrary points of interest by an appropriate reconfiguration of parts of a sensor network. Experiments are conducted on realistic surveillance footage to show the potential of the suggested approach and to investigate the influence of different baselines on the quality of the derived surface model. Promising estimations of people’s position and height are retrieved. Although standard matching approaches show helpful results, future work will incorporate temporal dependencies available from image sequences in order to reduce computational effort and improve the derived level of detail.

  6. Efficient discrete Gabor functions for robot vision

    Science.gov (United States)

    Weiman, Carl F. R.

    1994-03-01

    A new discrete Gabor function provides subpixel resolution of phase while overcoming many of the computational burdens of current approaches to Gabor function implementation. Applications include hyperacuity measurement of binocular disparity and optic flow for stereo vision. Convolution is avoided by exploiting the band-pass property to subsample the image plane. A general-purpose front-end processor for robot vision, based on a wavelet interpretation of this discrete Gabor function, can be constructed by tessellating and pyramiding the elementary filter. Computational efficiency opens the door to real-time implementation which mimics many properties of the simple and complex cells in the visual cortex.
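
    To make the phase idea concrete, the sketch below builds a quadrature (even/odd) Gabor pair and reads out a local phase with arctan2, the quantity typically used for subpixel disparity estimates. Kernel size, wavelength and the random test patch are illustrative assumptions, not the record's parameters.

```python
import numpy as np

def gabor_pair(size=21, sigma=4.0, wavelength=8.0, theta=0.0):
    """Even (cosine) and odd (sine) Gabor kernels forming a quadrature pair."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    even = envelope * np.cos(2 * np.pi * xr / wavelength)
    odd = envelope * np.sin(2 * np.pi * xr / wavelength)
    return even, odd

even, odd = gabor_pair()
patch = np.random.rand(21, 21)                  # stand-in image patch
resp_e, resp_o = (patch * even).sum(), (patch * odd).sum()
phase = np.arctan2(resp_o, resp_e)              # subpixel local phase estimate
print(f"local phase: {phase:.3f} rad")
```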

  7. In vivo near-infrared fluorescence three-dimensional positioning system with binocular stereovision.

    Science.gov (United States)

    Song, Bofan; Jin, Wei; Wang, Ying; Jin, Qinhan; Mu, Ying

    2014-01-01

    Fluorescence is a powerful tool for in-vivo imaging in living animals. Traditional in-vivo fluorescence imaging equipment is based on single-view two-dimensional imaging systems, which cannot meet the needs for accurate positioning in modern scientific research. A near-infrared in-vivo fluorescence imaging system is demonstrated which has the capability of detecting signals from deep sources and of three-dimensional positioning. A three-dimensional coordinates computing (TDCP) method, including a preprocessing algorithm, is presented based on binocular stereo vision theory to cope with the diffusive nature of light in tissue and the overlap of the emission spectra of fluorescent labels. This algorithm is shown to be effective in extracting targets from multispectral images and determining the spot center of biological interest. Further data analysis indicates that this TDCP method could be used for three-dimensional positioning of fluorescent targets in small animals. The study also suggests that the combination of a high-power laser and a deep-cooling charge-coupled device will provide an attractive approach for fluorescence detection from deep sources. This work demonstrates the potential of binocular stereo vision theory for three-dimensional positioning in living-animal in-vivo imaging.
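
    Once the spot centre has been extracted in both views, its 3D position follows from standard two-view triangulation. The sketch below uses linear (DLT) triangulation with made-up projection matrices and pixel coordinates; it illustrates the geometric step only, not the record's preprocessing or multispectral extraction.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point seen in two cameras."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])     # assumed intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # left camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])   # 10 cm baseline
print(triangulate(P1, P2, (352.0, 240.0), (336.0, 240.0)))      # approx. [0.2, 0, 5]
```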

  8. Stereo vision-based tracking of soft tissue motion with application to online ablation control in laser microsurgery.

    Science.gov (United States)

    Schoob, Andreas; Kundrat, Dennis; Kahrs, Lüder A; Ortmaier, Tobias

    2017-08-01

    Recent research has revealed that image-based methods can enhance accuracy and safety in laser microsurgery. In this study, non-rigid tracking using surgical stereo imaging and its application to laser ablation is discussed. A recently developed motion estimation framework based on piecewise affine deformation modeling is extended by a mesh refinement step and considering texture information. This compensates for tracking inaccuracies potentially caused by inconsistent feature matches or drift. To facilitate online application of the method, computational load is reduced by concurrent processing and affine-invariant fusion of tracking and refinement results. The residual latency-dependent tracking error is further minimized by Kalman filter-based upsampling, considering a motion model in disparity space. Accuracy is assessed in laparoscopic, beating heart, and laryngeal sequences with challenging conditions, such as partial occlusions and significant deformation. Performance is compared with that of state-of-the-art methods. In addition, the online capability of the method is evaluated by tracking two motion patterns performed by a high-precision parallel-kinematic platform. Related experiments are discussed for tissue substitute and porcine soft tissue in order to compare performances in an ideal scenario and in a setup mimicking clinical conditions. Regarding the soft tissue trial, the tracking error can be significantly reduced from 0.72 mm to below 0.05 mm with mesh refinement. To demonstrate online laser path adaptation during ablation, the non-rigid tracking framework is integrated into a setup consisting of a surgical Er:YAG laser, a three-axis scanning unit, and a low-noise stereo camera. Regardless of the error source, such as laser-to-camera registration, camera calibration, image-based tracking, and scanning latency, the ablation root mean square error is kept below 0.21 mm when the sample moves according to the aforementioned patterns. Final

  9. Observation on postoperative binocular visual function reconstruction in intermittent exotropia children with binocular visual training

    Directory of Open Access Journals (Sweden)

    Yan Guo

    2013-07-01

    Full Text Available AIM: To observe the efficacy of binocular vision training on postoperative binocular visual function reconstruction in children with intermittent exotropia. METHODS: From January 2010 to October 2011, 112 children with intermittent exotropia were treated and divided into three groups: the first group used a synoptophore for binocular visual training, the second group used binocular visual training software, and the third group was a control group with no binocular visual training. Postoperative far and near stereoacuity and the eye position orthophoria rate at 1 year after surgery were observed and compared among the three groups. RESULTS: In the two groups of children who received visual training, far and near stereoacuity were significantly higher than in the control group. At the 1-year follow-up after surgery, the eye position orthophoria rate of the control group was significantly lower than that of the other two groups. CONCLUSION: Postoperative binocular visual training in children with intermittent exotropia can significantly promote the reconstruction of binocular vision, reduce the rate of eye position regression, and improve the success rate of surgery.

  10. The zone of comfort: Predicting visual discomfort with stereo displays

    Science.gov (United States)

    Shibata, Takashi; Kim, Joohwan; Hoffman, David M.; Banks, Martin S.

    2012-01-01

    Recent increased usage of stereo displays has been accompanied by public concern about potential adverse effects associated with prolonged viewing of stereo imagery. There are numerous potential sources of adverse effects, but we focused on how vergence–accommodation conflicts in stereo displays affect visual discomfort and fatigue. In one experiment, we examined the effect of viewing distance on discomfort and fatigue. We found that conflicts of a given dioptric value were slightly less comfortable at far than at near distance. In a second experiment, we examined the effect of the sign of the vergence–accommodation conflict on discomfort and fatigue. We found that negative conflicts (stereo content behind the screen) are less comfortable at far distances and that positive conflicts (content in front of screen) are less comfortable at near distances. In a third experiment, we measured phoria and the zone of clear single binocular vision, which are clinical measurements commonly associated with correcting refractive error. Those measurements predicted susceptibility to discomfort in the first two experiments. We discuss the relevance of these findings for a wide variety of situations including the viewing of mobile devices, desktop displays, television, and cinema. PMID:21778252
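
    The dioptric conflict the record manipulates is simply the difference between the reciprocal vergence and accommodation distances. A minimal worked example, with assumed viewing distances, is sketched below.

```python
# Conflict (in diopters) between where the eyes converge and where they focus.
def va_conflict(screen_m, simulated_m):
    accommodation = 1.0 / screen_m      # focus stays on the physical screen (D)
    vergence = 1.0 / simulated_m        # eyes converge on the simulated depth (D)
    return vergence - accommodation     # positive: content in front of the screen

for screen, sim in [(0.5, 0.4), (0.5, 0.7), (3.0, 10.0)]:
    print(f"screen {screen} m, content at {sim} m -> "
          f"conflict {va_conflict(screen, sim):+.2f} D")
```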

  11. Binocular combination of luminance profiles.

    Science.gov (United States)

    Ding, Jian; Levi, Dennis M

    2017-11-01

    Levelt (1965) and binocular combination of second-order contrast-modulated gratings (Experiment 3). We used the model obtained in Experiment 1 to predict the results of Experiments 2 and 3 and the results of our previous studies. Model simulations further refined the contrast space weight and contrast sensitivity functions that are installed in the model, and provide a reasonable account for rebalancing of imbalanced binocular vision by reducing the mean luminance in the dominant eye.

  12. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    Science.gov (United States)

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the position of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so that errors between the actual position and the calculated 3D position of the end-effector exist. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system combining two CCDs serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to
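
    The forward kinematics referred to above is the chained product of homogeneous transforms written in D-H parameters. A generic sketch is given below; the joint table is purely illustrative and is not the robot's actual parameter set.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Single Denavit-Hartenberg homogeneous transform."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_table):
    """Chain the per-joint transforms and return the end-effector position."""
    T = np.eye(4)
    for row in dh_table:
        T = T @ dh_transform(*row)
    return T[:3, 3]

table = [(np.pi / 6, 0.10, 0.25, 0.0),       # (theta, d, a, alpha) per joint, assumed
         (np.pi / 4, 0.00, 0.20, 0.0)]
print(forward_kinematics(table))
```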

  13. Development of a 3D Parallel Mechanism Robot Arm with Three Vertical-Axial Pneumatic Actuators Combined with a Stereo Vision System

    Directory of Open Access Journals (Sweden)

    Hao-Ting Lin

    2011-12-01

    Full Text Available This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot’s end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H∞ tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the position of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so that errors between the actual position and the calculated 3D position of the end-effector exist. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system combining two CCDs serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end

  14. Approaches for Stereo Matching

    Directory of Open Access Journals (Sweden)

    Takouhi Ozanian

    1995-04-01

    Full Text Available This review focuses on the last decade's development of computational stereopsis for recovering three-dimensional information. The main components of the stereo analysis are presented: image acquisition and camera modeling, feature selection, feature matching and disparity interpretation. A brief survey is given of the well-known feature selection approaches, and the estimation parameters for this selection are mentioned. The difficulties in identifying corresponding locations in the two images are explained. Methods for effectively constraining the search for the correct solution of the correspondence problem are discussed, as are strategies for the whole matching process. Reasons for the occurrence of matching errors are considered. Some recently proposed approaches, employing new ideas in the modeling of stereo matching in terms of energy minimization, are described. Acknowledging the importance of computation time for real-time applications, special attention is paid to parallelism as a way to achieve the required level of performance. The development of trinocular stereo analysis as an alternative to the conventional binocular one is described. Finally, a classification of the test images used for verification of stereo matching algorithms is supplied.
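
    For readers unfamiliar with the correspondence problem discussed in this survey, the sketch below implements the simplest area-based matcher: a winner-takes-all SSD block match along the rows of a rectified pair. Window size, disparity range and the synthetic images are assumptions chosen only to make the example self-contained.

```python
import numpy as np

def ssd_disparity(left, right, d_max=16, win=5):
    """Winner-takes-all SSD block matching on a rectified image pair."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), np.int32)
    for y in range(half, h - half):
        for x in range(half + d_max, w - half):
            patch_l = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.sum((patch_l - right[y - half:y + half + 1,
                                             x - d - half:x - d + half + 1]) ** 2)
                     for d in range(d_max + 1)]
            disp[y, x] = int(np.argmin(costs))   # lowest-cost disparity wins
    return disp

left = np.random.rand(40, 60)
right = np.roll(left, -4, axis=1)                # synthetic shift of 4 pixels
print(np.bincount(ssd_disparity(left, right).ravel()).argmax())  # expect ~4
```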

  15. Ground Stereo Vision-Based Navigation for Autonomous Take-off and Landing of UAVs: A Chan-Vese Model Approach

    Directory of Open Access Journals (Sweden)

    Dengqing Tang

    2016-04-01

    Full Text Available This article aims at flying target detection and localization for fixed-wing unmanned aerial vehicle (UAV) autonomous take-off and landing within Global Navigation Satellite System (GNSS)-denied environments. A Chan-Vese model–based approach is proposed and developed for ground stereo vision detection. An Extended Kalman Filter (EKF) is fused into the state estimation to reduce the localization inaccuracy caused by measurement errors of object detection and Pan-Tilt unit (PTU) attitudes. Furthermore, region-of-interest (ROI) setup is conducted to improve the real-time capability. The present work offers real-time, accurate and robust performance compared with our previous works. Both offline and online experimental results validate the effectiveness and better performance of the proposed method against the traditional triangulation-based localization algorithm.

  16. [Binocular coordination during reading].

    Science.gov (United States)

    Bassou, L; Granié, M; Pugh, A K; Morucci, J P

    1992-01-01

    Is there an effect on binocular coordination during reading of oculomotor imbalance (heterophoria, strabismus and inadequate convergence) and of functional lateral characteristics (eye preference and perceptually privileged visual laterality)? Recordings of the binocular eye-movements of ten-year-old children show that oculomotor imbalances occur most often among children whose left visual perceptual channel is privileged, and that these subjects can present optomotor dissociation and manifest lack of motor coordination. Close binocular motor coordination is far from being the norm in reading. The faster reader displays saccades of differing spatial amplitude and the slower reader an oculomotor hyperactivity, especially during fixations. The recording of binocular movements in reading appears to be an excellent means of diagnosing difficulties related to visual laterality and to problems associated with oculomotor imbalance.

  17. Efficient Optical Flow and Stereo Vision for Velocity Estimation and Obstacle Avoidance on an Autonomous Pocket Drone

    NARCIS (Netherlands)

    McGuire, K.N.; de Croon, G.C.H.E.; de Wagter, C.; Tuyls, Karl; Kappen, Hilbert

    Micro Aerial Vehicles (MAVs) are very suitable for flying in indoor environments, but autonomous navigation is challenging due to their strict hardware limitations. This paper presents a highly efficient computer vision algorithm called Edge-FS for the determination of velocity and depth. It runs at

  18. Assessing Binocular Interaction in Amblyopia and Its Clinical Feasibility

    Science.gov (United States)

    Kwon, MiYoung; Lu, Zhong-Lin; Miller, Alexandra; Kazlas, Melanie; Hunter, David G.; Bex, Peter J.

    2014-01-01

    Purpose To measure binocular interaction in amblyopes using a rapid and patient-friendly computer-based method, and to test the feasibility of the assessment in the clinic. Methods Binocular interaction was assessed in subjects with strabismic amblyopia (n = 7), anisometropic amblyopia (n = 6), strabismus without amblyopia (n = 15) and normal vision (n = 40). Binocular interaction was measured with a dichoptic phase matching task in which subjects matched the position of a binocular probe to the cyclopean perceived phase of a dichoptic pair of gratings whose contrast ratios were systematically varied. The resulting effective contrast ratio of the weak eye was taken as an indicator of interocular imbalance. Testing was performed in an ophthalmology clinic in under 8 minutes. We examined the relationships between our binocular interaction measure and standard clinical measures indicating abnormal binocularity, such as interocular acuity difference and stereoacuity. The test-retest reliability of the testing method was also evaluated. Results Compared to normally-sighted controls, amblyopes exhibited significantly reduced effective contrast (∼20%) of the weak eye, suggesting a higher contrast requirement for the amblyopic eye compared to the fellow eye. We found that the effective contrast ratio of the weak eye covaried with standard clinical measures of binocular vision. Our results showed that there was a high correlation between the 1st and 2nd measurements (r = 0.94, pamblyopia. PMID:24959842

  19. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    Science.gov (United States)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building a fine 3D model from outdoor to indoor is becoming a necessity for protecting cultural tourism resources. However, the existing 3D modelling technologies mainly focus on outdoor areas. Actually, a 3D model should contain detailed descriptions of both its appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which can provide a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes of an object or a scene, as well as physical property information such as materials and textures. On the basis of this information, a 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture existing in two typical cultural tourism zones, that is, Tibetan and Qiang ethnic minority villages in Sichuan Jiuzhaigou Scenic Area and Tujia ethnic minority villages in Hubei Shennongjia Nature Reserve, providing a new method and platform for protection of minority cultural characteristics, 3D reconstruction and cultural tourism.

  20. Unmixing binocular signals

    Directory of Open Access Journals (Sweden)

    Sidney R Lehky

    2011-08-01

    Full Text Available Incompatible images presented to the two eyes lead to perceptual oscillations in which one image at a time is visible. Early models portrayed this binocular rivalry as involving reciprocal inhibition between monocular representations of images, occurring at an early visual stage prior to binocular mixing. However, psychophysical experiments found conditions where rivalry could also occur at a higher, more abstract level of representation. In those cases, the rivalry was between image representations dissociated from eye-of-origin information, rather than between monocular representations from the two eyes. Moreover, neurophysiological recordings found the strongest rivalry correlate in inferotemporal cortex, a high-level, predominantly binocular visual area involved in object recognition, rather than in early visual structures. An unresolved issue is how the separate identities of the two images can be maintained after binocular mixing in order for rivalry to be possible at higher levels. Here we demonstrate that after the two images are mixed, they can be unmixed at any subsequent stage using a physiologically plausible nonlinear signal-processing algorithm, non-negative matrix factorization, previously proposed for parsing object parts during object recognition. The possibility that unmixed left and right images can be regenerated at late stages within the visual system provides a mechanism for creating various binocular representations and interactions de novo in different cortical areas for different purposes, rather than inheriting them from early areas. This is a clear example of how nonlinear algorithms can lead to highly non-intuitive behavior in neural information processing.
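
    Non-negative matrix factorization, the algorithm class named in the record, can be sketched with scikit-learn as below. The synthetic sources, number of mixtures and hyperparameters are assumptions, and NMF recovers sources only approximately and up to permutation and scale; this illustrates the technique itself, not the authors' neural model.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
src = rng.random((2, 1024)) ** 3                 # two sparse-ish "images" (flattened)
mix_weights = rng.random((20, 2))                # 20 mixtures with random weights
mixes = mix_weights @ src                        # non-negative mixed observations

model = NMF(n_components=2, init="nndsvda", max_iter=1000)
W = model.fit_transform(mixes)                   # estimated mixing weights
H = model.components_                            # estimated (unmixed) sources
corr = np.corrcoef(np.vstack([src, H]))[:2, 2:]  # true vs estimated source correlation
print(corr.round(2))
```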

  1. Deep vision: an in-trawl stereo camera makes a step forward in monitoring the pelagic community.

    Directory of Open Access Journals (Sweden)

    Melanie J Underwood

    Full Text Available Ecosystem surveys are carried out annually in the Barents Sea by Russia and Norway to monitor the spatial distribution of ecosystem components and to study population dynamics. One component of the survey is mapping the upper pelagic zone using a trawl towed at several depths. However, the current technique with a single codend does not provide the fine-scale spatial data needed to directly study species overlaps. An in-trawl camera system, Deep Vision, was mounted in front of the codend in order to acquire continuous images of all organisms passing. It was possible to identify and quantify most young-of-the-year fish (e.g. Gadus morhua, Boreogadus saida and Reinhardtius hippoglossoides) and zooplankton, including Ctenophora, which are usually damaged in the codend. The system showed potential for measuring the length of small organisms and also recorded the vertical and horizontal positions where individuals were imaged. Young-of-the-year fish were difficult to identify when passing the camera at maximum range and to quantify at high densities. In addition, a large number of fish with damaged opercula were observed passing the Deep Vision camera during heaving, suggesting individuals had become entangled in meshes farther forward in the trawl. This indicates that unknown numbers of fish are probably lost in forward sections of the trawl and that the heaving procedure may influence the number of fish entering the codend, with implications for abundance indices and understanding population dynamics. This study suggests modifications to the Deep Vision system and the trawl to increase our understanding of the population dynamics.

  2. Emotion and Interhemispheric Interactions in Binocular Rivalry

    Directory of Open Access Journals (Sweden)

    K L Ritchie

    2013-10-01

    Full Text Available Previous research has shown that fear-related stimuli presented in peripheral vision are preferentially processed over stimuli depicting other emotions. Furthermore, emotional content can influence dominance duration in binocular rivalry, with the period of dominance for an emotional image (e.g. a fearful face) being significantly longer than for a neutral image (e.g. a neutral face or a house). Experiment 1 of the current study combined these two ideas to investigate the role of emotion in binocular rivalry with face/house pairs viewed in the periphery. The results showed that faces were perceived as more dominant than houses, and fearful faces more so than neutral faces, even when viewed in the periphery. Experiment 2 extended this paradigm to present a rival pair in the periphery in each hemifield, with each eye either viewing the same stimulus in each location (traditional condition), or a different stimulus in each location (Diaz-Caneja condition). The results showed that the two pairs tended to rival in synchrony only in the traditional condition. Taken together, the results show that face dominance and emotion dominance in binocular rivalry persist in the periphery, and that interhemispheric interactions in binocular rivalry depend on an eye- as opposed to an object-based mechanism.

  3. First Peruvian binoculars

    Science.gov (United States)

    Baldwin, Guillermo; Gonzales, Franco; Pérez S., Carlos

    2017-11-01

    In Peru, as in almost all of Latin America, the precision optics industry is almost nonexistent. One reason is the scarcity of human and technological resources. A few years ago, however, a master's and diploma university program in optical engineering was started at our university, Pontificia Universidad Católica del Perú1 (PUCP) in Lima, and an optical shop for precision optics was set up. Some students were trained at CIO in Leon, Mexico. In order to motivate optical business startups in Peru, we planned to demonstrate some possibilities of optical device fabrication through prototypes. We started by building a small reflective telescope for moon observation2, 3, where mirror and ocular polishing and opto-mechanics had priority; aluminum evaporation was included. Now, as a new step, we have developed a binocular which, as far as we know, had never been made in Peru before. This work includes the geometric optics and opto-mechanical designs, the ocular manufacturing, and the characterization of an 8x35 binocular for amateur observation.

  4. Binocular visual function of myopic pseudophakic monovision.

    Science.gov (United States)

    Hayashi, Ken; Yoshida, Motoaki; Sasaki, Hiroshi; Hirata, Akira

    2018-02-20

    To compare the binocular visual function of myopic pseudophakic patients with myopic monovision to that of patients without monovision in a randomized comparative study. METHODS: Sixty patients were randomized to one of two groups: patients whose refraction was targeted to -2.75 diopters (D) in the dominant eye and -1.75 D in the nondominant eye (myopic monovision group), and patients whose refraction was targeted to -2.75 D bilaterally (non-monovision group). Binocular uncorrected and corrected visual acuity at various distances was measured using an all-distance vision tester, and contrast visual acuity and near stereoacuity were examined. In the myopic monovision group mean refraction was -2.74 D in the dominant eyes and -1.94 D in the nondominant eyes, and in the non-monovision group it was -2.96 D bilaterally. Mean binocular uncorrected distance (UDVA) and intermediate visual acuity (UIVA) from 0.5 m to 5.0 m were significantly better in the myopic monovision group than in the non-monovision group (P ≤ 0.0134), while binocular uncorrected near visual acuity (UNVA) at 0.3 m did not differ significantly between groups. The distribution of UIVA and UDVA was significantly better in the myopic monovision group (P ≤ 0.0035). Corrected visual acuity at any distance, photopic and mesopic contrast visual acuity, and stereoacuity did not differ significantly between groups. Patients with myopic monovision exhibited significantly better binocular UIVA and UDVA than those without monovision, while UNVA, corrected visual acuity, contrast sensitivity, and stereoacuity were comparable between groups, suggesting that this method is useful for patients who want to see near and intermediate distances without spectacles.

  5. Evaluation and development of a novel binocular treatment (I-BiT™) system using video clips and interactive games to improve vision in children with amblyopia ('lazy eye'): study protocol for a randomised controlled trial.

    Science.gov (United States)

    Foss, Alexander J; Gregson, Richard M; MacKeith, Daisy; Herbison, Nicola; Ash, Isabel M; Cobb, Sue V; Eastgate, Richard M; Hepburn, Trish; Vivian, Anthony; Moore, Diane; Haworth, Stephen M

    2013-05-20

    Amblyopia (lazy eye) affects the vision of approximately 2% of all children. Traditional treatment consists of wearing a patch over their 'good' eye for a number of hours daily, over several months. This treatment is unpopular and compliance is often low. Therefore results can be poor. A novel binocular treatment which uses 3D technology to present specially developed computer games and video footage (I-BiT™) has been studied in a small group of patients and has shown positive results over a short period of time. The system is therefore now being examined in a randomised clinical trial. Seventy-five patients aged between 4 and 8 years with a diagnosis of amblyopia will be randomised to one of three treatments with a ratio of 1:1:1 - I-BiT™ game, non-I-BiT™ game, and I-BiT™ DVD. They will be treated for 30 minutes once weekly for 6 weeks. Their visual acuity will be assessed independently at baseline, mid-treatment (week 3), at the end of treatment (week 6) and 4 weeks after completing treatment (week 10). The primary endpoint will be the change in visual acuity from baseline to the end of treatment. Secondary endpoints will be additional visual acuity measures, patient acceptability, compliance and the incidence of adverse events. This is the first randomised controlled trial using the I-BiT™ system. The results will determine if the I-BiT™ system is effective in the treatment of amblyopia and will also determine the optimal treatment for future development. ClinicalTrials.gov identifier: NCT01702727.

  6. Binocular vision: defining the historical directions.

    Science.gov (United States)

    Ono, Hiroshi; Wade, Nicholas J; Lillakas, Linda

    2009-01-01

    Ever since Kepler described the image-forming properties of the eye (400 years ago) there has been a widespread belief, which remains to this day, that an object seen with one eye is always seen where it is. Predictions made by Ptolemy in the first century, Alhazen in the eleventh, and Wells in the eighteenth, and supported by Towne, Hering, and LeConte in the nineteenth century, however, are contrary to this claimed veridicality. We discuss how among eighteenth- and nineteenth-century British researchers, particularly Porterfield, Brewster, and Wheatstone, the erroneous idea continued and also why observations made by Wells were neither understood nor appreciated. Finally, we discuss recent data, obtained with a new method, that further support Wells's predictions and which show that a distinction between headcentric and relative direction tasks is needed to appreciate the predictions.

  7. Change in vision, visual disability, and health after cataract surgery.

    Science.gov (United States)

    Helbostad, Jorunn L; Oedegaard, Maria; Lamb, Sarah E; Delbaere, Kim; Lord, Stephen R; Sletvold, Olav

    2013-04-01

    Cataract surgery improves vision and visual functioning; the effect on general health is not established. We investigated whether vision, visual functioning, and general health follow the same trajectory of change in the year after cataract surgery and whether changes in vision explain changes in visual disability and general health. One-hundred forty-eight persons, with a mean (SD) age of 78.9 (5.0) years (70% bilateral surgery), were assessed before and 6 weeks and 12 months after surgery. Visual disability and general health were assessed by the CatQuest-9SF and the Short Form-36. Corrected binocular visual acuity, visual field, stereo acuity, and contrast vision improved (P visual acuity evident up to 12 months (P = 0.034). Cataract surgery had an effect on visual disability 1 year later (P visual disability and general health 6 weeks after surgery. Vision improved and visual disability decreased in the year after surgery, whereas changes in general health and visual functioning were short-term effects. Lack of associations between changes in vision and self-reported disability and general health suggests that the degree of vision changes and self-reported health do not have a linear relationship.

  8. Attentional modulation of binocular rivalry

    Directory of Open Access Journals (Sweden)

    Chris ePaffen

    2011-09-01

    Full Text Available Ever since Wheatstone initiated the scientific study of binocular rivalry, it has been debated whether the phenomenon is under attentional control. In recent years, the issue of attentional modulation of binocular rivalry has seen a revival. Here we review the classical studies as well as recent advances in the study of attentional modulation of binocular rivalry. We show that (1) voluntary control over binocular rivalry is possible, yet limited, (2) both endogenous and exogenous attention influence perceptual dominance during rivalry, (3) diverting attention from rival displays does not arrest perceptual alternations, and (4) rival targets by themselves can also attract attention. From a theoretical perspective, we suggest that attention affects binocular rivalry by modulating the effective contrast of the images in competition. This contrast-enhancing effect of top-down attention is counteracted by a response-attenuating effect of neural adaptation at early levels of visual processing, which weakens the response to the dominant image. Moreover, we conclude that although frontal and parietal brain areas involved in both binocular rivalry and visual attention overlap, an adapting reciprocal inhibition arrangement in early visual cortex is sufficient to trigger switches in perceptual dominance independently of higher-level ‘selection’ mechanisms. Both of these processes are reciprocal and therefore self-balancing, with the consequence that complete attentional control over binocular rivalry can never be realized.

  9. Large Binocular Telescope Project

    Science.gov (United States)

    Hill, John M.; Salinari, Piero

    1998-08-01

    The Large Binocular Telescope (LBT) Project is a collaboration between institutions in Arizona, Germany, Italy, and Ohio. With the addition of the partners from Ohio State and Germany in February 1997, the Large Binocular Telescope Corporation has the funding required to build the full telescope populated with both 8.4 meter optical trains. The first of two 8.4 meter borosilicate honeycomb primary mirrors for LBT was cast at the Steward Observatory Mirror Lab in 1997. The baseline optical configuration of LBT includes adaptive infrared secondaries of a Gregorian design. The F/15 secondaries are undersized to provide a low thermal background focal plane. The interferometric focus combining the light from the two 8.4 meter primaries will reimage the two folded Gregorian focal planes to three central locations. The telescope elevation structure accommodates swing arms which allow rapid interchange of the various secondary and tertiary mirrors. Maximum stiffness and minimal thermal disturbance were important drivers for the design of the telescope in order to provide the best possible images for interferometric observations. The telescope structure accommodates installation of a vacuum bell jar for aluminizing the primary mirrors in-situ on the telescope. The detailed design of the telescope structure was completed in 1997 by ADS Italia (Lecco) and European Industrial Engineering (Mestre). A series of contracts for the fabrication and machining of the telescope structure had been placed at the end of 1997. The final enclosure design was completed at M3 Engineering & Technology (Tucson), EIE and ADS Italia. During 1997, the telescope pier and the concrete ring wall for the rotating enclosure were completed along with the steel structure of the fixed portion of the enclosure. The erection of the steel structure for the rotating portion of the enclosure will begin in the Spring of 1998.

  10. A stereovision model applied in bio-micromanipulation system based on stereo light microscope.

    Science.gov (United States)

    Wang, Yuezong

    2017-12-01

    A bio-micromanipulation system based on a stereo light microscope is designed for manipulating micro-objects with a length scale of tens or hundreds of microns. Reconstructing the world coordinates of points on the surface of micro-objects is an important goal for the micromanipulation. The traditional pinhole camera model is widely applied in macro-scale computer vision. However, this model outputs data with considerable error if it is directly used to reconstruct three-dimensional world coordinates for a stereo light microscope. Therefore, a novel and improved pinhole camera model applied in a bio-micromanipulation system is proposed in this article. The new model is composed of a binocular-pinhole model and an error-correction model. The binocular-pinhole model is used to output the basic world coordinates. The error-correction model is used to correct the errors in the basic world coordinates and outputs the final high-precision world coordinates. The results show that the new model achieves a precision of 0.01 mm in the X direction, 0.01 mm in the Y direction, and 0.015 mm in the Z direction within a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction, and 2.25 mm in the Z direction, and that the traditional pinhole camera model achieves a lower, unsatisfactory precision of about 0.1 mm. © 2017 Wiley Periodicals, Inc.
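
    The record's two-stage idea, a basic geometric estimate followed by a learned correction, can be illustrated with an affine correction fitted to reference points by least squares, as sketched below. The correction model, the simulated bias and all values are assumptions; the paper's actual error-correction model is more specific.

```python
import numpy as np

def fit_correction(raw_xyz, true_xyz):
    """Fit corrected = [x, y, z, 1] @ A from calibration reference points."""
    X = np.hstack([raw_xyz, np.ones((len(raw_xyz), 1))])
    A, *_ = np.linalg.lstsq(X, true_xyz, rcond=None)
    return A

def apply_correction(A, raw_xyz):
    return np.hstack([raw_xyz, np.ones((len(raw_xyz), 1))]) @ A

rng = np.random.default_rng(0)
true = rng.uniform(0, 4, size=(50, 3))                    # mm-scale reference grid
raw = true * [1.02, 0.99, 1.05] + [0.03, -0.02, 0.08]     # simulated systematic bias
A = fit_correction(raw[:40], true[:40])                   # fit on 40 reference points
err = np.abs(apply_correction(A, raw[40:]) - true[40:])   # check on held-out points
print("max residual (mm):", err.max().round(4))
```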

  11. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

    Full Text Available This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision society. However, little or no stereo calibration has been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, it is impossible to use a general stereo calibration algorithm directly. In this paper, we develop a hybrid-type stereo system which is equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured from the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visible-light and gamma sources. The experimental results show that the measurement error is about 3%.
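
    The homography step can be sketched with OpenCV as below: a few corresponding points between the vision view and the radiation view fix a 3x3 planar mapping that transfers any further vision-camera pixel into the radiation camera's image plane. The point coordinates are made-up stand-ins, not data from the record.

```python
import cv2
import numpy as np

vision_pts = np.array([[100, 120], [400, 118], [405, 380], [98, 385]], np.float32)
radiation_pts = np.array([[30, 40], [130, 38], [132, 128], [28, 130]], np.float32)

H, _ = cv2.findHomography(vision_pts, radiation_pts, method=0)

def to_radiation(H, uv):
    """Map a vision-camera pixel into the radiation camera's view."""
    p = H @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]

print(to_radiation(H, (250, 250)).round(1))
```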

  12. Complete Binocular Blindness as the First Manifestation of HIV-Related Cryptococcal Meningitis

    Science.gov (United States)

    Hong, Yun-Jeong; Kim, Ji-Young; Kwon, Seok-Beom; Song, Ki-Bong; Hwang, Sung-Hee; Min, Yang-Ki; Kwon, Ki-Han; Lee, Byung-Chul

    2007-01-01

    Ocular complications of HIV-related cryptococcal meningitis are reasonably common, but complete binocular blindness as the first manifestation of HIV is extremely rare. A 58-year-old man presented with binocular blindness. He experienced blurred vision for 3 days before the blindness. Mild pleocytosis was present in the cerebrospinal fluid, from which Cryptococcus neoformans was cultured. Serology revealed positivity for HIV antibody. He was treated with antifungal and antiretroviral therapy. This case indicates that HIV-related cryptococcal meningitis should be taken into consideration when determining the cause of unexpected sudden binocular blindness. PMID:19513136

  13. Vision by Man and Machine.

    Science.gov (United States)

    Poggio, Tomaso

    1984-01-01

    Studies of stereo vision guide research on how animals see and how computers might accomplish this human activity. Discusses a sequence of algorithms to first extract information from visual images and then to calculate the depths of objects in the three-dimensional world, concentrating on stereopsis (stereo vision). (JN)

  14. Robot Vision Library

    Science.gov (United States)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  15. Vision Examination Protocol for Archery Athletes Along With an Introduction to Sports Vision.

    Science.gov (United States)

    Mohammadi, Seyed Farzad; Aghazade Amiri, Mohammad; Naderifar, Homa; Rakhshi, Elham; Vakilian, Banafsheh; Ashrafi, Elham; Behesht-Nejad, Amir-Houshang

    2016-03-01

    Visual skills are one of the main pillars of the intangible faculties of athletes that can influence their performance. A great number of vision tests are used to assess visual skills, and it would be impractical to perform all of them for every sport. The purpose of this protocol article is to present a relatively comprehensive battery of tests and assessments of the static and dynamic aspects of sight relevant to sports vision, and to introduce the most useful ones for archery. Through an extensive review of the literature, visual skills and the respective tests were listed, such as 'visual acuity', 'contrast sensitivity', 'stereo-acuity', 'ocular alignment', and 'eye dominance'. Athletes were classified as "elite" or "non-elite" based on their past performance. Dominance was considered for eye and hand; binocular or monocular aiming was to be recorded. Illumination conditions were defined to simulate real archery conditions to the extent possible. The full cycle of examinations and their order for each athlete was sketched (and estimated to take 40 minutes). The protocol was piloted in an eye hospital. Female and male archers aged 18 - 38 years who practiced compound and recurve archery with a history of more than 6 months were included. We managed to select and design a customized examination protocol for archery (a sight-intensive, aiming sport), serving skill assessment and research purposes. Our definition of elite and non-elite athletes can help to define sports talent and devise skill development methods as we compare the performance of these two groups. In our pilot, we identified 8 "archery figures" (by hand dominance, eye dominance and binocularity) and highlighted the concept of "congruence" (dominant hand and eye on the same side) in archery performance.

  16. Computer vision

    Science.gov (United States)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  17. Pemancar AM Stereo (AM Stereo Transmitter)

    OpenAIRE

    Amir, Ardi

    2011-01-01

    In this project, an AM STEREO transmitter was built whose signal can be received on both AM MONO and AM STEREO receivers. To obtain broadcast quality, an AM STEREO receiver should be used. The advantage of this transmitter is that its broadcasts are somewhat cleaner than AM MONO, although the AM STEREO transmitter is not hi-fi when compared with an FM STEREO transmitter. The transmitter consists of four main sections: Isolator, Audio Matrix, Phase Modulator, and Amplitude Modulator.

  18. Binocular self-calibration performed via adaptive genetic algorithm based on laser line imaging

    Science.gov (United States)

    Apolinar Muñoz Rodríguez, J.; Mejía Alanís, Francisco Carlos

    2016-07-01

    An accurate technique for performing binocular self-calibration by means of an adaptive genetic algorithm based on a laser line is presented. In this calibration, the genetic algorithm computes the vision parameters through simulated binary crossover (SBX). To carry this out, the genetic algorithm constructs an objective function from the binocular geometry of the laser line projection. The SBX then minimizes the objective function via chromosome recombination. In this algorithm, an adaptive procedure determines the search space from the line position to obtain the minimum convergence. Thus, the chromosomes of vision parameters provide the minimization. The aim of the proposed adaptive genetic algorithm is to calibrate and recalibrate the binocular setup without references or physical measurements. This improves on traditional genetic algorithms, which calibrate the vision parameters by means of references and an unknown search space, because the proposed adaptive algorithm avoids errors produced by missing references. Additionally, three-dimensional vision is carried out based on the laser line position and the vision parameters. The contribution of the proposed algorithm is corroborated by an evaluation of the accuracy of binocular calibration performed via traditional genetic algorithms.
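    The recombination operator named in the record is simulated binary crossover (SBX). A minimal sketch of the SBX operator alone (not the full self-calibration) is shown below; the distribution index and the example parameter vectors are hypothetical.

```python
import numpy as np

def sbx_crossover(p1, p2, eta=2.0, rng=None):
    """Simulated binary crossover (SBX) for real-coded chromosomes.
    p1, p2: parent parameter vectors; eta: distribution index."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2

# Hypothetical chromosomes of vision parameters (e.g. focal length,
# principal point coordinates, baseline); SBX yields two children
# distributed around the parents, controlled by eta.
a = np.array([1200.0, 320.0, 240.0, 85.0])
b = np.array([1180.0, 325.0, 238.0, 92.0])
child1, child2 = sbx_crossover(a, b)
```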

  19. DLP™-based dichoptic vision test system

    Science.gov (United States)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state-of-the-art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (the 0.3% remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  20. the evaluation of vision in children using monocular vision acuity ...

    African Journals Online (AJOL)

    INTRODUCTION. Monocular visual acuity (VA) is a direct method of detecting amblyopia, a leading cause of monocular vision loss in the 2 to 7 years age group. Binocular vision is fully established by 6 months of age, while fusion is consistently strengthened until the age of 6 years, when it is fully developed. It was observed ...

  1. P3-5: Temporal Interactions between Binocular Inputs in Visual Evoked-Potentials

    Directory of Open Access Journals (Sweden)

    Sunkyue Kim

    2012-10-01

    Full Text Available The interaction between neural activity driven by inputs through the two eyes was examined using visual evoked potentials (VEP) in normal human subjects. VEP recordings were obtained at the occipital electrodes using binocularly asynchronous pattern-reversal checkerboard stimuli: the pattern-reversal times for the two eyes differed by 0, ±50, ±150, or ±350 ms, with a positive stimulus-onset asynchrony (SOA) meaning that the right-eye reversal occurred first. For comparison, monocular VEPs were also obtained using trial conditions in which the checkerboard pattern-reversals were shown to only one eye, while a blank field was shown to the other. The VEPs of the various trial conditions were analyzed using both temporal and frequency analysis methods. Three observations were made: first, the N75 amplitude was significantly reduced in the ±50 ms SOA conditions. Second, in the ±150 ms and ±350 ms SOA conditions, a negative potential was observed over the period when the stimuli were binocularly incongruent. Third, the alpha-band power was reduced and the beta-band power increased in the asynchronous conditions, compared to synchronous pattern-reversal. These findings show that the activities of binocular neurons in the visual cortices are modulated by binocular incongruity in the asynchronous pattern-reversal stimuli. Our stimuli may prove valuable in elucidating the neural mechanisms of integration of binocular visual inputs, especially when combined with brain source-localization techniques and compared between normal subjects and patients with dysfunction in binocular vision.

  2. Development and Matching of Binocular Orientation Preference in Mouse V1

    Directory of Open Access Journals (Sweden)

    Basabi Bhaumik

    2014-07-01

    Full Text Available Eye-specific thalamic inputs converge in the primary visual cortex (V1) and form the basis of binocular vision. For normal binocular percepts, such as depth and stereopsis, binocularly matched orientation preference between the two eyes is required. A critical period for binocular matching of orientation preference during normal development in mice has been reported in the literature. Using a reaction-diffusion model, we present the development of receptive fields and orientation selectivity in mouse V1 and investigate the binocular matching of orientation preference during the critical period. At the onset of the critical period, the preferred orientations of the modeled cells are mostly mismatched between the two eyes; the mismatch decreases and reaches levels reported in juvenile mice by the end of the critical period. At the end of the critical period, 39% of the cells in the binocular zone of our model cortex are orientation selective, compared with around 40% of cortical cells reported as orientation selective in mouse V1 in the literature. The starting and closing times of the critical period determine the orientation preference alignment between the two eyes and the orientation tuning of cortical cells. The absence of near-neighbor interaction among cortical cells during the development of thalamo-cortical wiring causes a salt-and-pepper organization of the orientation preference map in mice. It also results in a much lower percentage of orientation-selective cells in mice compared with ferrets and cats, which have organized orientation maps with pinwheels.

  3. Important areas of the central binocular visual field for daily functioning in the visually impaired.

    Science.gov (United States)

    Tabrett, Daryl R; Latham, Keziah

    2012-03-01

    To determine the areas of the central binocular visual field that correspond best with self-reported vision-related activity limitations (VRAL) in individuals with visual impairment, using a clinically relevant and accessible technique. One hundred participants with mixed visual impairment undertook binocular threshold visual field testing using a Humphrey 30-2 SITA Fast program. The Activity Inventory (AI) was administered to assess overall, mobility-related and reading-related self-reported VRAL as part of a face-to-face clinical interview. Different eccentricities of the binocular field (central 5, 5-10, and 10-30°) were compared to self-reported VRAL in bivariate analyses and further explored using multivariate analyses. All areas of the binocular visual field were significantly associated with self-reported VRAL in bivariate analyses, with greater field loss associated with increased VRAL, demonstrating a clear relationship between binocular visual fields and self-reported VRAL in people with visual impairment. Central binocular fields can be measured using a widely available threshold test in order to understand the likely functional limitations of those with vision loss, particularly in mobility tasks. Self-reported VRAL can be estimated using the regression equations and graphs provided, and difficulty levels in specific tasks can be determined. Ophthalmic & Physiological Optics © 2012 The College of Optometrists.

  4. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of facilities. 2D and range image data acquired from low-visibility environments are important for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems. However, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images from low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, Range-Gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, the system can be adopted in robot-vision systems by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been applied in target recognition and in harsh environments, such as fog and underwater vision. Also, this technology has been

  5. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    International Nuclear Information System (INIS)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of facilities. 2D and range image data acquired from low-visibility environments are important for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems. However, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images from low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, Range-Gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, the system can be adopted in robot-vision systems by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been applied in target recognition and in harsh environments, such as fog and underwater vision. Also, this technology has been

  6. Realization for Chinese vehicle license plate recognition based on computer vision and fuzzy neural network

    Science.gov (United States)

    Yang, Yun; Zhang, Weigang; Guo, Pan

    2010-07-01

    The proposed approach in this paper is divided into three steps, namely the location of the plate, the segmentation of the characters, and the recognition of the characters. The location algorithm first uses two video captures to obtain high-quality images and estimates the size of the vehicle plate in these images via a parallel binocular stereo vision algorithm. The segmentation method then extracts the edges of the vehicle plate based on a second-generation non-orthogonal Haar wavelet transformation and locates the vehicle plate according to the estimate from the first step. Finally, the recognition algorithm is realized with a radial basis function fuzzy neural network. Experiments have been conducted on real images. The results show that this method can decrease the recognition error rate for Chinese license plates.

  7. Interactions between binocular rivalry and Gestalt formation.

    NARCIS (Netherlands)

    Weert, C.M.M. de; Snoeren, P.R.; Koning, A.R.

    2005-01-01

    A question raised long ago in binocular rivalry research is whether the phenomenon of binocular rivalry is determined purely by local stimulus properties or whether global stimulus properties also play a role. More specifically: do coherent features in a stimulus influence rivalrous behavior?

  8. Stereo Painting Display Devices

    Science.gov (United States)

    Shafer, David

    1982-06-01

    The Spanish Surrealist artist Salvador Dali has recently perfected the art of producing two paintings which are stereo pairs. Each painting is separately quite remarkable, presenting a subject with the vivid realism and clarity for which Dali is famous. Due to the surrealistic themes of Dali's art, however, the subjects presented with such naturalism exist only in his imagination. Despite this considerable obstacle to producing stereo art, Dali has managed to paint stereo pairs that display subtle differences of coloring and lighting, in addition to the essential perspective differences. These stereo paintings require a display method that allows the viewer to experience stereo fusion but does not degrade the high quality of the artwork. This paper reviews several display methods that seem promising in terms of economy, size, adjustability, and image quality.

  9. On the contribution of binocular disparity to the long-term memory for natural scenes.

    Directory of Open Access Journals (Sweden)

    Matteo Valsecchi

    Full Text Available Binocular disparity is a fundamental dimension defining the input we receive from the visual world, along with luminance and chromaticity. In a memory task involving images of natural scenes, we investigated whether binocular disparity enhances long-term visual memory. We found that forest images studied in the presence of disparity for relatively long times (7 s) were remembered better than with 2D presentation. This enhancement was not evident for other categories of pictures, such as images containing cars and houses, which are mostly identified by the presence of distinctive artifacts rather than by their spatial layout. Evidence from a further experiment indicates that observers do not retain a trace of stereo presentation in long-term memory.

  10. The Role of Binocular Disparity in Rapid Scene and Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Matteo Valsecchi

    2013-04-01

    Full Text Available We investigated the contribution of binocular disparity to the rapid recognition of scenes and simpler spatial patterns using a paradigm combining backward masked stimulus presentation and short-term match-to-sample recognition. First, we showed that binocular disparity did not contribute significantly to the recognition of briefly presented natural and artificial scenes, even when the availability of monocular cues was reduced. Subsequently, using dense random dot stereograms as stimuli, we showed that observers were in principle able to extract spatial patterns defined only by disparity under brief, masked presentations. Comparing our results with the predictions from a cue-summation model, we showed that combining disparity with luminance did not per se disrupt the processing of disparity. Our results suggest that the rapid recognition of scenes is mediated mostly by a monocular comparison of the images, although we can rely on stereo in fast pattern recognition.

  11. Analysis on detection accuracy of binocular photoelectric instrument optical axis parallelism digital calibration instrument

    Science.gov (United States)

    Ying, Jia-ju; Yin, Jian-ling; Wu, Dong-sheng; Liu, Jie; Chen, Yu-dan

    2017-11-01

    Low-light-level night vision devices and thermal infrared imaging binocular photoelectric instruments are widely used. Misalignment of a binocular instrument's optical axes causes symptoms such as dizziness and nausea in the observer when the instrument is used for a long time. A digital calibration instrument for binocular photoelectric equipment was developed for detecting optical axis parallelism, so that the optical axis deviation can be measured quantitatively. As a testing instrument, its precision must be much higher than that of the instrument under test. This paper analyzes the factors that influence detection accuracy: factors exist in each link of the testing process that affect the precision of the detecting instrument, and they can be divided into two categories, those that directly affect the position of the reticle image and those that affect the calculation of the center of the reticle image. The synthesized error is calculated, and the errors are further distributed reasonably to ensure the accuracy of the calibration instrument.

  12. JAVA Stereo Display Toolkit

    Science.gov (United States)

    Edmonds, Karina

    2008-01-01

    This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It also supports anaglyph and special stereo hardware using the same API (application-program interface), and has the ability to simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that accomplishes simply the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, 3D cursor, or overlays, all of which can be built using this toolkit.

  13. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking.

    Science.gov (United States)

    Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku

    2017-08-09

    This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512 × 512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system.

  14. Changes in the functional binocular status of older children and adults with previously untreated infantile esotropia following late surgical realignment.

    Science.gov (United States)

    Murray, Anthony David Neil; Orpen, Jane; Calcutt, Carolyn

    2007-04-01

    Most studies of infantile esotropia concern patients diagnosed in infancy and treated throughout childhood. This prospective study addresses changes in the functional binocular status of older children and adults with previously untreated infantile esotropia following late surgical realignment. Seventeen patients aged 8 years or more with a history of untreated esotropia occurring within the first 6 months of life were included in this study. All had monocular optokinetic asymmetry, a visual acuity of 20/30 or better in the worse eye, and binocular function assessment preoperatively and postoperatively. All were surgically aligned within 8Δ of orthotropia. None had neurologic disease. Preoperatively, all 17 patients demonstrated a monocular response to Bagolini lenses, while postoperatively 15 (88%) of the 17 demonstrated binocular function with Bagolini lenses (in that they could constantly perceive the major part of both arms of the X generated by the Bagolini lenses) and 13/17 (76%) demonstrated an increase in the binocular field. All 17 had no sensory fusion, either preoperatively or postoperatively, when tested with the Worth 4-Dot test or synoptophore, and no stereopsis with the Titmus stereo test. Older children and adults with previously untreated infantile esotropia derive some functional benefits following late surgical realignment. The degree of binocular function may be lower than that achieved in patients aligned before 24 months of age.

  15. Augmented reality glass-free three-dimensional display with the stereo camera

    Science.gov (United States)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    An improved method for augmented reality (AR) glass-free three-dimensional (3D) display based on a stereo camera, which presents parallax content from different angles through a lenticular lens array, is proposed. Compared with previous implementations of AR techniques based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method realizes glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can obtain abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on a stereo camera can realize AR glass-free 3D display, and that both the virtual objects and the real scene have realistic and obvious stereo performance.

  16. Stereo vision by self-organization.

    Science.gov (United States)

    Reimann, D; Haken, H

    1994-01-01

    We propose a new algorithm for stereoscopic depth perception, where the depth map is the momentary state of a dynamic process. To each image point we assign a set of possible disparity values. In a dynamic process with competition and cooperation, the correct disparity value is selected for each image point. Therefore, we solve the correspondence problem by a dynamic, self-organizing process, the structure of which shows analogies to the human visual system. The algorithm can be implemented in a massive parallel manner and yields good results for either artificial or natural images.

  17. A computational theory of human stereo vision.

    Science.gov (United States)

    Marr, D; Poggio, T

    1979-05-23

    An algorithm is proposed for solving the stereoscopic matching problem. The algorithm consists of five steps: (1) Each image is filtered at different orientations with bar masks of four sizes that increase with eccentricity; the equivalent filters are one or two octaves wide. (2) Zero-crossings in the filtered images, which roughly correspond to edges, are localized. Positions of the ends of lines and edges are also found. (3) For each mask orientation and size, matching takes place between pairs of zero-crossings or terminations of the same sign in the two images, for a range of disparities up to about the width of the mask's central region. (4) Wide masks can control vergence movements, thus causing small masks to come into correspondence. (5) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2 1/2-D sketch. It is shown that this proposal provides a theoretical framework for most existing psychophysical and neurophysiological data about stereopsis. Several critical experimental predictions are also made, for instance about the size of Panum's area under various conditions. The results of such experiments would tell us whether, for example, cooperativity is necessary for the matching process.
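    Steps (1) and (2) of the theory filter each image and localize zero-crossings. The sketch below is a simplified stand-in that uses an isotropic Laplacian-of-Gaussian filter rather than the oriented bar masks described in the record, and marks horizontal sign changes; scipy is assumed to be available.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def zero_crossings(image, sigma):
    """Filter with a Laplacian of Gaussian at scale `sigma` and mark pixels
    where the filtered response changes sign between horizontal neighbours."""
    f = gaussian_laplace(image.astype(float), sigma)
    zc = np.zeros(f.shape, dtype=bool)
    zc[:, 1:] = np.sign(f[:, 1:]) != np.sign(f[:, :-1])
    return zc, np.sign(f)

# Coarse-to-fine matching (steps 3-4) would pair zero-crossings of the same
# contrast sign between the two images, searching disparities up to roughly
# the width of the filter's central region, starting with the largest sigma.
```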

  18. Parametric Coding of Stereo Audio

    Directory of Open Access Journals (Sweden)

    Erik Schuijers

    2005-06-01

    Full Text Available Parametric-stereo coding is a technique to efficiently code a stereo audio signal as a monaural signal plus a small amount of parametric overhead to describe the stereo image. The stereo properties are analyzed, encoded, and reinstated in a decoder according to spatial psychoacoustical principles. The monaural signal can be encoded using any (conventional) audio coder. Experiments show that the parameterized description of spatial properties enables a highly efficient, high-quality stereo audio representation.
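    As a rough illustration of the idea of a mono downmix plus a small parametric overhead (not the codec described in the record), the sketch below stores one inter-channel level difference per block and re-pans the mono signal on decoding; the block size and the single-parameter description are simplified assumptions.

```python
import numpy as np

def ps_encode(left, right, block=1024):
    """Downmix to mono and keep one inter-channel level difference (dB)
    per block as a crude stand-in for parametric-stereo side information."""
    mono = 0.5 * (left + right)
    ild = []
    for i in range(0, len(left), block):
        el = np.sum(left[i:i + block] ** 2) + 1e-12
        er = np.sum(right[i:i + block] ** 2) + 1e-12
        ild.append(10.0 * np.log10(el / er))
    return mono, np.array(ild)

def ps_decode(mono, ild, block=1024):
    """Re-pan the mono signal block by block from the stored level differences."""
    left, right = np.empty_like(mono), np.empty_like(mono)
    for k, i in enumerate(range(0, len(mono), block)):
        g = 10.0 ** (ild[k] / 20.0)          # linear amplitude ratio L/R
        gl, gr = g / (1.0 + g), 1.0 / (1.0 + g)
        left[i:i + block] = 2.0 * gl * mono[i:i + block]
        right[i:i + block] = 2.0 * gr * mono[i:i + block]
    return left, right
```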

  19. Combining Motion-Induced Blindness with Binocular Rivalry

    Directory of Open Access Journals (Sweden)

    K Jaworska

    2011-04-01

    Full Text Available Motion-induced blindness (MIB) and binocular rivalry (BR) are examples of multistable phenomena in which our perception varies despite constant retinal input. It has been suggested that both phenomena are related and share a common underlying mechanism. We tried to determine whether experimental manipulations of the target dot and the mask systematically affect MIB and BR in an experimental paradigm that can elicit both phenomena. Eighteen observers fixated the center of a split-screen stereo display that consisted of a distracter mask and a superimposed target dot with a different colour in each eye (isoluminant red/green) in corresponding peripheral areas of the left and right eye. Observers reported the perceived colour and disappearance of the target dot by pressing and releasing corresponding keys. In a within-subjects design, the mask was presented either in rivalry (orthogonal drift in the left and right eye) or without rivalry (the same drift in both eyes). In control conditions the mask remained stationary. In addition, the size of the target dot was varied (small, medium, and large). Our results suggest that MIB, measured by normalized frequency and duration of target disappearance, and BR, measured by normalized frequency and duration of colour reversals of the target, were both affected by motion in the mask. Surprisingly, binocular rivalry in the mask had only a small effect on BR of the target and virtually no effect on MIB. The overall pattern of normalized MIB and BR measures, however, differed across experimental conditions. In conclusion, the results show some degree of dissociation between MIB and BR. Further analyses will inform whether or not the two phenomena occur independently of each other.

  20. Stereo Calibration and Rectification for Omnidirectional Multi-Camera Systems

    Directory of Open Access Journals (Sweden)

    Yanchang Wang

    2012-10-01

    Full Text Available Stereo vision has been studied for decades as a fundamental problem in the field of computer vision. In recent years, computer vision and image processing with a large field of view, especially using omnidirectional vision and panoramic images, has been receiving increasing attention. An important problem for stereo vision is calibration. Although various kinds of calibration methods for omnidirectional cameras are proposed, most of them are limited to calibrate catadioptric cameras or fish-eye cameras and cannot be applied directly to multi-camera systems. In this work, we propose an easy calibration method with closed-form initialization and iterative optimization for omnidirectional multi-camera systems. The method only requires image pairs of the 2D target plane in a few different views. A method based on the spherical camera model is also proposed for rectifying omnidirectional stereo pairs. Using real data captured by Ladybug3, we carry out some experiments, including stereo calibration, rectification and 3D reconstruction. Statistical analyses and comparisons of the experimental results are also presented. As the experimental results show, the calibration results are precise and the effect of rectification is promising.

  1. A study on the effect of different image centres on stereo triangulation accuracy

    CSIR Research Space (South Africa)

    De Villiers, J

    2015-11-01

    Full Text Available This paper evaluates the effect of mixing the distortion centre, principal point and arithmetic image centre on the distortion correction, focal length determination and resulting real-world stereo vision triangulation. A robotic arm is used...

  2. Acquisition of stereo panoramas for display in VR environments

    KAUST Repository

    Ainsworth, Richard A.

    2011-01-23

    Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.

  3. Stereo and IMU-Assisted Visual Odometry for Small Robots

    Science.gov (United States)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320 × 240) resolution, or 8 fps at VGA (Video Graphics Array, 640 × 480) resolution, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.

  4. The effect of Bangerter filters on binocular function in observers with amblyopia.

    Science.gov (United States)

    Chen, Zidong; Li, Jinrong; Thompson, Benjamin; Deng, Daming; Yuan, Junpeng; Chan, Lily; Hess, Robert F; Yu, Minbin

    2014-10-28

    We assessed whether partial occlusion of the nonamblyopic eye with Bangerter filters can immediately reduce suppression and promote binocular summation of contrast in observers with amblyopia. In Experiment 1, suppression was measured for 22 observers (mean age, 20 years; range, 14-32 years; 10 females) with strabismic or anisometropic amblyopia and 10 controls using our previously established "balance point" protocol. Measurements were made at baseline and with 0.6-, 0.4-, and 0.2-strength Bangerter filters placed over the nonamblyopic/dominant eye. In Experiment 2, psychophysical measurements of contrast sensitivity were made under binocular and monocular viewing conditions for 25 observers with anisometropic amblyopia (mean age, 17 years; range, 11-28 years; 14 females) and 22 controls (mean age, 24 years; range, 22-27; 12 female). Measurements were made at baseline, and with 0.4- and 0.2-strength Bangerter filters placed over the nonamblyopic/dominant eye. Binocular summation ratios (BSRs) were calculated at baseline and with Bangerter filters in place. Experiment 1: Bangerter filters reduced suppression in observers with amblyopia and induced suppression in controls (P = 0.025). The 0.2-strength filter eliminated suppression in observers with amblyopia and this was not a visual acuity effect. Experiment 2: Bangerter filters were able to induce normal levels of binocular contrast summation in the group of observers with anisometropic amblyopia for a stimulus with a spatial frequency of 3 cycles per degree (cpd, P = 0.006). The filters reduced binocular summation in controls. Bangerter filters can immediately reduce suppression and promote binocular summation for mid/low spatial frequencies in observers with amblyopia. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  5. A quantitative measurement of binocular color fusion limit for different disparities

    Science.gov (United States)

    Chen, Zaiqing; Shi, Junsheng; Tai, Yonghan; Huang, Xiaoqiao; Yun, Lijun; Zhang, Chao

    2018-01-01

    Color asymmetry is a common phenomenon in stereoscopic display systems, and it can cause visual fatigue or visual discomfort. When the color difference between the left and right eyes exceeds a threshold value, named the binocular color fusion limit, color rivalry is said to occur. The most important information conveyed by stereoscopic displays is the depth perception produced by disparity. As the stereo pair stimuli are presented separately to the two eyes with disparities, and those two monocular stimuli differ in color but share an iso-luminance polarity, it is possible for stereopsis and color rivalry to coexist. In this paper, we conducted an experiment to measure the color fusion limit for different disparity levels. In particular, it examines how the magnitude and sign of disparity affect the binocular color fusion limit that yields a fused, stable stereoscopic percept. The binocular color fusion limit was measured at five levels of disparity (0, ±60, and ±120 arc minutes) for a sample color point selected from the 1976 CIE u'v' chromaticity diagram. The experimental results showed that the fusion limit for the sample point varied with the level and sign of disparity. Interestingly, the fusion limit increased as the disparity decreased in the crossed disparity direction (sign -), whereas there was almost no change in the uncrossed disparity direction (sign +). We found that color fusion was more difficult to achieve in the crossed disparity direction than in the uncrossed disparity direction.

  6. Comparison of binocular through-focus visual acuity with monovision and a small aperture inlay

    Science.gov (United States)

    Schwarz, Christina; Manzanera, Silvestre; Prieto, Pedro M.; Fernández, Enrique J.; Artal, Pablo

    2014-01-01

    Corneal small aperture inlays provide extended depth of focus (DOF) as a solution to presbyopia. As this procedure is becoming more popular, it is interesting to compare its performance with traditional approaches, such as monovision. Here, binocular visual acuity was measured as a function of object vergence in three subjects by using a binocular adaptive optics vision analyzer. Visual acuity was measured at two luminance levels (photopic and mesopic) under several optical conditions: 1) natural vision (4 mm pupils, best corrected distance vision), 2) pure-defocus monovision (+1.25 D add in the nondominant eye), 3) small aperture monovision (1.6 mm pupil in the nondominant eye), and 4) combined small aperture and defocus monovision (1.6 mm pupil and a +0.75 D add in the nondominant eye). Visual simulations of a small aperture corneal inlay suggest that the device extends DOF as effectively as traditional monovision in photopic light, in both cases at the cost of binocular summation. However, individual factors, such as aperture centration or sensitivity to mesopic conditions, should be considered to assure adequate visual outcomes. PMID:25360355

  7. Real-Time Dense Stereo for Intelligent Vehicles

    NARCIS (Netherlands)

    Gavrila, D.M.; Mark, W. van der

    2006-01-01

    Stereo vision is an attractive passive sensing technique for obtaining three-dimensional (3-D) measurements. Recent hardware advances have given rise to a new class of real-time dense disparity estimation algorithms. This paper examines their suitability for intelligent vehicle (IV) applications. In

  8. 3D Stereo Visualization for Mobile Robot Tele-Guide

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2006-01-01

    learning and decision performance. Works in the literature have demonstrated how stereo vision contributes to improve perception of some depth cues often for abstract tasks, while little can be found about the advantages of stereoscopic visualization in mobile robot tele-guide applications. This work...

  9. Insights from intermittent binocular rivalry and EEG

    Directory of Open Access Journals (Sweden)

    Michael A Pitts

    2011-09-01

    Full Text Available Novel stimulation and analytical approaches employed in EEG studies of ambiguous figures have recently been applied to binocular rivalry. The combination of intermittent stimulus presentation and EEG source imaging has begun to shed new light on the neural underpinnings of binocular rivalry. Here, we review the basics of the intermittent paradigm and highlight methodological issues important for interpreting previous results and designing future experiments. We then outline current analytical approaches, including EEG microstates, event-related potentials, and statistically-based source estimation, and propose a spatio-temporal model that integrates findings from several studies. Finally, we discuss the advantages and limitations of using binocular rivalry as a tool to investigate the neural basis of perceptual awareness.

  10. Nonlinear Dynamics of Multi-Channel Binocular Vision.

    Science.gov (United States)

    1985-12-13

    ...thereby characterized these circuits for implementation as new types of parallel computers in artificial intelligence applications. A large number of new predictions have also been made...

  11. The Observation of Binocular Double Stars

    Science.gov (United States)

    Ropelewski, Mike; Argyle, R. W.

    The night sky presents a fascinating variety of double stars, ranging from wide, optical pairs to close binary systems. A few doubles can be divided with the unaided eye, while a modest pair of binoculars will reveal many more; the study of double stars can be enjoyed by those who do not possess a large telescope or expensive equipment. There is a broad selection of binoculars on the market, so let us take a look at those that might be suitable for this branch of astronomy.

  12. Vision models for 3D surfaces

    Science.gov (United States)

    Mitra, Sunanda

    1992-11-01

    Different approaches to computational stereo as a representation of human stereo vision have been developed over the past two decades. The Marr-Poggio theory is probably the most widely accepted model of human stereo vision. However, recently developed motion stereo models, which use a sequence of images taken by either a moving camera or a moving object, provide an alternative method of achieving multi-resolution matching without the use of Laplacian of Gaussian (LOG) operators. When using image sequences, the baseline between the two camera positions for an image pair is changed for the subsequent image pair so as to achieve a different resolution for each pair. Having different baselines also avoids the occlusion problem inherent in stereo vision models. The advantage of multi-resolution images acquired by cameras positioned at different baselines over those produced by LOG operators is that one does not encounter the spurious edges often created by zero-crossings in the LOG-operated images. Therefore, in designing a computer vision system, a motion stereo model is more appropriate than a stereo vision model. However, in some applications where only a stereo pair of images is available, recovery of 3D surfaces of natural scenes is possible in a computationally efficient manner by using cepstrum matching and regularization techniques. Section 2 of this paper describes a motion stereo model using multi-scale cepstrum matching for the detection of disparity between image pairs in a sequence of images and the subsequent recovery of 3D surfaces from the depth map obtained by a non-convergent triangulation technique. Section 3 presents a 3D surface recovery technique from a stereo pair using cepstrum matching for disparity detection and cubic B-splines for surface smoothing. Section 4 contains the results of 3D surface recovery using both of the techniques mentioned above. Section 5 discusses the merit of 2D cepstrum matching and cubic B
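    The record relies on cepstrum matching to detect disparity. As a minimal one-dimensional illustration (not the paper's 2D pipeline), the sketch below concatenates corresponding rows of a rectified stereo pair and reads the horizontal shift off the location of the cepstral peak; the search window is an assumption.

```python
import numpy as np

def cepstrum_disparity(row_left, row_right):
    """Estimate the horizontal shift between two corresponding image rows
    from the power cepstrum of their concatenation."""
    n = len(row_left)
    s = np.concatenate([row_left, row_right]).astype(float)
    spectrum = np.abs(np.fft.rfft(s)) ** 2
    ceps = np.abs(np.fft.irfft(np.log(spectrum + 1e-12)))
    # A shifted copy of the row shows up as a cepstral peak near quefrency
    # n + disparity, so search a window centred on quefrency n.
    half = n // 2
    window = ceps[n - half:n + half]
    return int(np.argmax(window)) - half
```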

  13. Visual and binocular status in elementary school children with a reading problem.

    Science.gov (United States)

    Christian, Lisa W; Nandakumar, Krithika; Hrynchak, Patricia K; Irving, Elizabeth L

    2017-11-21

    This descriptive study provides a summary of the binocular anomalies seen in elementary school children identified with reading problems. A retrospective chart review was conducted of all children identified with reading problems and seen at the University of Waterloo Optometry Clinic from September 2012 to June 2013. Files of 121 children (mean age 8.6 years, range 6-14 years) were reviewed. No significant refractive error was found in 81% of the children. Five and eight children were identified as strabismic at distance and near, respectively. Phoria testing revealed that 90% and 65% of patients had normal distance and near phoria. The near point of convergence (NPC) was <5 cm in 68% of children, and 77% had stereoacuity of ≤40 seconds of arc. More than 50% of the children had normal fusional vergence ranges, except for the near positive fusional vergence (base out) break (46%). Tests of accommodation showed that 91% of children were normal for binocular facility, and approximately 70% of children had the expected accuracy of accommodation. The findings indicate that some children with an identified reading problem also present with abnormal binocular test results compared to published normal values. Further investigation should be performed to investigate the relationship between binocular vision function and reading performance. Crown Copyright © 2017. Published by Elsevier España, S.L.U. All rights reserved.

  14. The effect of image position on the Independent Components of natural binocular images.

    Science.gov (United States)

    Hunter, David W; Hibbard, Paul B

    2018-01-11

    Human visual performance degrades substantially as the angular distance from the fovea increases. This decrease in performance is found for both binocular and monocular vision. Although analysis of the statistics of natural images has provided significant insights into human visual processing, little research has focused on the statistical content of binocular images at eccentric angles. We applied Independent Component Analysis to rectangular image patches cut from locations within binocular images corresponding to different degrees of eccentricity. The distribution of components learned from the varying locations was examined to determine how these distributions varied across eccentricity. We found a general trend towards a broader spread of horizontal and vertical position disparity tunings in eccentric regions compared to the fovea, with the horizontal spread more pronounced than the vertical spread. Eccentric locations above the centroid show a strong bias towards far-tuned components, eccentric locations below the centroid show a strong bias towards near-tuned components. These distributions exhibit substantial similarities with physiological measurements in V1, however in common with previous research we also observe important differences, in particular distributions of binocular phase disparity which do not match physiology.

  15. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Full Text Available Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations for the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support-weight based) are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
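    The edge-directed idea in the record restricts a SAD correspondence search to edge pixels. The software sketch below mirrors that restriction on a CPU with a crude gradient-threshold edge map; it illustrates the principle rather than the FPGA architectures compared in the paper, and the window size, disparity range and threshold are hypothetical.

```python
import numpy as np

def edge_directed_sad(left, right, max_disp=32, win=3, edge_thresh=30.0):
    """SAD stereo correspondence evaluated only at edge pixels of the
    rectified left image; non-edge pixels are left unmatched (-1)."""
    h, w = left.shape
    L, R = left.astype(float), right.astype(float)
    grad = np.abs(np.gradient(L, axis=1))        # crude edge strength
    disp = np.full((h, w), -1, dtype=int)
    r = win // 2
    for y in range(r, h - r):
        for x in range(max_disp + r, w - r):
            if grad[y, x] < edge_thresh:
                continue                          # skip non-edge pixels
            patch = L[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - R[y - r:y + r + 1, x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```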

  16. The STEREO Mission

    CERN Document Server

    2008-01-01

    The STEREO mission uses twin heliospheric orbiters to track solar disturbances from their initiation to 1 AU. This book documents the mission, its objectives, the spacecraft that execute it and the instruments that provide the measurements, both remote sensing and in situ. This mission promises to unlock many of the mysteries of how the Sun produces what has come to be known as space weather.

  17. Monocular and binocular mechanisms mediating flicker adaptation.

    Science.gov (United States)

    Zhuang, Xiaohua; Shevell, Steven K

    2015-12-01

    Flicker adaptation reduces subsequent temporal contrast sensitivity. Recent studies show that this adaptation likely results from neural changes in the magnocellular visual pathway, but whether this adaptation occurs at a monocular or a binocular level, or both, is unclear. Here, two experiments address this question. The first experiment exploits the observation that flicker adaptation is stronger at higher than lower temporal frequencies. Observers' two eyes adapted to 3Hz flicker with an incremental pulse at 1/4 duty cycle, either in-phase or out-of-phase in the two eyes. At the binocular level, the flicker rate was 6Hz in the out-of-phase condition if the two eyes' pulse trains sum. Similar sensitivity reduction was found in both phase conditions, as expected for independent monocular adapting mechanisms. The second experiment tested for interocular transfer of adaptation between eyes. Results showed that (1) flicker adaptation was strongest with adapting and test fields in only the same eye, (2) adaptation can be partially transferred interocularly with adaptation in only the opposite eye, and (3) adaptation was weakened when both eyes were adapted simultaneously at different contrasts, compared to test-eye adaptation alone. Taken together, the findings are consistent with mechanisms of flicker adaptation at both the monocular and binocular level. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Hearing symptoms personal stereos.

    Science.gov (United States)

    da Luz, Tiara Santos; Borja, Ana Lúcia Vieira de Freitas

    2012-04-01

    Introduction: Practical and portable, personal stereos have become almost indispensable accessories in everyday life. Studies show that portable music players can cause long-term auditory damage in those who listen to music at high volume for prolonged periods. Objective: To verify the prevalence of auditory symptoms in users of personal stereos and to characterize their habits of use. Method: A prospective, observational, cross-sectional study carried out in three educational institutions in the city of Salvador, BA, two from the public network and one from the private network. Four hundred students of both sexes, aged 14 to 30 years, who reported the habit of using personal stereos answered the questionnaire. Results: The most prevalent symptoms were hyperacusis (43.5%), aural fullness (30.5%) and tinnitus (27.5%), with tinnitus being the symptom most present in the youngest participants. Regarding daily habits: 62.3% reported frequent use, 57% listened at high volume, and 34% listened for prolonged periods. An inverse relationship was found between exposure time and age group (p = 0.000), and a direct relationship with the prevalence of tinnitus. Conclusion: Although the young people admit to knowing the damage that exposure to high-intensity sound can cause to hearing, their daily habits show inadequate use of personal stereos, characterized by long periods of exposure, high volume, frequent use and a preference for insert earphones. The high prevalence of symptoms after use suggests an increased risk to the hearing of these young people.

  19. Hearing symptoms in personal stereo users

    Directory of Open Access Journals (Sweden)

    Tiara Santos da Luz

    2012-01-01

    Full Text Available Introduction: Practical and portable, personal stereos have become almost indispensable everyday accessories. Studies show that portable music players can cause long-term hearing damage in people who listen to music at high volume for prolonged periods. Objective: to determine the prevalence of auditory symptoms in users of portable players and to characterize their listening habits. Method: prospective, observational, cross-sectional study carried out in three educational institutions in the city of Salvador, BA, two from the public system and one private. 400 students of both sexes, aged between 14 and 30 years, who reported the habit of using personal stereos answered the questionnaire. Results: the most prevalent symptoms were hyperacusis (43.5%), aural fullness (30.5%) and tinnitus (27.5%), with tinnitus being most common among the youngest participants. Regarding daily habits: 62.3% reported frequent use, 57% high volumes and 34% prolonged listening periods. An inverse relation was found between exposure time and age (p = 0.000), and a direct relation with the prevalence of tinnitus. Conclusion: although they admit knowing that exposure to high-intensity sound can damage hearing, the daily habits of these young users show inappropriate use of portable stereos, characterized by long listening periods, high volumes, frequent use and a preference for insert earphones. The high prevalence of symptoms after use suggests an increased risk to the hearing of these young people.

  20. Stereo-panoramic Data

    KAUST Repository

    Cutchin, Steve

    2013-03-07

    Systems and methods for automatically generating three-dimensional panoramic images for use in various virtual reality settings are disclosed. One embodiment of the system includes a stereo camera capture device (SCD), a programmable camera controller (PCC) that rotates, orients, and controls the SCD, a robotic maneuvering platform (RMP), and a path and adaptation controller (PAC). In that embodiment, the PAC determines the movement of the system based on an original desired path and input gathered from the SCD during an image capture process.

  1. Vision defects in albinism.

    Science.gov (United States)

    Pérez-Carpinell, J; Capilla, P; Illueca, C; Morales, J

    1992-08-01

    We have examined the possible presence of color vision anomalies in 9 individuals (17 eyes, 1 blind) with fundus findings suggesting ocular albinism using the Ishihara plates, the 28-hue Roth test, and the Davico anomaloscope. Results indicate that four of these individuals show no sign of the anomalies expected in an albino in either of the two eyes. Of the remaining cases, two are simple deuteranomals in both eyes, according to Pickford's classification criteria. The rest have protanomaly; however, in these the deviation toward red appears in both eyes in only one subject, whereas in the other two subjects it appears in only one eye, their binocular color vision being basically normal. Our study shows that a large proportion of these albinos have photophobia, pendular nystagmus, strabismus, noticeable refractive errors (astigmatism and high myopia), and poor visual acuity [usually less than 6/30 (20/100) with correction]. The measurement of contrast sensitivity function (CSF) indicates that the frequency of 12 cpd cannot be perceived, even in binocular vision.

  2. Target Image Matching Algorithm Based on Binocular CCD Ranging

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2014-01-01

    Full Text Available This paper proposes a subpixel-level target image matching algorithm for binocular CCD ranging, based on the principle of binocular CCD ranging. First, we introduce the ranging principle of the binocular ranging system and derive a binocular parallax formula. Second, we derive an improved cross-correlation matching algorithm combined with cubic surface fitting, which achieves subpixel-level matching of binocular CCD ranging images. Finally, we analyze and verify the method on actual CCD ranging images, analyze the errors of the experimental results, and correct the formula for calculating system errors. Experimental results show that the actual measurement accuracy for a target within 3 km is better than 0.52%, which meets the accuracy requirements of high-precision binocular ranging.
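
    The record does not reproduce the derived parallax formula. As a hedged illustration, the sketch below assumes the standard rectified-stereo relation Z = f·B/d and refines the correlation peak with a simple parabola fit, a common stand-in for the paper's cubic-surface subpixel fit; all numbers are hypothetical.

        import numpy as np

        def subpixel_peak(corr):
            """Refine the integer peak of a 1-D correlation curve with a parabola fit."""
            i = int(np.argmax(corr))
            if 0 < i < len(corr) - 1:
                y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
                denom = y0 - 2 * y1 + y2
                if denom != 0:
                    return i + 0.5 * (y0 - y2) / denom
            return float(i)

        def depth_from_disparity(d_pixels, focal_px, baseline_m):
            """Rectified binocular parallax relation: Z = f * B / d."""
            return focal_px * baseline_m / d_pixels

        # Hypothetical values: 4000 px focal length, 1 m baseline, 1.33 px disparity
        print(depth_from_disparity(1.33, 4000.0, 1.0))  # roughly 3 km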

  3. Automated Detection of Ocular Alignment with Binocular Retinal Birefringence Scanning

    Science.gov (United States)

    Hunter, David G.; Shah, Ankoor S.; Sau, Soma; Nassif, Deborah; Guyton, David L.

    2003-06-01

    We previously developed a retinal birefringence scanning (RBS) device to detect eye fixation. The purpose of this study was to determine whether a new binocular RBS (BRBS) instrument can detect simultaneous fixation of both eyes. Control (nonmyopic and myopic) and strabismic subjects were studied by use of BRBS at a fixation distance of 45 cm. Binocularity (the percentage of measurements with bilateral fixation) was determined from the BRBS output. All nonstrabismic subjects with good quality signals had binocularity >75%. Binocularity averaged 5% in four subjects with strabismus (range of 0-20%). BRBS may potentially be used to screen individuals for abnormal eye alignment.

  4. Towards A Real Time Implementation Of The Marr And Poggio Stereo Matcher

    Science.gov (United States)

    Nishihara, H. K.; Larson, N. G.

    1981-11-01

    This paper reports on research--primarily at Marr and Poggio's [9] mechanism level--to design a practical hardware stereo-matcher, and on the interaction this study has had with our understanding of the problem at the computational theory and algorithm levels. The stereo-matching algorithm proposed by Marr and Poggio [10] and implemented by Grimson and Marr [3] is consistent with what is presently known about human stereo vision [2]. Their research has been concerned with understanding the principles underlying the stereo-matching problem. Our objective has been to produce a stereo-matcher that operates reliably at near real-time rates, as a tool to facilitate further research in vision and for possible application in robotics and stereo-photogrammetry. At present the design and construction of the camera and convolution modules of this project have been completed, and the design of the zero-crossing and matching modules is progressing. The remainder of this section provides a brief description of the Marr and Poggio stereo algorithm. We then discuss our general approach and some of the issues that have come up concerning the design of the individual modules.
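
    The Marr and Poggio pipeline sketched above convolves each image with a Laplacian-of-Gaussian filter and then matches zero-crossings across several filter scales. The fragment below is only a minimal software illustration of the convolution and zero-crossing stages (it is not the authors' hardware design), assuming SciPy is available.

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def zero_crossings(image, sigma):
            """Laplacian-of-Gaussian filtering followed by horizontal zero-crossing detection."""
            log = gaussian_laplace(image.astype(float), sigma)
            # a zero-crossing lies between horizontally adjacent pixels whose LoG responses change sign
            zc = np.zeros(log.shape, dtype=bool)
            zc[:, :-1] = np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
            return zc

        # Coarse-to-fine matching: correspondences found at a large sigma constrain
        # the search range for zero-crossings at finer scales, e.g.
        # for sigma in (8.0, 4.0, 2.0):
        #     left_zc, right_zc = zero_crossings(left, sigma), zero_crossings(right, sigma)
        #     ...match zero-crossings of like sign along epipolar lines...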

  5. Binocular treatment of amblyopia using videogames (BRAVO): study protocol for a randomised controlled trial.

    Science.gov (United States)

    Guo, Cindy X; Babu, Raiju J; Black, Joanna M; Bobier, William R; Lam, Carly S Y; Dai, Shuan; Gao, Tina Y; Hess, Robert F; Jenkins, Michelle; Jiang, Yannan; Kowal, Lionel; Parag, Varsha; South, Jayshree; Staffieri, Sandra Elfride; Walker, Natalie; Wadham, Angela; Thompson, Benjamin

    2016-10-18

    Amblyopia is a common neurodevelopmental disorder of vision that is characterised by visual impairment in one eye and compromised binocular visual function. Existing evidence-based treatments for children include patching the nonamblyopic eye to encourage use of the amblyopic eye. Currently there are no widely accepted treatments available for adults with amblyopia. The aim of this trial is to assess the efficacy of a new binocular, videogame-based treatment for amblyopia in older children and adults. We hypothesise that binocular treatment will significantly improve amblyopic eye visual acuity relative to placebo treatment. The BRAVO study is a double-blind, randomised, placebo-controlled multicentre trial to assess the effectiveness of a novel videogame-based binocular treatment for amblyopia. One hundred and eight participants aged 7 years or older with anisometropic and/or strabismic amblyopia (defined as ≥0.2 LogMAR interocular visual acuity difference, ≥0.3 LogMAR amblyopic eye visual acuity and no ocular disease) will be recruited via ophthalmologists, optometrists, clinical record searches and public advertisements at five sites in New Zealand, Canada, Hong Kong and Australia. Eligible participants will be randomised by computer in a 1:1 ratio, with stratification by age group: 7-12, 13-17 and 18 years and older. Participants will be randomised to receive 6 weeks of active or placebo home-based binocular treatment. Treatment will be in the form of a modified interactive falling-blocks game, implemented on a 5th generation iPod touch device viewed through red/green anaglyphic glasses. Participants and those assessing outcomes will be blinded to group assignment. The primary outcome is the change in best-corrected distance visual acuity in the amblyopic eye from baseline to 6 weeks post randomisation. Secondary outcomes include distance and near visual acuity, stereopsis, interocular suppression, angle of strabismus (where applicable) measured at

  6. Dynamical version-vergence interactions for a binocular implementation of Donders' law.

    Science.gov (United States)

    Minken, A W; Van Gisbergen, J A

    1996-03-01

    Recent investigations of the three-dimensional (3D) binocular eye positions in near vision have shown that a full characterization of vergence requires incorporation of its torsional component. The latter has a proportional relationship with horizontal vergence and elevation, causing the eyes to have intorsion in near upgaze but extorsion in near downgaze. In this study, we focus on the dynamical implementation of the torsional vergence component in both pure vergence and combined direction-depth binocular eye movements. We report on experiments in five subjects whose eye movements were recorded binocularly with the 3D magnetic search-coil technique. In pure vergence movements at a given elevation, torsional vergence increased with almost the same time course as horizontal vergence. In addition, the dynamic relationships among torsional vergence, horizontal vergence and elevation were close to static results in all subjects. In combined direction-depth movements a similar relationship held for the complete movements, but we could not firmly establish a straight-line relationship during the saccadic portion of the movement. Possible factors determining these responses are discussed. We computed the angular velocity profiles of pure vergence movements to see how tilting of the vergence angular velocity axis relative to Listing's plane generates torsional vergence. It is widely held that both saccadic and vergence movements are controlled by dedicated pulse generators specifying velocity signals. Little thought has been given to the question of how these controllers can be coordinated to yield realistic eye movements in 3D. Our finding that this tilt was close to full-angle, suggests a model in which version and vergence velocity signals are combined before the 3D neural integrator proposed by Tweed and Vilis. The implications of this scheme for the control of binocular eye movements in three dimensions are discussed, along with possible neural correlates.

  7. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo

    Directory of Open Access Journals (Sweden)

    Liang Lu

    2018-03-01

    Full Text Available Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
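
    As a point of reference for the framework above, classic photometric stereo recovers an albedo-scaled normal per pixel by solving I = L·n from three channels with known light directions; the sketch below shows only that linear step (the paper's CNN initialization and iterative refinement are not reproduced, and the light directions are assumed known).

        import numpy as np

        def photometric_stereo_normals(channels, light_dirs):
            """Per-pixel normals from multi-channel intensities.

            channels:   (H, W, 3) array, one intensity image per spectral channel/light
            light_dirs: (3, 3) array, one unit light direction per row (assumed known)
            """
            h, w, _ = channels.shape
            intensities = channels.reshape(-1, 3).T                      # (3, H*W)
            g = np.linalg.lstsq(light_dirs, intensities, rcond=None)[0]  # albedo-scaled normals
            albedo = np.linalg.norm(g, axis=0) + 1e-8
            normals = (g / albedo).T.reshape(h, w, 3)
            return normals, albedo.reshape(h, w)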

  8. Robot Object Manipulation Using Stereoscopic Vision and Conformal Geometric Algebra

    Directory of Open Access Journals (Sweden)

    Julio Zamora-Esquivel

    2011-01-01

    Full Text Available This paper uses geometric algebra to formulate, in a single framework, the kinematics of a three-finger robotic hand, a binocular robotic head, and the interactions between 3D objects, all of which are seen in stereo images. The main objective is the formulation of a kinematic control law to close the loop between perception and action, which allows smooth, visually guided object manipulation.

  9. Binocular cyclotorsion in superior vestibular neuritis.

    Science.gov (United States)

    Lapenna, R; Pellegrino, A; Ricci, G; Cagini, C; Faralli, M

    2017-11-30

    Conjugated cyclotorsion of the eyes toward the affected side can commonly be observed in vestibular neuritis. The aim of this study was to assess the differences in cyclotorsion between the ipsi- and contralesional eye during selective involvement of the superior branch of the vestibular nerve. We studied binocular cyclotorsion through ocular fundus photographs in 10 patients affected by acute superior vestibular neuritis (SVN). Cyclotorsion was also studied in 20 normal subjects. All SVN patients showed an ipsilesional cycloversion of the eyes. Normal subjects exhibited a constant mild excyclovergence (6.42 ± 2.34°). In SVN patients, contralateral incyclotorsion (8.4 ± 8.14°) was lower and not normally distributed compared to ipsilateral eye excyclotorsion (17.9 ± 4.36°) with no correlation between them. The interocular difference in cyclodeviation could be related to the starting physiological excyclovergence, to different tonic effects on the extraocular muscles of the two eyes and to the different influence of spontaneous nystagmus on cyclodeviation in the two eyes. We recommend referring only to ipsilateral excyclotorsion in the evaluation of utricular function during SVN and its subsequent compensation. Further studies are required to determine the binocular cyclotorsion in the case of other kinds of selective involvement of the vestibular nerve. © Copyright by Società Italiana di Otorinolaringologia e Chirurgia Cervico-Facciale.

  10. The Binocular Advantage in Visuomotor Tasks Involving Tools

    Directory of Open Access Journals (Sweden)

    Jenny C. A. Read

    2013-04-01

    Full Text Available We compared performance on three manual-dexterity tasks under monocular and binocular viewing. The tasks were the standard Morrisby Fine Dexterity Test, using forceps to manipulate the items, a modified version of the Morrisby test using fingers, and a “buzz-wire” task in which subjects had to guide a wire hoop around a 3D track without bringing the hoop into contact with the track. In all three tasks, performance was better for binocular viewing. The extent of the binocular advantage in individuals did not correlate significantly with their stereoacuity measured on the Randot test. However, the extent of the binocular advantage depended strongly on the task. It was weak when fingers were used on the Morrisby task, stronger with forceps, and extremely strong on the buzz-wire task (fivefold increase in error rate with monocular viewing). We suggest that the 3D buzz-wire game is particularly suitable for assessing binocularly based dexterity.

  11. Interactive stereo electron microscopy enhanced with virtual reality

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E.Wes; Bastacky, S.Jacob; Schwartz, Kenneth S.

    2001-12-17

    An analytical system is presented that is used to take measurements of objects perceived in stereo image pairs obtained from a scanning electron microscope (SEM). Our system operates by presenting a single stereo view that contains stereo image data obtained from the SEM, along with geometric representations of two types of virtual measurement instruments, a "protractor" and a "caliper". The measurements obtained from this system are an integral part of a medical study evaluating surfactant, a liquid coating the inner surface of the lung which makes possible the process of breathing. Measurements of the curvature and contact angle of submicron diameter droplets of a fluorocarbon deposited on the surface of airways are performed in order to determine surface tension of the air/liquid interface. This approach has been extended to a microscopic level from the techniques of traditional surface science by measuring submicrometer rather than millimeter diameter droplets, as well as the lengths and curvature of cilia responsible for movement of the surfactant, the airway's protective liquid blanket. An earlier implementation of this approach for taking angle measurements from objects perceived in stereo image pairs using a virtual protractor is extended in this paper to include distance measurements and to use a unified view model. The system is built around a unified view model that is derived from microscope-specific parameters, such as focal length, visible area and magnification. The unified view model ensures that the underlying view models and resultant binocular parallax cues are consistent between synthetic and acquired imagery. When the view models are consistent, it is possible to take measurements of features that are not constrained to lie within the projection plane. The system is first calibrated using non-clinical data of known size and resolution. Using the SEM, stereo image pairs of grids and spheres of

  12. How Simultaneous is the Perception of Binocular Depth and Rivalry in Plaid Stimuli?

    Directory of Open Access Journals (Sweden)

    Athena Buckthought

    2012-06-01

    Full Text Available Psychophysical experiments have demonstrated that it is possible to perceive both binocular depth and rivalry in plaids (Buckthought and Wilson 2007, Vision Research 47 2543–2556). In a recent study, we investigated the neural substrates for depth and rivalry processing with these plaid patterns, when either a depth or rivalry task was performed (Buckthought and Mendola 2011, Journal of Vision 11 1–15). However, the extent to which perception of the two stimulus aspects was truly simultaneous remained somewhat unclear. In the present study, we introduced a new task in which subjects were instructed to perform both depth and rivalry tasks concurrently. Subjects were clearly able to perform both tasks at the same time, but with a modest, symmetric drop in performance when compared to either task carried out alone. Subjects were also able to raise performance levels for either task by performing it with a higher priority, with a decline in performance for the other task. The symmetric declines in performance are consistent with the interpretation that the two tasks are equally demanding of attention (Braun and Julesz 1998, Perception & Psychophysics 60 1–23). The results demonstrate the impressive combination of binocular features that supports coincident depth and rivalry in surface perception, within the constraints of presumed orientation and spatial frequency channels.

  13. Assessing Attention Deficit by Binocular Rivalry.

    Science.gov (United States)

    Amador-Campos, Juan Antonio; Aznar-Casanova, J Antonio; Ortiz-Guerra, Juan Jairo; Moreno-Sánchez, Manuel; Medina-Peña, Antonio

    2015-12-01

    To determine whether the frequency and duration of the periods of suppression of a percept in a binocular rivalry (BR) task can be used to distinguish between participants with ADHD and controls. A total of 122 participants (6-15 years) were assigned to three groups: ADHD-Combined (ADHD-C), ADHD-Predominantly Inattentive (ADHD-I), and controls. They each performed a BR task and two measures were recorded: alternation rate and duration of exclusive dominance periods. ADHD-C group presented fewer alternations and showed greater variability than did the control group; results for the ADHD-I group being intermediate between the two. The duration of dominance periods showed a differential profile: In control group, it remained stable over time, whereas in the clinical groups, it decreased logarithmically as the task progressed. The differences between groups in relation to the BR indicators can be attributed to the activity of involuntary inhibition. © The Author(s) 2013.

  14. Using Fuzzy Logic to Enhance Stereo Matching in Multiresolution Images

    Directory of Open Access Journals (Sweden)

    Marcos D. Medeiros

    2010-01-01

    Full Text Available Stereo matching is an open problem in computer vision, in which local features are extracted to identify corresponding points in pairs of images. The results are heavily dependent on the initial steps. We apply image decomposition into multiresolution levels to reduce the search space, computational time, and errors. We propose a solution to the problem of how deep (coarse) the stereo measures should start, trading off error minimization against time consumption, by starting the stereo calculation at varying resolution levels for each pixel, according to fuzzy decisions. Our heuristic improves the overall execution time, since it only employs deeper resolution levels when strictly necessary. It also reduces errors because it measures similarity between windows with enough detail. We also compare our algorithm with a very fast multiresolution approach and with one based on fuzzy logic. Our algorithm performs faster and/or better than both of those approaches, making it a good candidate for robotic vision applications. We also discuss the system architecture that efficiently implements our solution.
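
    The fragment below is only a generic coarse-to-fine sketch of multiresolution block matching with OpenCV (it does not implement the per-pixel fuzzy decision described in the abstract): disparities computed at a coarse pyramid level fill in pixels where the finer level fails.

        import cv2
        import numpy as np

        def pyramid_disparity(left, right, levels=3, max_disp=64):
            """Coarse-to-fine block matching on a Gaussian pyramid (8-bit grayscale inputs)."""
            lp, rp = [left], [right]
            for _ in range(levels - 1):                            # level 0 = full resolution
                lp.append(cv2.pyrDown(lp[-1]))
                rp.append(cv2.pyrDown(rp[-1]))

            disp = None
            for lvl in range(levels - 1, -1, -1):
                n_disp = max(16, ((max_disp >> lvl) // 16) * 16)   # must be a multiple of 16
                matcher = cv2.StereoBM_create(numDisparities=n_disp, blockSize=15)
                d = matcher.compute(lp[lvl], rp[lvl]).astype(np.float32) / 16.0
                if disp is not None:
                    # upsample the coarser estimate and keep it where the finer level failed
                    up = 2.0 * cv2.resize(disp, (d.shape[1], d.shape[0]))
                    d = np.where(d > 0, d, up)
                disp = d
            return disp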

  15. Neural disparity computation from IKONOS stereo imagery in the presence of occlusions

    Science.gov (United States)

    Binaghi, E.; Gallo, I.; Baraldi, A.; Gerhardinger, A.

    2006-09-01

    In computer vision, stereoscopic image analysis is a well-known technique capable of extracting the third (vertical) dimension. Starting from this knowledge, the Remote Sensing (RS) community has spent increasing efforts on the exploitation of Ikonos one-meter resolution stereo imagery for high accuracy 3D surface modelling and elevation data extraction. In previous works our team investigated the potential of neural adaptive learning to solve the correspondence problem in the presence of occlusions. In this paper we present an experimental evaluation of an improved version of the neural based stereo matching method when applied to Ikonos one-meter resolution stereo images affected by occlusion problems. Disparity maps generated with the proposed approach are compared with those obtained by an alternative stereo matching algorithm implemented in a (non-)commercial image processing software toolbox. To compare competing disparity maps, quality metrics recommended by the evaluation methodology proposed by Scharstein and Szeliski (2002, IJCV, 47, 7-42) are adopted.

  16. A buyer's and user's guide to astronomical telescopes & binoculars

    CERN Document Server

    Mullaney, James

    2007-01-01

    This exciting, upbeat new guide provides an extensive overview of binoculars and telescopes. It includes detailed up-to-date information on sources, selection and use of virtually every major type, brand and model of such instruments on today's market.

  17. Stereo Information in Micromegas Detectors

    CERN Document Server

    The ATLAS collaboration

    2015-01-01

    The New Small Wheel layout of the ATLAS experiment foresees eight micromegas detection layers. Some of them will feature stereo strips, designed to measure both the precision coordinate and the second coordinate. In this note we describe the principle of reconstructing a space point using stereo information obtained from two micromegas detector layers rotated by a known angle. Furthermore, an error analysis is carried out to correlate the precision and second-coordinate resolutions with the resolution of the corresponding rotated micromegas layers. We analyze and examine two different cases in order to find the optimum layout for the muon spectrometer needs.
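
    As a hedged numerical illustration of the principle (the note's actual geometry and resolutions are not reproduced here), one axial layer measuring u = x and one layer rotated by a small stereo angle measuring u' = x·cos(a) + y·sin(a) give a space point, and first-order error propagation shows the second-coordinate resolution degrading roughly as 1/sin(a):

        import numpy as np

        def space_point(u_axial, u_stereo, angle_rad):
            """Combine an axial strip measurement (u = x) with a stereo-layer
            measurement (u' = x*cos(a) + y*sin(a)) into a space point (x, y)."""
            x = u_axial
            y = (u_stereo - u_axial * np.cos(angle_rad)) / np.sin(angle_rad)
            return x, y

        def second_coordinate_resolution(sigma_axial, sigma_stereo, angle_rad):
            """First-order error propagation for the reconstructed y coordinate."""
            a = angle_rad
            return np.hypot(sigma_stereo / np.sin(a),
                            sigma_axial * np.cos(a) / np.sin(a))

        # Illustrative numbers only: 0.1 mm strip resolution, 1.5 degree stereo angle
        print(second_coordinate_resolution(0.1, 0.1, np.radians(1.5)))  # about 5.4 mm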

  18. The iPod binocular home-based treatment for amblyopia in adults: efficacy and compliance.

    Science.gov (United States)

    Hess, Robert F; Babu, Raiju Jacob; Clavagnier, Simon; Black, Joanna; Bobier, William; Thompson, Benjamin

    2014-09-01

    Occlusion therapy for amblyopia is predicated on the idea that amblyopia is primarily a disorder of monocular vision; however, there is growing evidence that patients with amblyopia have a structurally intact binocular visual system that is rendered functionally monocular due to suppression. Furthermore, we have found that a dichoptic treatment intervention designed to directly target suppression can result in clinically significant improvement in both binocular and monocular visual function in adult patients with amblyopia. The fact that monocular improvement occurs in the absence of any fellow eye occlusion suggests that amblyopia is, in part, due to chronic suppression. Previously the treatment has been administered as a psychophysical task and more recently as a video game that can be played on video goggles or an iPod device equipped with a lenticular screen. The aim of this case-series study of 14 amblyopes (six strabismics, six anisometropes and two mixed) ages 13 to 50 years was to investigate: 1. whether the portable video game treatment is suitable for at-home use and 2. whether an anaglyphic version of the iPod-based video game, which is more convenient for at-home use, has comparable effects to the lenticular version. The dichoptic video game treatment was conducted at home and visual functions assessed before and after treatment. We found that at-home use for 10 to 30 hours restored simultaneous binocular perception in 13 of 14 cases along with significant improvements in acuity (0.11 ± 0.08 logMAR) and stereopsis (0.6 ± 0.5 log units). Furthermore, the anaglyph and lenticular platforms were equally effective. In addition, the iPod devices were able to record a complete and accurate picture of treatment compliance. The home-based dichoptic iPod approach represents a viable treatment for adults with amblyopia. © 2014 The Authors. Clinical and Experimental Optometry © 2014 Optometrists Association Australia.

  19. Prevalence of remediable disability due to low vision among institutionalised elderly people.

    NARCIS (Netherlands)

    Winter, L.J. de; Hoyng, C.B.; Froeling, P.G.A.M.; Meulendijks, C.F.M.; Wilt, G.J. van der

    2004-01-01

    BACKGROUND: Prevalence of remediable visual disability among institutionalised elderly people, resulting from inappropriate use or non-use of low-vision aids, is reported to be high, but largely rests on anecdotal evidence. OBJECTIVE: To estimate the prevalence of binocular low vision and underlying

  20. Hearing damage by personal stereo

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez, Rodrigo Pizarro; Reuter, Karen

    2006-01-01

    The technological development within personal stereo systems, such as MP3 players, iPods etc., has changed music listening habits from home entertainment to everyday and everywhere use. The technology has developed considerably since the introduction of CD walkmen, and high-level, low-distortion music is produced by minimal devices. In this paper, the existing literature on the effects of personal stereo systems is reviewed, including studies of exposure levels and effects on hearing. Generally, it is found that the levels being used are of concern, which in one study [Acustica/Acta Acustica, 82 (1996) 885-894] is demonstrated to relate to use in situations with high levels of background noise. Another study [Med. J. Austr., 1998; 169: 588-592] demonstrates that the effect of personal stereo use is comparable to that of being exposed to noise in industry. The results are discussed in view

  1. Fundamentals of Presbyopia: visual processing and binocularity in its transformation.

    Science.gov (United States)

    Rozanova, Olga I; Shchuko, Andrey G; Mischenko, Tatyana S

    2018-01-01

    Accommodation interacts considerably with the pupil response, the vergence response and binocularity. The transformation of visual processing and the changes in binocular cooperation during the development of presbyopia are still poorly studied, so the regularities of visual system disturbance during presbyopia formation need to be characterized. This study aims to reveal the transformation of visual processing and to determine the role of disturbances in binocular interaction in presbyopia formation. The study included 60 people with emmetropic refraction, uncorrected distance visual acuity of 1.0 or higher (decimal scale), normal color perception and no concomitant ophthalmopathology. The first group consisted of 30 people (18 to 27 years old) without presbyopia; the second comprised 30 patients (45 to 55 years old) with presbyopia. Eyeball anatomy and optics were evaluated using ultrasound biomicroscopy, aberrometry and pupillometry. The functional state of the visual system was investigated under monocular and binocular conditions. The limits of the disparate fusional reflex were registered with an original technique using a diploptic device, which allowed binocular interaction to be investigated under natural conditions without an accommodation response but with different vergence loads. The disparate fusional reflex was analyzed using the proximal and distal fusion borders, and the convergence and divergence fusion borders. The area of the binocularity field was calculated in cm². Presbyopia formation is characterized by changes in intraocular anatomy, optics, visual processing and binocularity. The processes of binocular interaction inhibition make a significant contribution to the misalignment of visual perception. A modification of the proximal, distal and convergence fusion borders was determined. It was revealed that 87% of the presbyopic patients had

  2. Binocular HMD for fixed-wing aircraft: a trade-off approach

    Science.gov (United States)

    Leger, Alain M.; Roumes, Corinne; Gardelle, C.; Cursolle, J. P.; Kraus, Jean-Marc

    1993-12-01

    From a physiological point of view, HMDs presenting an image to each eye are known to offer some advantages compared with monocular presentation. Besides the obvious fact that a binocular display provides more `natural' visual perception, it also prevents rivalry and improves several components of visual function, such as perceptual threshold, contrast sensitivity, and visual acuity. Binocular vision is also a crucial element in depth perception, though its main characteristic, stereopsis, is not yet really exploited. However, these advantages come at the cost of increased technical complexity and added weight on the head, raising safety concerns as well as comfort and operational (performance) issues, which imply several tradeoffs. An R&D program funded by the French MOD currently aims to build a night-attack HMD for experimental flight tests. The basic human factors requirements were to achieve a head-supported mass below 2 kg with minimum encumbrance and to project imagery and symbology onto the helmet visor with a large Field of View. The optical and mechanical design was first optimized to keep the head/system resultant CG within the safety limits for ejection. Considering experimental results, a tradeoff is made favoring head mobility rather than seeking stability. Two miniature CRTs are used to display imagery coming from IR, I2 or TV sources, while symbology is projected monocularly. Consideration of operational needs also implies several tradeoffs at this level.

  3. Optimization on shape curves with application to specular stereo

    KAUST Repository

    Balzer, Jonathan

    2010-01-01

    We state that a one-dimensional manifold of shapes in 3-space can be modeled by a level set function. Finding a minimizer of an independent functional among all points on such a shape curve has interesting applications in computer vision. It is shown how to replace the commonly encountered practice of gradient projection by a projection onto the curve itself. The outcome is an algorithm for constrained optimization, which, as we demonstrate theoretically and numerically, provides some important benefits in stereo reconstruction of specular surfaces. © 2010 Springer-Verlag.

  4. Advances in embedded computer vision

    CERN Document Server

    Kisacanin, Branislav

    2014-01-01

    This illuminating collection offers a fresh look at the very latest advances in the field of embedded computer vision. Emerging areas covered by this comprehensive text/reference include the embedded realization of 3D vision technologies for a variety of applications, such as stereo cameras on mobile devices. Recent trends towards the development of small unmanned aerial vehicles (UAVs) with embedded image and video processing algorithms are also examined. The authoritative insights range from historical perspectives to future developments, reviewing embedded implementation, tools, technolog

  5. An Active System for Visually-Guided Reaching in 3D across Binocular Fixations

    Directory of Open Access Journals (Sweden)

    Ester Martinez-Martin

    2014-01-01

    Full Text Available Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. The motor information is coded in egocentric coordinates obtained from the allocentric representation of space (in terms of disparity) generated from the egocentric representation of the visual information (image coordinates). In that way, the different aspects of visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually guided reaching). The approach's performance is evaluated through experiments on both simulated and real data.
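
    A minimal one-dimensional sketch of phase-based disparity estimation is given below: the local phase of the left and right image rows is obtained with a complex Gabor filter and the interocular phase difference is converted to pixels through the filter's peak frequency. The filter parameters are assumptions for illustration, not the filter bank used in the paper.

        import numpy as np

        def gabor_phase_disparity(row_left, row_right, freq=0.1, sigma=8.0):
            """Disparity along one epipolar line from Gabor phase differences.

            freq  : filter peak frequency in cycles/pixel (illustrative value)
            sigma : Gaussian envelope width in pixels
            """
            x = np.arange(-4 * sigma, 4 * sigma + 1)
            gabor = np.exp(-x**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * x)
            resp_left = np.convolve(row_left, gabor, mode="same")
            resp_right = np.convolve(row_right, gabor, mode="same")
            # phase difference wrapped to (-pi, pi], converted to pixels of disparity
            dphi = np.angle(resp_left * np.conj(resp_right))
            return dphi / (2 * np.pi * freq)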

  6. Binocular iPad treatment for amblyopia in preschool children.

    Science.gov (United States)

    Birch, Eileen E; Li, Simone L; Jost, Reed M; Morale, Sarah E; De La Cruz, Angie; Stager, David; Dao, Lori; Stager, David R

    2015-02-01

    Recent experimental evidence supports a role for binocular visual experience in the treatment of amblyopia. The purpose of this study was to determine whether repeated binocular visual experience with dichoptic iPad games could effectively treat amblyopia in preschool children. A total of 50 consecutive amblyopic preschool children 3-6.9 years of age were assigned to play sham iPad games (first 5 children) or binocular iPad games (n = 45) for at least 4 hours per week for 4 weeks. Thirty (67%) children in the binocular iPad group and 4 (80%) in the sham iPad group were also treated with patching at a different time of day. Visual acuity and stereoacuity were assessed at baseline, at 4 weeks, and at 3 months after the cessation of game play. The sham iPad group had no significant improvement in visual acuity (t4 = 0.34, P = 0.75). In the binocular iPad group, mean visual acuity (plus or minus standard error) improved from 0.43 ± 0.03 at baseline to 0.34 ± 0.03 logMAR at 4 weeks (n = 45; paired t44 = 4.93; P < 0.001). Children who played the iPad games for ≥8 hours (≥50% compliance) had significantly more visual acuity improvement than children who played 0-4 hours (t43 = 4.21, P = 0.0001). Repeated binocular experience, provided by dichoptic iPad game play, was more effective than sham iPad game play as a treatment for amblyopia in preschool children. Copyright © 2015 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.

  7. Acute alcohol drinking promotes piecemeal percepts during binocular rivalry

    Directory of Open Access Journals (Sweden)

    Dingcai eCao

    2016-04-01

    Full Text Available Binocular rivalry refers to perceptual alternation when the two eyes view different images. One of the potential percepts during binocular rivalry is a spatial mosaic of left- and right-eye images, known as piecemeal percepts, which may result from localized rivalries between small regions in the left- and right-eye images. It is known that alcohol increases inhibitory neurotransmission, which may reduce the number of alternations during binocular rivalry. However, it is unclear whether alcohol affects rivalry dynamics in the same manner for both coherent percepts (i.e., percepts of complete left or right images) and piecemeal percepts. To address this question, the present study measured the dynamics of binocular rivalry before and after 15 moderate-to-heavy social drinkers consumed an intoxicating dose of alcohol versus a placebo beverage. Both simple rivalrous stimuli, consisting of gratings with different orientations, and complex stimuli, consisting of a face or a house, were tested to examine alcohol effects on rivalry as a function of stimulus complexity. Results showed that for both simple and complex stimuli, alcohol affects coherent and piecemeal percepts differently. More specifically, alcohol reduced the number of coherent percepts but not the mean dominance duration of coherent percepts. In contrast, for piecemeal percepts, alcohol increased the mean dominance duration but not the number of piecemeal percepts. These results suggested that alcohol drinking may selectively affect the dynamics of the transitional period of binocular rivalry by increasing the duration of piecemeal percepts, leading to a reduction in the number of coherent percepts. The differential effect of alcohol on the dynamics of coherent and piecemeal percepts cannot be accounted for by alcohol’s effect on a common inhibitory mechanism. Other mechanisms, such as increasing neural noise, are needed to explain alcohol’s effect on the dynamics of binocular rivalry.

  8. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pair, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
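
    The handbook's calibration parameters are not reproduced here; as a hedged sketch, the fragment below shows how one matched cloud feature seen by the two calibrated cameras of a stereo pair is triangulated into a 3D point with OpenCV, using made-up projection matrices.

        import cv2
        import numpy as np

        # Hypothetical intrinsics and a pure-translation extrinsic; the real projection
        # matrices come from the stereo calibration parameters delivered with the handbook.
        K = np.array([[2000.0,    0.0, 960.0],
                      [   0.0, 2000.0, 540.0],
                      [   0.0,    0.0,   1.0]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                    # camera A
        P2 = K @ np.hstack([np.eye(3), np.array([[-500.0], [0.0], [0.0]])])  # camera B, 500 m baseline

        # One matched cloud feature in each image (pixel coordinates, shape 2xN)
        pts_a = np.array([[1100.0], [400.0]])
        pts_b = np.array([[1060.0], [400.0]])

        point_h = cv2.triangulatePoints(P1, P2, pts_a, pts_b)  # 4xN homogeneous coordinates
        point = (point_h[:3] / point_h[3]).ravel()
        print(point)  # 3D position of the feature in the camera-A frame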

  9. Stereo Pinhole Camera: Assembly and experimental activities

    OpenAIRE

    Santos, Gilmário Barbosa; Departamento de Ciência da Computação, Universidade do Estado de Santa Catarina, Joinville; Cunha, Sidney Pinto; Centro de Tecnologia da Informação Renato Archer, Campinas

    2015-01-01

    This work describes the assembling of a stereo pinhole camera for capturing stereo-pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as you want, or so simple that it could be handcrafted with practically recyclable materials. This paper describes the practical use of the pinhole camera throughout history and currently. Aspects of optics and geometry involved in the building of the stereo pinhole camera are presented with illustrations. Fur...

  10. Stereo Vision for SPHERES-based Navigation and Monitoring Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Maintenance operations and scientific research on the International Space Station (ISS) require active monitoring. Currently the majority of monitoring and recording...

  11. MRI and Stereo Vision Surface Reconstruction and Fusion

    NARCIS (Netherlands)

    El Chemaly, Trishia; Siepel, Françoise Jeanette; Rihana, Sandy; Groenhuis, Vincent; van der Heijden, Ferdinand; Stramigioli, Stefano

    2017-01-01

    Breast cancer, the most commonly diagnosed cancer in women worldwide, is mostly detected through a biopsy where tissue is extracted and chemically examined or pathologist assessed. Medical imaging plays a valuable role in targeting malignant tissue accurately and guiding the radiologist during

  12. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Full Text Available Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and for the control of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real time in a processor with the help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through the use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.

  13. Low Vision

    Science.gov (United States)


  14. Vision Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Vision Lab personnel perform research, development, testing and evaluation of eye protection and vision performance. The lab maintains and continues to develop...

  15. Quantitative perimetry under binocular viewing conditions in microstrabismus

    NARCIS (Netherlands)

    M.V. Joosse (Maurits); H.J. Simonsz (Huib); H.M. van Minderhout; P.T.V.M. de Jong (Paulus); B. Noordzij (Bastiaantje); P.G.H. Mulder (Paul)

    1997-01-01

    textabstractIn order to elucidate the type, size and depth of suppression scotomata in microstrabismus and small angle convergent strabismus, we performed binocular static perimetry in 14 subjects with strabismus and four normal observers. The strabismic cases had an objective angle of convergent

  16. Quantitative perimetry under binocular viewing conditions in microstrabismus

    NARCIS (Netherlands)

    Joosse, M. V.; Simonsz, H. J.; van Minderhout, H. M.; de Jong, P. T.; Noordzij, B.; Mulder, P. G.

    1997-01-01

    In order to elucidate the type, size and depth of suppression scotomata in microstrabismus and small angle convergent strabismus, we performed binocular static perimetry in 14 subjects with strabismus and four normal observers. The strabismic cases had an objective angle of convergent squint between

  17. Stereo visualization in the ground segment tasks of the science space missions

    Science.gov (United States)

    Korneva, Natalia; Nazarov, Vladimir; Mogilevsky, Mikhail; Nazirov, Ravil

    The ground segment is one of the key components of any science space mission; its functionality substantially defines the scientific effectiveness of the experiment as a whole. Its distinguishing feature, in contrast to the other information systems of scientific space projects, is the interaction between the researcher and the project information system needed to interpret the data obtained during experiments. The ability to visualize the data being processed is therefore an essential prerequisite for ground segment software, and the use of modern technological solutions and approaches in this area will increase the science return in general and provide a framework for the creation of new experiments. Visualization of the data being processed mostly relies on 2D and 3D graphics, which reflects the capabilities of traditional visualization tools. Stereo visualization methods are also used actively for some tasks, but their usage is usually limited to areas such as virtual and augmented reality, remote sensing data processing and the like. The low prevalence of stereo visualization methods in science ground segment tasks is primarily explained by the extremely high cost of the necessary hardware. Recently, however, low-cost hardware solutions for stereo visualization based on the page-flip method of view separation have appeared. It therefore seems promising to use stereo visualization as an instrument for investigating a wide range of problems, mainly the stereo visualization of complex physical processes as well as mathematical abstractions and models. The article describes an attempt to use this approach: the details and problems of using stereo visualization (page-flip method based on the NVIDIA 3D Vision Kit and GeForce graphics processors) to display datasets of magnetospheric satellite onboard measurements, and the development of software for manual stereo matching.

  18. WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves

    Science.gov (United States)

    Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise

    2017-10-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master for a practitioner, so that the implementation of a sea-wave 3D reconstruction pipeline is generally considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an open-source stereo processing pipeline for sea-wave 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques, both on the disparity map and the produced point cloud, to remove the vast majority of erroneous points that naturally arise while analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step by step and demonstrated on real datasets acquired at sea.
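
    As noted above, WASS builds its dense reconstruction on the OpenCV library. Purely as an illustration of that kind of step (not WASS's actual code or parameters), the sketch below runs OpenCV's semi-global block matching on an already rectified pair through the Python bindings, with placeholder parameter values.

        import cv2
        import numpy as np

        # Already rectified, undistorted 8-bit grayscale frames from the calibrated rig
        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        matcher = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=128,        # must be a multiple of 16
            blockSize=5,
            P1=8 * 5 * 5,
            P2=32 * 5 * 5,
            uniquenessRatio=10,
            speckleWindowSize=100,
            speckleRange=2,
        )
        disparity = matcher.compute(left, right).astype(np.float32) / 16.0

        # With the reprojection matrix Q from cv2.stereoRectify, the disparity map
        # becomes a point cloud of the water surface:
        # points_3d = cv2.reprojectImageTo3D(disparity, Q)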

  19. Monocular and binocular smooth pursuit in central field loss.

    Science.gov (United States)

    Shanidze, Natela; Heinen, Stephen; Verghese, Preeti

    2017-12-01

    Macular degeneration results in heterogeneous central field loss (CFL) and often has asymmetrical effects in the two eyes. As such, it is not clear to what degree the movements of the two eyes are coordinated. To address this issue, we examined smooth pursuit quantitatively in CFL participants during binocular viewing and compared it to the monocular viewing case. We also examined coordination of the two eyes during smooth pursuit and how this coordination was affected by interocular ratios of acuity and contrast, as well as CFL-specific interocular differences, such as scotoma sizes and degree of binocular overlap. We hypothesized that the coordination of eye movements would depend on the binocularity of the two eyes. To test our hypotheses, we used a modified step-ramp paradigm, and measured pursuit in both eyes while viewing was binocular, or monocular with the dominant or non-dominant eye. Data for CFL participants and age-matched controls were examined at the group, within-group, and individual levels. We found that CFL participants had a broader range of smooth pursuit gains and a significantly lower correlation between the two eyes, as compared to controls. Across both CFL and control groups, smooth pursuit gain and correlation between the eyes are best predicted by the ratio of contrast sensitivity between the eyes. For the subgroup of participants with measurable stereopsis, both smooth pursuit gain and correlation are best predicted by stereoacuity. Therefore, our results suggest that coordination between the eyes during smooth pursuit depends on binocular cooperation between the eyes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. WASS: an open-source stereo processing pipeline for sea waves 3D reconstruction

    Science.gov (United States)

    Bergamasco, Filippo; Benetazzo, Alvise; Torsello, Andrea; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro

    2017-04-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community. In fact, recent advances in both computer vision algorithms and CPU processing power now allow the study of spatio-temporal wave fields with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master for a practitioner, so that the implementation of a 3D reconstruction pipeline is generally considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the steps from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS, a completely open-source stereo processing pipeline for sea-wave 3D reconstruction, available at http://www.dais.unive.it/wass/. Our tool completely automates the recovery of dense point clouds from stereo images by providing three main functionalities. First, WASS can automatically recover the extrinsic parameters of the stereo rig (up to scale) so that no delicate calibration has to be performed in the field. Second, WASS implements a fast 3D dense stereo reconstruction procedure so that an accurate 3D point cloud can be computed from each stereo pair. We rely on the well-consolidated OpenCV library both for image stereo rectification and disparity map recovery. Lastly, a set of 2D and 3D filtering techniques, both on the disparity map and the produced point cloud, is implemented to remove the vast majority of erroneous points that can naturally arise while analyzing the optically complex nature of the water surface (examples are sun glares, large white-capped areas, fog and water aerosol, etc.). Developed to be as fast as possible, WASS

  1. Stereo Correspondence Using Moment Invariants

    Science.gov (United States)

    Premaratne, Prashan; Safaei, Farzad

    Autonomous navigation is seen as a vital tool in harnessing the enormous potential of Unmanned Aerial Vehicles (UAV) and small robotic vehicles for both military and civilian use. Even though laser-based scanning solutions for Simultaneous Localization And Mapping (SLAM) are considered the most reliable for depth estimation, they are not feasible for use in UAVs and small land-based vehicles due to their physical size and weight. Stereovision is considered the best approach for any autonomous navigation solution, as stereo rigs are lightweight and inexpensive. However, stereoscopy, which estimates depth information from pairs of stereo images, can still be computationally expensive and unreliable. This is mainly because some of the algorithms used in successful stereovision solutions have computational requirements that cannot be met by small robotic vehicles. In our research, we implement a feature-based stereovision solution using moment invariants as a metric to find corresponding regions in image pairs, which reduces the computational complexity and improves the accuracy of the disparity measures; this will be significant for use in UAVs and in small robotic vehicles.
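
    As a hedged sketch of the idea (the paper's exact window scheme and distance measure are not reproduced), OpenCV's Hu moment invariants can serve as the similarity metric between a left-image window and candidate right-image windows along the same row:

        import cv2
        import numpy as np

        def hu_signature(patch):
            """Log-scaled Hu moment invariants of an image patch."""
            hu = cv2.HuMoments(cv2.moments(patch)).flatten()
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

        def match_along_row(left, right, y, x, win=15, max_disp=64):
            """Find the disparity whose right-image window best matches the left window
            in the Hu-moment sense (smallest Euclidean distance between signatures)."""
            h = win // 2
            ref = hu_signature(left[y - h:y + h + 1, x - h:x + h + 1])
            best_d, best_cost = 0, np.inf
            for d in range(0, max_disp):
                xr = x - d
                if xr - h < 0:
                    break
                cand = hu_signature(right[y - h:y + h + 1, xr - h:xr + h + 1])
                cost = np.linalg.norm(ref - cand)
                if cost < best_cost:
                    best_d, best_cost = d, cost
            return best_d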

  2. Improved Stereo Matching With Boosting Method

    Directory of Open Access Journals (Sweden)

    Shiny B

    2015-06-01

    Full Text Available This paper presents a classification-based approach for improving the accuracy of stereo matching methods, proposed for occlusion handling. The work employs classification of pixels to find erroneous disparity values. Because disparity maps are widely used in 3D television, medical imaging and other applications, their accuracy is highly significant. An initial disparity map is obtained from the input stereo image pair using local or global stereo matching methods. The various features for classification are computed from the input stereo image pair and the obtained disparity map. The computed feature vector is then used to classify pixels, using GentleBoost as the classification method. The erroneous disparity values found by classification are corrected through a completion or filling stage. A performance evaluation of stereo matching using AdaBoostM1, RUSBoost, neural networks and GentleBoost is also performed.
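
    scikit-learn does not ship GentleBoost, so the sketch below substitutes AdaBoost to illustrate the overall idea of classifying disparity pixels as reliable or erroneous; the per-pixel features (left-right consistency, matching cost, local texture) are illustrative assumptions rather than the paper's feature set.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        def pixel_features(disp_left, disp_right, cost, left_img):
            """Stack simple per-pixel features (illustrative, not the paper's set)."""
            h, w = disp_left.shape
            xr = np.clip(np.arange(w) - np.round(disp_left).astype(int), 0, w - 1)
            lr_diff = np.abs(disp_left - disp_right[np.arange(h)[:, None], xr])  # LR consistency
            texture = np.abs(np.gradient(left_img.astype(float), axis=1))        # local texture
            return np.stack([lr_diff, cost, texture], axis=-1).reshape(-1, 3)

        # Training uses data with ground-truth disparity: label a pixel erroneous when
        # its error exceeds 1 px; at test time the classifier flags pixels to re-fill.
        # clf = AdaBoostClassifier(n_estimators=200)   # stand-in for GentleBoost
        # clf.fit(pixel_features(dl, dr, c, img), (np.abs(dl - ground_truth) > 1).ravel())
        # erroneous = clf.predict(pixel_features(dl, dr, c, img)).reshape(dl.shape)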

  3. Stereo Visualization and Map Comprehension

    Science.gov (United States)

    Rapp, D. N.; Culpepper, S.; Kirkby, K.; Morin, P.

    2004-12-01

    In this experiment, we assessed the use of stereo visualizations as effective tools for topographic map learning. In most Earth Science courses, students spend extended time learning how to read topographic maps, relying on the lines of the map as indicators of height and accompanying distance. These maps often necessitate extended training for students to acquire an understanding of what they represent, how they are to be used, and the implementation of these maps to solve problems. In fact instructors often comment that students fail to adequately use such maps, instead relying on prior spatial knowledge or experiences which may be inappropriate for understanding topographic displays. We asked participants to study maps that provided 3-dimensional or 2-dimensional views, and then answer a battery of questions about features and processes associated with the maps. The results will be described with respect to the cognitive utility of visualizations as tools for map comprehension tasks.

  4. Video stereo-laparoscopy system

    Science.gov (United States)

    Xiang, Yang; Hu, Jiasheng; Jiang, Huilin

    2006-01-01

    Minimally invasive surgery (MIS) has contributed significantly to patient care by reducing the morbidity associated with more invasive procedures, and MIS procedures have become standard treatment for gallbladder disease and some abdominal malignancies. The imaging system has played a major role in this evolving field. The image needs good resolution and large magnification and, in particular, must provide a depth cue while remaining flicker-free and of suitable brightness. A video stereo-laparoscopy system can meet these demands. This paper introduces a 3D video laparoscope with the following characteristics: field frequency 100 Hz, depth range 150 mm, and resolution 10 lp/mm. The working principle of the system is described in detail, and the optical system and time-division stereo-display system are described briefly. The system has a focusing imaging lens that forms the image on the CCD chip; the optical signal is converted into a video signal, digitized by the A/D converter of the image processing system, and the polarized images are then displayed on the monitor screen through liquid crystal shutters. Wearing polarized glasses, surgeons can view a flicker-free 3D image of the tissue or organ. The 3D video laparoscope system has been applied in the MIS field and praised by surgeons. Compared with a traditional 2D video laparoscopy system, it offers merits such as reduced surgery time, fewer surgical problems, and shorter training time.

  5. The Scintillating Grid Illusion is Enhanced by Binocular Viewing

    Directory of Open Access Journals (Sweden)

    Jenny C. A. Read

    2012-12-01

    The scintillating grid illusion is an intriguing stimulus consisting of a grey grid on a black background, with white discs at the grid intersections. Most viewers perceive illusory “scintillating” black discs within the physical white discs, especially at non-fixated locations. Here, we report for the first time that this scintillation percept is stronger when the stimulus is viewed binocularly than when it is presented to only one eye. Further experiments indicate that this is not simply because two monocular percepts combine linearly, but involves a specifically cyclopean contribution (Schrauf & Spillmann, 2000). However, the scintillation percept does not depend on the absolute disparity of the stimulus relative to the screen. In an intriguing twist, although the basic illusion shows more scintillation when viewed binocularly, when the illusion is weakened by shifting the discs away from the grid intersections, scintillation becomes stronger with monocular viewing.

  6. Stereo Pinhole Camera: Assembly and experimental activities

    Directory of Open Access Journals (Sweden)

    Gilmário Barbosa Santos

    2015-05-01

    This work describes the assembly of a stereo pinhole camera for capturing stereo pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as desired, or so simple that it can be handcrafted from practically recyclable materials. The paper reviews the practical use of the pinhole camera throughout history and at present. Aspects of optics and geometry involved in building the stereo pinhole camera are presented with illustrations. Furthermore, experiments are proposed in which the images obtained by the camera are used for 3D visualization through a pair of anaglyph glasses, and the estimation of relative depth by triangulation is discussed.
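
    For readers who want to try the depth estimation mentioned above, the following Python snippet applies the standard parallel-camera triangulation relation Z = f·B/d, where f is the focal length expressed in pixels, B the separation of the two pinholes, and d the disparity; the numbers in the example are placeholders.

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            # Depth of a point seen by two parallel pinhole cameras: Z = f * B / d.
            if disparity_px <= 0:
                raise ValueError("disparity must be positive for a point in front of the cameras")
            return focal_px * baseline_m / disparity_px

        # Placeholder numbers: a 7 cm baseline, 800 px focal length and 16 px disparity
        # put the point at 3.5 m.
        print(depth_from_disparity(16, 800, 0.07))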

  7. Binocular Rivalry in a Competitive Neural Network with Synaptic Depression

    KAUST Repository

    Kilpatrick, Zachary P.

    2010-01-01

    We study binocular rivalry in a competitive neural network with synaptic depression. In particular, we consider two coupled hypercolumns within primary visual cortex (V1), representing orientation selective cells responding to either left or right eye inputs. Coupling between hypercolumns is dominated by inhibition, especially for neurons with dissimilar orientation preferences. Within hypercolumns, recurrent connectivity is excitatory for similar orientations and inhibitory for different orientations. All synaptic connections are modifiable by local synaptic depression. When the hypercolumns are driven by orthogonally oriented stimuli, it is possible to induce oscillations that are representative of binocular rivalry. We first analyze the occurrence of oscillations in a space-clamped version of the model using a fast-slow analysis, taking advantage of the fact that depression evolves much more slowly than population activity. We then analyze the onset of oscillations in the full spatially extended system by carrying out a piecewise smooth stability analysis of single (winner-take-all) and double (fusion) bumps within the network. Although our stability analysis takes into account only instabilities associated with real eigenvalues, it identifies points of instability that are consistent with what is found numerically. In particular, we show that, in regions of parameter space where double bumps are unstable and no single bumps exist, binocular rivalry can arise as a slow alternation between either population supporting a bump. © 2010 Society for Industrial and Applied Mathematics.
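
    To illustrate the kind of slow alternation described above, here is a minimal space-clamped sketch in Python of two mutually inhibiting populations whose cross-inhibition is weakened by slow synaptic depression. The equations are a generic rate model in the same spirit; the nonlinearity and all parameter values are illustrative assumptions, not those of the paper.

        import numpy as np

        def simulate_rivalry(T=20.0, dt=0.001):
            # Space-clamped two-population model: u1, u2 are left/right-eye activities,
            # q1, q2 are synaptic depression variables on their inhibitory outputs.
            tau_u, tau_q = 0.01, 1.0        # activity is fast, depression is slow
            I, beta, phi = 1.2, 2.0, 3.0    # drive, cross-inhibition, depression rate
            f = lambda x: 1.0 / (1.0 + np.exp(-8.0 * (x - 0.5)))  # firing-rate nonlinearity

            n = int(T / dt)
            u1, u2, q1, q2 = 0.1, 0.0, 1.0, 1.0   # small asymmetry so one eye wins first
            trace = np.empty((n, 2))
            for i in range(n):
                du1 = (-u1 + f(I - beta * q2 * u2)) / tau_u
                du2 = (-u2 + f(I - beta * q1 * u1)) / tau_u
                dq1 = (1.0 - q1 - phi * q1 * u1) / tau_q
                dq2 = (1.0 - q2 - phi * q2 * u2) / tau_q
                u1, u2 = u1 + dt * du1, u2 + dt * du2
                q1, q2 = q1 + dt * dq1, q2 + dt * dq2
                trace[i] = (u1, u2)
            return trace   # slow alternation of dominance between u1 and u2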

  8. Standard Test Method for Measuring Binocular Disparity in Transparent Parts

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers the amount of binocular disparity that is induced by transparent parts such as aircraft windscreens, canopies, HUD combining glasses, visors, or goggles. This test method may be applied to parts of any size, shape, or thickness, individually or in combination, so as to determine the contribution of each transparent part to the overall binocular disparity present in the total “viewing system” being used by a human operator. 1.2 This test method represents one of several techniques that are available for measuring binocular disparity, but is the only technique that yields a quantitative figure of merit that can be related to operator visual performance. 1.3 This test method employs apparatus currently being used in the measurement of optical angular deviation under Method F 801. 1.4 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not con...

  9. VISION development

    International Nuclear Information System (INIS)

    Hernandez, J.E.; Sherwood, R.J.; Whitman, S.R.

    1994-01-01

    VISION is a flexible and extensible object-oriented programming environment for prototyping computer-vision and pattern-recognition algorithms. This year's effort focused on three major areas: documentation, graphics, and support for new applications

  10. Artificial stereo presentation of meteorological data fields

    Science.gov (United States)

    Hasler, A. F.; Desjardins, M.; Negri, A. J.

    1981-01-01

    The innate capability to perceive three-dimensional stereo imagery has been exploited to present multidimensional meteorological data fields. Variations on an artificial stereo technique first discussed by Pichel et al. (1973) are used to display single and multispectral images in a vivid and easily assimilated manner. Examples of visible/infrared artificial stereo are given for Hurricane Allen and for severe thunderstorms on 10 April 1979. Three-dimensional output from a mesoscale model also is presented. The images may be viewed through the glasses inserted in the February 1981 issue of the Bulletin of the American Meteorological Society, with the red lens over the right eye. The images have been produced on the interactive Atmospheric and Oceanographic Information Processing System (AOIPS) at Goddard Space Flight Center. Stereo presentation is an important aid in understanding meteorological phenomena for operational weather forecasting, research case studies, and model simulations.
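
    As a rough illustration of how an artificial stereo pair can be built from a single data field, the Python sketch below shifts each pixel horizontally in proportion to a height-like field and composes a red/cyan anaglyph. The forward-mapping scheme, the maximum shift, and the channel assignment are simplifying assumptions for illustration, not the AOIPS implementation.

        import numpy as np

        def artificial_stereo_anaglyph(field, height, max_shift=8):
            # Shift each pixel horizontally in proportion to a height-like field and
            # compose a red/cyan anaglyph (red = left-eye view, cyan = right-eye view).
            h, w = field.shape
            shift = (height / max(float(height.max()), 1e-9) * max_shift).astype(int)
            left = np.zeros_like(field)
            right = np.zeros_like(field)
            cols = np.arange(w)
            for r in range(h):
                left[r, np.clip(cols + shift[r], 0, w - 1)] = field[r]
                right[r, np.clip(cols - shift[r], 0, w - 1)] = field[r]
            return np.dstack([left, right, right])   # crude forward mapping; gaps remain unfilled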

  11. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  12. Fault-Tolerant Vision for Vehicle Guidance in Agriculture

    DEFF Research Database (Denmark)

    Blas, Morten Rufus

    The emergence of widely available vision technologies is enabling for a wide range of automation tasks in industry and other areas. Agricultural vehicle guidance systems have benefitted from advances in 3D vision based on stereo camera technology. By automatically guiding vehicles along crops...... the field that is seen by the stereo camera, it is possible to support the guidance system by storing salient information about the environment. By tracking the motion of the vehicle, vision output can be fused over time to create more reliable and robust estimates of crop location. This thesis approaches...... in tracking vehicle motion using 3D vision is demonstrated to allow unprecedented high accuracy maps to be created of the local environment. Features in the environment are extracted and tracked using novel feature detectors relying on approximating the Laplacian operator with a bi-level octagonal kernel...

  13. Diopter Focus of ANVIS Eyepieces Using Monocular and Binocular Techniques

    National Research Council Canada - National Science Library

    Mclean, William

    2002-01-01

    U.S. Army aviators were asked to obtain best resolution using an ophthalmic phoropter with three different focusing techniques, viewing with unaided vision and through the aviator's night vision imaging system (ANVIS...

  14. STEREO interplanetary shocks and foreshocks

    Energy Technology Data Exchange (ETDEWEB)

    Blanco-Cano, X. [Instituto de Geofisica, UNAM, CU, Coyoacan 04510 DF (Mexico); Kajdic, P. [IRAP-University of Toulouse, CNRS, Toulouse (France); Aguilar-Rodriguez, E. [Instituto de Geofisica, UNAM, Morelia (Mexico); Russell, C. T. [ESS and IGPP, University of California, Los Angeles, 603 Charles Young Drive, Los Angeles, CA 90095 (United States); Jian, L. K. [NASA Goddard Space Flight Center, Greenbelt, MD and University of Maryland, College Park, MD (United States); Luhmann, J. G. [SSL, University of California Berkeley (United States)

    2013-06-13

    We use STEREO data to study shocks driven by stream interactions and the waves associated with them. During the years of the extended solar minimum 2007-2010, stream interaction shocks have Mach numbers between 1.1-3.8 and θ_Bn ≈ 20°-86°. We find a variety of waves, including whistlers and low frequency fluctuations. Upstream whistler waves may be generated at the shock, and upstream ultra low frequency (ULF) waves can be driven locally by ion instabilities. The downstream wave spectra can be formed both by locally generated perturbations and by shock-transmitted waves. We find that many quasi-perpendicular shocks can be accompanied by ULF wave and ion foreshocks, in contrast to Earth's bow shock. Fluctuations downstream of quasi-parallel shocks tend to have larger amplitudes than waves downstream of quasi-perpendicular shocks. Proton foreshocks of shocks driven by stream interactions have extensions dr ≤ 0.05 AU. This is smaller than the foreshock extensions for ICME-driven shocks. The difference in foreshock extensions is related to the fact that ICME-driven shocks are formed closer to the Sun and therefore begin to accelerate particles very early in their existence, while stream interaction shocks form at ~1 AU and have been producing suprathermal particles for a shorter time.

  15. Specifying colours for colour vision testing using computer graphics.

    Science.gov (United States)

    Toufeeq, A

    2004-10-01

    This paper describes a novel test of colour vision using a standard personal computer, which is simple and reliable to perform. Twenty healthy individuals with normal colour vision and 10 healthy individuals with a red/green colour defect were tested binocularly at 13 selected points in the CIE (Commission Internationale de l'Eclairage, 1931) chromaticity triangle, representing the gamut of a computer monitor for which the x, y coordinates of the primary colour phosphors were known. The mean results from individuals with normal colour vision were compared to those with defective colour vision. Of the 13 points tested, five demonstrated consistently high sensitivity in detecting colour defects. The test may provide a convenient method for classifying colour vision abnormalities.
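
    Specifying such test colours requires mapping CIE chromaticities onto monitor RGB values. The Python sketch below builds the linear-RGB-to-XYZ matrix from phosphor chromaticities and a white point and inverts it for a target xyY colour; the primaries and white point shown are generic sRGB-like placeholders, not the monitor measured in the study.

        import numpy as np

        def rgb_to_xyz_matrix(primaries, white_xy):
            # Linear-RGB -> XYZ matrix from phosphor chromaticities and the white point.
            xyz = lambda x, y: np.array([x / y, 1.0, (1.0 - x - y) / y])
            P = np.column_stack([xyz(*p) for p in primaries])   # unscaled primary columns
            S = np.linalg.solve(P, xyz(*white_xy))              # scale so R=G=B=1 gives white
            return P * S

        def xyy_to_linear_rgb(x, y, Y, M):
            XYZ = np.array([x / y * Y, Y, (1.0 - x - y) / y * Y])
            return np.linalg.solve(M, XYZ)    # values outside [0, 1] mean the colour is out of gamut

        # Generic sRGB-like primaries and D65 white point (placeholders, not the study's monitor):
        M = rgb_to_xyz_matrix([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)], (0.3127, 0.3290))
        print(xyy_to_linear_rgb(0.35, 0.35, 0.5, M))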

  16. Age- and stereovision-dependent eye-hand coordination deficits in children with amblyopia and abnormal binocularity.

    Science.gov (United States)

    Grant, Simon; Suttle, Catherine; Melmoth, Dean R; Conway, Miriam L; Sloper, John J

    2014-08-05

    /kinesthetic feedback from object contact at ages 7 to 9 years. However, recovery of binocularity confers increasing benefits for eye-hand coordination speed and accuracy with age, and is a better predictor of these fundamental performance measures than the degree of visual acuity loss. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  17. Robot vision for nuclear advanced robot

    International Nuclear Information System (INIS)

    Nakayama, Ryoichi; Okano, Hideharu; Kuno, Yoshinori; Miyazawa, Tatsuo; Shimada, Hideo; Okada, Satoshi; Kawamura, Astuo

    1991-01-01

    This paper describes the robot vision and operation system for the Nuclear Advanced Robot. The robot vision system consists of robot position detection, obstacle detection and object recognition. With these vision techniques, a mobile robot can plan a path and move autonomously along it. The authors implemented the above robot vision system on the 'Advanced Robot for Nuclear Power Plant' and tested it in an environment mocked up as nuclear power plant facilities. Since the operation system for this robot consists of an operator's console and a large stereo monitor, the system can easily be operated by one person. Experimental tests were made using the Advanced Robot (nuclear robot). Results indicate that the proposed operation system is very useful and can be operated by a single person. (author)

  18. Intact binocular function and absent ocular torsion in children with alternating skew on lateral gaze.

    Science.gov (United States)

    Hamed, L M; Maria, B L; Briscoe, S T; Shamis, D

    1996-01-01

    A form of skew deviation, called alternating skew on lateral gaze, resembles bilateral superior oblique overaction. Oblique muscle overaction has been recently speculated to result from loss of fusion with subsequent "free-wheeling" of the torsional control mechanisms of the eyes, causing sensory intorsion or extorsion with attendant superior or inferior oblique muscle overaction, respectively. We wanted to investigate whether loss of fusion plays a role in the pathogenesis of alternating skew on lateral gaze. We examined seven consecutive patients with posterior fossa tumors, enrolled in a multi-disciplinary pediatric neuro-oncology program, who displayed alternating skew on lateral gaze. All patients underwent a thorough ophthalmologic evaluation. Visual acuities in the study patients ranged from 20/20 to 20/40. Five of the seven patients were orthotropic, and showed 40 sec of arc stereopsis. Three patients showed associated downbeat nystagmus. No ocular torsion was found in any of the five patients who showed normal stereopsis upon inspection of fundus landmarks on indirect ophthalmoscopy. Patients with alternating skew on lateral gaze often have normal binocular vision and stereopsis, and lack ocular intorsion so typical of superior oblique overaction. Alternating skew on lateral gaze is neurologically mediated, with no role for defective fusion in its pathogenesis.

  19. Living with vision loss

    Science.gov (United States)

    Diabetes - vision loss; Retinopathy - vision loss; Low vision; Blindness - vision loss ... Low vision is a visual disability. Wearing regular glasses or contacts does not help. People with low vision have ...

  20. Multi-UAV joint target recognizing based on binocular vision theory

    Directory of Open Access Journals (Sweden)

    Yuan Zhang

    2017-01-01

    Target recognition by an unmanned aerial vehicle (UAV) based on image processing exploits the 2D information contained in the image to identify the target. Compared with a single UAV carrying an electro-optical tracking system (EOTS), multiple UAVs with EOTS can capture a group of images of the suspected target from multiple viewpoints. By matching each pair of images in this group, the set of matched feature points implies the depth of each point, and the 3D coordinates of the target feature points can be computed from these depths. This depth information forms a point cloud from which an exclusive 3D model is reconstructed for the recognition system. Since target recognition does not require a precise target model, the cloud of feature points is regrouped into n subsets and reconstructed into a semi-3D model. Projecting these subsets onto a Cartesian coordinate system and feeding the projections into convolutional neural networks (CNN) respectively, the integrated output of the networks gives the improved recognition result.
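
    A minimal sketch of the matching-and-triangulation step is given below in Python with OpenCV, using ORB features and cv2.triangulatePoints. The projection matrices P1 and P2 are assumed to be known (for example from navigation and calibration data); the detector, matcher, and feature count are illustrative choices, not the pipeline used in the paper.

        import cv2
        import numpy as np

        def triangulate_matches(img1, img2, P1, P2, max_features=500):
            # Match ORB features between two views and triangulate them into 3D points.
            # P1, P2: 3x4 projection matrices of the two cameras (assumed known here).
            orb = cv2.ORB_create(nfeatures=max_features)
            k1, d1 = orb.detectAndCompute(img1, None)
            k2, d2 = orb.detectAndCompute(img2, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)

            pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).T   # 2 x N
            pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).T
            X = cv2.triangulatePoints(P1, P2, pts1, pts2)                # 4 x N homogeneous
            return (X[:3] / X[3]).T                                      # N x 3 point cloud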

  1. The future of binocular rivalry research: reaching through a window on consciousness

    NARCIS (Netherlands)

    Klink, P. Christiaan; van Wezel, Richard Jack Anton; van Ee, Raymond; Miller, Steven M.

    2013-01-01

    Binocular rivalry is often considered an experimental window on the neural processes of consciousness. We propose three distinct approaches to exploit this window. First, one may look through the window, using binocular rivalry as a passive tool to dissociate unaltered sensory input from wavering

  2. An adaptive exposure algorithm for stereo imaging and its performance in an orchard

    DEFF Research Database (Denmark)

    García, Francisco; Wulfsohn, Dvoralai; Andersen, Jens Christian

    2010-01-01

    Stereo vision is being introduced in perception systems for autonomous agricultural vehicles. When working outdoors, light conditions change continuously. The perception system should be able to continuously adapt and correct camera exposure parameters to obtain the best interpretation of the scene...... practically possible. We describe the development and testing of an algorithm to update exposure parameter camera setting of a stereoscopic camera under dynamic light conditions. Static tests using a stereo camera were carried out in an orchard to determine how 2D image histograms and the 3D reconstruction...... change with exposure. An algorithm based on an "ideal mean pixel value" in the image was developed and implemented on the perception system of an automatic tractor. The system was tested in an orchard and found to perform satisfactorily under different orchard and light conditions....
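
    The idea of steering exposure toward an 'ideal mean pixel value' can be sketched as a simple multiplicative controller, as in the Python snippet below. The target value, exposure limits, and update rule are illustrative assumptions, not the published algorithm.

        import numpy as np

        def update_exposure(exposure_ms, image, target_mean=110.0, min_exp=0.05, max_exp=50.0):
            # One control step: scale the exposure time so the image mean moves toward the target.
            mean = float(np.mean(image))
            if mean < 1.0:                       # nearly black frame: grow exposure aggressively
                return min(exposure_ms * 2.0, max_exp)
            new_exp = exposure_ms * target_mean / mean    # assumes an (approximately) linear sensor
            return float(np.clip(new_exp, min_exp, max_exp))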

  3. The Large Binocular Telescope as an early ELT

    Science.gov (United States)

    Hill, John; Hinz, Philip; Ashby, David

    2013-12-01

    The Large Binocular Telescope (LBT) has two 8.4-m primary mirrors on a common AZ-EL mounting. The dual Gregorian optical configuration for LBT includes a pair of adaptive secondaries. The adaptive secondaries are working reliably for science observations as well as for the commissioning of new instruments. Many aspects of the LBT telescope design have been optimized for the combination of the two optical trains. The telescope structure is relatively compact and stiff with a lowest eigenfrequency near 8 Hz. A vibration measurement system of accelerometers (OVMS) has been installed to characterize the vibrations of the telescope. A first-generation binocular telescope control system has been deployed on-sky. Two instruments, LBTI and LINC-NIRVANA, have been built to take advantage of the 22.65-m diffraction baseline when the telescope is phased. This diffraction-limited imaging capability (beyond 20-m baseline) positions LBT as a forerunner of the new generation of extremely large telescopes (ELT). We discuss here some of the experiences of phasing the two sides of the telescope starting in 2010. We also report some lessons learned during on-sky commissioning of the LBTI instrument.

  4. Crossmodal Semantic Constraints on Visual Perception of Binocular Rivalry

    Directory of Open Access Journals (Sweden)

    Yi-Chuan Chen

    2011-10-01

    Environments typically convey contextual information via several different sensory modalities. Here, we report a study designed to investigate the crossmodal semantic modulation of visual perception using the binocular rivalry paradigm. The participants viewed a dichoptic figure consisting of a bird and a car presented to each eye, while also listening to either a bird singing or a car engine revving. Participants' dominant percepts were modulated by the presentation of a soundtrack associated with either the bird or the car, as compared to the presentation of a soundtrack irrelevant to both visual figures (tableware clattering together in a restaurant). No such crossmodal semantic effect was observed when the participants maintained an abstract semantic cue in memory. We then further demonstrate that crossmodal semantic modulation can be dissociated from the effects of high-level attentional control over the dichoptic figures and of low-level luminance contrast of the figures. In sum, we demonstrate a novel crossmodal effect in terms of crossmodal semantic congruency on binocular rivalry. This effect can be considered a perceptual grouping or contextual constraint on human visual awareness through mid-level crossmodal excitatory connections embedded in the multisensory semantic network.

  5. Can binocular rivalry reveal neural correlates of consciousness?

    Science.gov (United States)

    Blake, Randolph; Brascamp, Jan; Heeger, David J

    2014-05-05

    This essay critically examines the extent to which binocular rivalry can provide important clues about the neural correlates of conscious visual perception. Our ideas are presented within the framework of four questions about the use of rivalry for this purpose: (i) what constitutes an adequate comparison condition for gauging rivalry's impact on awareness, (ii) how can one distinguish abolished awareness from inattention, (iii) when one obtains unequivocal evidence for a causal link between a fluctuating measure of neural activity and fluctuating perceptual states during rivalry, will it generalize to other stimulus conditions and perceptual phenomena and (iv) does such evidence necessarily indicate that this neural activity constitutes a neural correlate of consciousness? While arriving at sceptical answers to these four questions, the essay nonetheless offers some ideas about how a more nuanced utilization of binocular rivalry may still provide fundamental insights about neural dynamics, and glimpses of at least some of the ingredients comprising neural correlates of consciousness, including those involved in perceptual decision-making.

  6. Vision screening at two years does not reduce the prevalence of reduced vision at four and a half years of age.

    Science.gov (United States)

    Goodman, Lucy; Chakraborty, Arijit; Paudel, Nabin; Yu, Tzu-Ying; Jacobs, Robert J; Harding, Jane E; Thompson, Benjamin; Anstice, Nicola S

    2017-11-28

    There is currently insufficient evidence to recommend vision screening for children at two years of age. This study examined the effect of vision screening at two years of age on habitual visual acuity at 4.5 years of age. Children born at risk of neonatal hypoglycaemia (n = 477) underwent vision assessment at 54 ± 2 months of age including measurement of monocular and binocular habitual visual acuity, assessment of binocularity and stereopsis. Of these children, 355 (74.4 per cent) had also received vision screening at two years of age (mean age = 24 ± 1 months), while 122 were not screened. Eighty (16.8 per cent) children were classified as having reduced vision at 4.5 years of age, but the prevalence of reduced vision did not differ between children who had previously been screened at two years of age and those who had not (15.5 per cent versus 20.5 per cent, p = 0.153). However, children with reduced vision at 4.5 years of age were more likely to have had visual abnormalities requiring referral detected at two years of age (p = 0.02). Visual acuity and mean spherical equivalent autorefraction measurements were also worse (higher values) in two-year-old children who were later classified with reduced habitual visual acuity (p = 0.031 and p = 0.001, respectively). Nevertheless, unaided binocular visual acuity, non-cycloplegic refractive error, and stereopsis at two years all showed poor sensitivity and specificity for predicting visual outcomes at 4.5 years of age. Our findings do not support the adoption of early vision screening in children as current vision tests suitable for use with two-year-old children have poor sensitivity for predicting mild-moderate habitual vision impairment at 4.5 years of age. © 2017 Optometry Australia.

  7. The potential risk of personal stereo players

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez, Rodrigo Pizarro; Reuter, Karen

    2010-01-01

    The technological development within personal stereo systems, such as MP3 players, e. g. iPods, has changed music listening habits from home entertainment to everyday and everywhere use. The technology has developed considerably, since the introduction of cassette players and CD walkmen. High......-level low-distortion music is produced by minimal devices which can play for long periods. In this paper, the existing literature on effects of personal stereo systems is reviewed, incl. studies of exposure levels, and effects on hearing. Generally, it is found that the levels being used are of concern......, which in one study is demonstrated to relate to the specific use in situations with high levels of background noise. Another study demonstrates that the effect of using personal stereo is comparable to that of being exposed to noise in industry. The results are discussed in view of the measurement...

  9. Evaluation of binocular function among pre- and early-presbyopes with asthenopia

    Directory of Open Access Journals (Sweden)

    Reindel W

    2018-01-01

    Purpose: Individuals approaching presbyopia may exhibit ocular symptoms as they contend with visual demands of near work, coupled with natural age-related changes in accommodation. Therefore, accommodation and vergence of 30- to 40-year-old, myopic, soft contact lens wearing subjects with symptoms of asthenopia and no history of using multifocal lenses were evaluated. Patients and methods: In this prospective, observational study, 253 subjects with asthenopia were evaluated by 25 qualified practitioners, each at a different clinical site. Subjects were 30–40 years in age, had symptoms of soreness, eyestrain, tired eyes, or headaches with near work, regularly performed 2–3 consecutive hours of near work, and were undiagnosed with presbyopia. Amplitude of accommodation (AC) and near point convergence (NPC) were measured with a Royal Air Force binocular gauge. Triplicate push up and push down AC and NPC measures were recorded, and average AC values were compared to those calculated using the Hofstetter formula (HF). Results: The average AC push up/push down value was significantly better than the HF prediction for this age range (8.04±3.09 vs 6.23±0.80 D), although 22.5% of subjects had mean AC below their HF value (5.36±0.99 D). The average NPC push up/push down value was 12.0±4.69 cm. The mean binocular AC value using the push up measure was significantly better than the push down measure (8.5±3.4 vs 7.6±3.0 D). The mean NPC value using the push up measure was significantly worse than the push down measure (13.0±5.0 vs 11.0±4.7 cm). The most frequent primary diagnosis was ill-sustained accommodation (54%), followed by accommodative insufficiency (18%) and accommodative infacility (12%). Conclusion: Based upon a standardized assessment of accommodation and vergence, ill-sustained accommodation was the most frequent diagnosis in this symptomatic pre- and early-presbyopic population.

  10. Visual memory for objects following foveal vision loss.

    Science.gov (United States)

    Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B; Pollmann, Stefan

    2015-09-01

    Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual long-term memory for objects in patients suffering from foveal vision loss due to age-related macular degeneration. We measured patients' change detection sensitivity after a period of free scene exploration monocularly with their worse eye when possible, and under binocular vision, comparing sensitivity and eye movements to matched normal-sighted controls. A highly salient cue was used to capture attention to a nontarget location before a target change occurred in half of the trials, ensuring that change detection relied on memory. Patients' monocular and binocular sensitivity to object change was comparable to controls, even after more than 4 intervening fixations, and not significantly correlated with visual impairment. We conclude that extrafoveal vision suffices for efficient encoding into visual long-term memory. (c) 2015 APA, all rights reserved.

  11. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  12. Micro Vision

    OpenAIRE

    Ohba, Kohtaro; Ohara, Kenichi

    2007-01-01

    In the field of micro vision there has been relatively little research compared with the macro environment. However, by applying results from macro-scale computer vision techniques, the micro environment can be measured and observed. Moreover, the particular effects of the micro environment make it possible to discover new theories and new techniques.

  13. Predicting Vision-Related Disability in Glaucoma.

    Science.gov (United States)

    Abe, Ricardo Y; Diniz-Filho, Alberto; Costa, Vital P; Wu, Zhichao; Medeiros, Felipe A

    2018-01-01

    To present a new methodology for investigating predictive factors associated with development of vision-related disability in glaucoma. Prospective, observational cohort study. Two hundred thirty-six patients with glaucoma followed up for an average of 4.3±1.5 years. Vision-related disability was assessed by the 25-item National Eye Institute Visual Function Questionnaire (NEI VFQ-25) at baseline and at the end of follow-up. A latent transition analysis model was used to categorize NEI VFQ-25 results and to estimate the probability of developing vision-related disability during follow-up. Patients were tested with standard automated perimetry (SAP) at 6-month intervals, and evaluation of rates of visual field change was performed using mean sensitivity (MS) of the integrated binocular visual field. Baseline disease severity, rate of visual field loss, and duration of follow-up were investigated as predictive factors for development of disability during follow-up. The relationship between baseline and rates of visual field deterioration and the probability of vision-related disability developing during follow-up. At baseline, 67 of 236 (28%) glaucoma patients were classified as disabled based on NEI VFQ-25 results, whereas 169 (72%) were classified as nondisabled. Patients classified as nondisabled at baseline had 14.2% probability of disability developing during follow-up. Rates of visual field loss as estimated by integrated binocular MS were almost 4 times faster for those in whom disability developed versus those in whom it did not (-0.78±1.00 dB/year vs. -0.20±0.47 dB/year, respectively; P < 0.001). Lower baseline binocular MS was associated with higher odds of disability developing over time (odds ratio [OR], 1.34; 95% confidence interval [CI], 1.06-1.70; P = 0.013). In addition, each 0.5-dB/year faster rate of loss of binocular MS during follow-up was associated with a more than 3.5 times increase in the risk of disability developing (OR, 3.58; 95% CI, 1.56-8.23; P = 0.003). A new methodology for classification and analysis

  14. Dynamic Programming and Graph Algorithms in Computer Vision*

    Science.gov (United States)

    Felzenszwalb, Pedro F.; Zabih, Ramin

    2013-01-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting, since by carefully exploiting problem structure they often provide non-trivial guarantees concerning solution quality. In this paper we briefly review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo; the mid-level problem of interactive object segmentation; and the high-level problem of model-based recognition. PMID:20660950
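
    To make the stereo example concrete, the Python sketch below implements a textbook-style scanline dynamic-programming matcher with an occlusion penalty. It is a generic illustration of the technique discussed in the paper, not the authors' formulation; the occlusion cost and absolute-difference data term are arbitrary choices.

        import numpy as np

        def scanline_dp_disparity(L, R, occlusion_cost=20.0):
            # Classic dynamic-programming stereo on one rectified scanline.
            # Returns a disparity per left pixel (NaN where the pixel is marked occluded).
            n, m = len(L), len(R)
            C = np.full((n + 1, m + 1), np.inf)
            C[0, :] = np.arange(m + 1) * occlusion_cost
            C[:, 0] = np.arange(n + 1) * occlusion_cost
            back = np.zeros((n + 1, m + 1), dtype=np.uint8)  # 0 = match, 1 = skip left, 2 = skip right
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    costs = (C[i - 1, j - 1] + abs(float(L[i - 1]) - float(R[j - 1])),
                             C[i - 1, j] + occlusion_cost,
                             C[i, j - 1] + occlusion_cost)
                    back[i, j] = int(np.argmin(costs))
                    C[i, j] = costs[back[i, j]]
            disparity = np.full(n, np.nan)
            i, j = n, m
            while i > 0 and j > 0:               # backtrack the optimal alignment
                if back[i, j] == 0:
                    disparity[i - 1] = (i - 1) - (j - 1)
                    i, j = i - 1, j - 1
                elif back[i, j] == 1:
                    i -= 1                       # left pixel has no match (occluded)
                else:
                    j -= 1
            return disparity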

  15. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  16. Dynamic programming and graph algorithms in computer vision.

    Science.gov (United States)

    Felzenszwalb, Pedro F; Zabih, Ramin

    2011-04-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.

  17. AI And Early Vision - Part II

    Science.gov (United States)

    Julesz, Bela

    1989-08-01

    A quarter of a century ago I introduced two paradigms into psychology which in the intervening years have had a direct impact on the psychobiology of early vision and an indirect one on artificial intelligence (AI or machine vision). The first, the computer-generated random-dot stereogram (RDS) paradigm (Julesz, 1960) at its very inception posed a strategic question both for AI and neurophysiology. The finding that stereoscopic depth perception (stereopsis) is possible without the many enigmatic cues of monocular form recognition - as assumed previously - demonstrated that stereopsis, with its basic problem of finding matches between corresponding random aggregates of dots in the left and right visual fields, became ripe for modeling. Indeed, the binocular matching problem of stereopsis opened up an entire field of study, eventually leading to the computational models of David Marr (1982) and his coworkers. The fusion of RDS had an even greater impact on neurophysiologists - including Hubel and Wiesel (1962) - who realized that stereopsis must occur at an early stage, and can be studied more easily than form perception. This insight recently culminated in the studies by Gian Poggio (1984) who found binocular-disparity-tuned neurons in the input stage to the visual cortex (layer IVB in V1) in the monkey that were selectively triggered by dynamic RDS. Thus the first paradigm led to a strategic insight: that with stereoscopic vision there is no camouflage, and as such it was advantageous for our primate ancestors to evolve the cortical machinery of stereoscopic vision to capture camouflaged prey (insects) at a standstill. Amazingly, although stereopsis evolved relatively late in primates, it captured the very input stages of the visual cortex. (For a detailed review, see Julesz, 1986a)
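
    The random-dot stereogram construction described here is easy to reproduce; the short Python sketch below generates a Julesz-style pair in which a central square, shifted horizontally in the right-eye image and backfilled with fresh dots, is visible only when the two images are fused. Sizes, density, and disparity are arbitrary illustrative values.

        import numpy as np

        def random_dot_stereogram(size=200, square=80, disparity=6, density=0.5, seed=0):
            # Julesz-style pair: identical random dots except for a central square that is
            # shifted horizontally in the right-eye image and backfilled with fresh dots.
            rng = np.random.default_rng(seed)
            left = (rng.random((size, size)) < density).astype(np.uint8)
            right = left.copy()
            lo = (size - square) // 2
            hi = lo + square
            # Shift the central square leftwards in the right-eye image (crossed disparity).
            right[lo:hi, lo - disparity:hi - disparity] = left[lo:hi, lo:hi]
            # Fill the uncovered strip with new random dots so neither image alone reveals the square.
            right[lo:hi, hi - disparity:hi] = rng.random((square, disparity)) < density
            return left * 255, right * 255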

  18. Stereoselectivity in metallocene-catalyzed coordination polymerization of renewable methylene butyrolactones: From stereo-random to stereo-perfect polymers

    KAUST Repository

    Chen, Xia

    2012-05-02

    Coordination polymerization of renewable α-methylene-γ-(methyl) butyrolactones by chiral C 2-symmetric zirconocene catalysts produces stereo-random, highly stereo-regular, or perfectly stereo-regular polymers, depending on the monomer and catalyst structures. Computational studies yield a fundamental understanding of the stereocontrol mechanism governing these new polymerization reactions mediated by chiral metallocenium catalysts. © 2012 American Chemical Society.

  19. Stereoselectivity in metallocene-catalyzed coordination polymerization of renewable methylene butyrolactones: from stereo-random to stereo-perfect polymers.

    Science.gov (United States)

    Chen, Xia; Caporaso, Lucia; Cavallo, Luigi; Chen, Eugene Y-X

    2012-05-02

    Coordination polymerization of renewable α-methylene-γ-(methyl)butyrolactones by chiral C(2)-symmetric zirconocene catalysts produces stereo-random, highly stereo-regular, or perfectly stereo-regular polymers, depending on the monomer and catalyst structures. Computational studies yield a fundamental understanding of the stereocontrol mechanism governing these new polymerization reactions mediated by chiral metallocenium catalysts. © 2012 American Chemical Society

  20. Magnitude, precision, and realism of depth perception in stereoscopic vision.

    Science.gov (United States)

    Hibbard, Paul B; Haines, Alice E; Hornsey, Rebecca L

    2017-01-01

    Our perception of depth is substantially enhanced by the fact that we have binocular vision. This provides us with more precise and accurate estimates of depth and an improved qualitative appreciation of the three-dimensional (3D) shapes and positions of objects. We assessed the link between these quantitative and qualitative aspects of 3D vision. Specifically, we wished to determine whether the realism of apparent depth from binocular cues is associated with the magnitude or precision of perceived depth and the degree of binocular fusion. We presented participants with stereograms containing randomly positioned circles and measured how the magnitude, realism, and precision of depth perception varied with the size of the disparities presented. We found that as the size of the disparity increased, the magnitude of perceived depth increased, while the precision with which observers could make depth discrimination judgments decreased. Beyond an initial increase, depth realism decreased with increasing disparity magnitude. This decrease occurred well below the disparity limit required to ensure comfortable viewing.

  1. Head Pose Estimation from Passive Stereo Images

    DEFF Research Database (Denmark)

    Breitenstein, Michael D.; Jensen, Jeppe; Høilund, Carsten

    2009-01-01

    function. Our algorithm incorporates 2D and 3D cues to make the system robust to low-quality range images acquired by passive stereo systems. It handles large pose variations (of ±90 ° yaw and ±45 ° pitch rotation) and facial variations due to expressions or accessories. For a maximally allowed error of 30...

  2. Artistic Stereo Imaging by Edge Preserving Smoothing

    NARCIS (Netherlands)

    Papari, Giuseppe; Campisi, Patrizio; Callet, Patrick Le; Petkov, Nicolai

    2009-01-01

    Stereo imaging is an important area of image and video processing, with exploding progress in the last decades. An open issue in this field is the understanding of the conditions under which the straightforward application of a given image processing operator to both the left and right image of a

  3. Analysis of Disparity Error for Stereo Autofocus.

    Science.gov (United States)

    Yang, Cheng-Chieh; Huang, Shao-Kang; Shih, Kuang-Tsu; Chen, Homer H

    2018-04-01

    As more and more stereo cameras are installed on electronic devices, we are motivated to investigate how to leverage disparity information for autofocus. The main challenge is that stereo images captured for disparity estimation are subject to defocus blur unless the lenses of the stereo cameras are at the in-focus position. Therefore, it is important to investigate how the presence of defocus blur would affect stereo matching and, in turn, the performance of disparity estimation. In this paper, we give an analytical treatment of this fundamental issue of disparity-based autofocus by examining the relation between image sharpness and disparity error. A statistical approach that treats the disparity estimate as a random variable is developed. Our analysis provides a theoretical backbone for the empirical observation that, regardless of the initial lens position, disparity-based autofocus can bring the lens to the hill zone of the focus profile in one movement. The insight gained from the analysis is useful for the implementation of an autofocus system.

  4. Local Stereo Matching Using Adaptive Local Segmentation

    NARCIS (Netherlands)

    Damjanovic, S.; van der Heijden, Ferdinand; Spreeuwers, Lieuwe Jan

    We propose a new dense local stereo matching framework for gray-level images based on an adaptive local segmentation using a dynamic threshold. We define a new validity domain of the fronto-parallel assumption based on the local intensity variations in the 4-neighborhood of the matching pixel. The

  5. Computational vision

    Science.gov (United States)

    Barrow, H. G.; Tenenbaum, J. M.

    1981-01-01

    The range of fundamental computational principles underlying human vision that equally apply to artificial and natural systems is surveyed. There emerges from research a view of the structuring of vision systems as a sequence of levels of representation, with the initial levels being primarily iconic (edges, regions, gradients) and the highest symbolic (surfaces, objects, scenes). Intermediate levels are constrained by information made available by preceding levels and information required by subsequent levels. In particular, it appears that physical and three-dimensional surface characteristics provide a critical transition from iconic to symbolic representations. A plausible vision system design incorporating these principles is outlined, and its key computational processes are elaborated.

  6. Natural Tendency towards Beauty in Humans: Evidence from Binocular Rivalry.

    Science.gov (United States)

    Mo, Ce; Xia, Tiansheng; Qin, Kaixin; Mo, Lei

    2016-01-01

    Although human preference for beauty is common and compelling in daily life, it remains unknown whether such preference is essentially subserved by social cognitive demands or natural tendency towards beauty encoded in the human mind intrinsically. Here we demonstrate experimentally that humans automatically exhibit preference for visual and moral beauty without explicit cognitive efforts. Using a binocular rivalry paradigm, we identified enhanced gender-independent perceptual dominance for physically attractive persons, and the results suggested universal preference for visual beauty based on perceivable forms. Moreover, we also identified perceptual dominance enhancement for characters associated with virtuous descriptions after controlling for facial attractiveness and vigilance-related attention effects, which suggested a similar implicit preference for moral beauty conveyed in prosocial behaviours. Our findings show that behavioural preference for beauty is driven by an inherent natural tendency towards beauty in humans rather than explicit social cognitive processes.

  7. Telemetry correlation and visualization at the Large Binocular Telescope Observatory

    Science.gov (United States)

    Summers, Kellee R.; Summers, Douglas M.; Biddick, Christopher; Hooper, Stephen

    2016-08-01

    To achieve highly efficient observatory operations requires continuous evaluation and improvement of facility and instrumentation metrics. High quality metrics requires a foundation of robust and complete observatory telemetry. At the Large Binocular Telescope Observatory (LBTO), a variety of telemetry-capturing mechanisms exist, but few tools have thus far been created to facilitate studies of the data. In an effort to make all observatory telemetry data easy to use and broadly available, we have developed a suite of tools using in-house development and open source applications. This paper will explore our strategies for consolidating, parameterizing, and correlating any LBTO telemetry data to achieve easily available, web-based two- and three-dimensional time series data visualization.

  8. Stereo-particle image velocimetry uncertainty quantification

    Science.gov (United States)

    Bhattacharya, Sayantan; Charonko, John J.; Vlachos, Pavlos P.

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric

  9. Stereo-particle image velocimetry uncertainty quantification

    International Nuclear Information System (INIS)

    Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric

  10. Binocular diplopia in a tertiary hospital: Aetiology, diagnosis and treatment.

    Science.gov (United States)

    Merino, P; Fuentes, D; Gómez de Liaño, P; Ordóñez, M A

    2017-12-01

    To study the causes, diagnosis and treatment in a case series of binocular diplopia. A retrospective chart review was performed on patients seen in the Diplopia Unit of a tertiary centre during a one-year period. Diplopia was classified as: acute, ≤1 month since onset; subacute (1-6 months); and chronic (>6 months). Resolution of diplopia was classified as: spontaneous if it disappeared without treatment, partial if the course was intermittent, and non-spontaneous if treatment was required. It was considered a good outcome when diplopia disappeared completely (with or without treatment), or when diplopia was intermittent without significantly affecting the quality of life. A total of 60 cases were included. The mean age was 58.65 years (60% female). An acute or subacute presentation was observed in 60% of the patients. The mean time from onset of diplopia was 82.97 weeks. The most frequent aetiology was ischaemic (45%). The most frequent diagnosis was sixth nerve palsy (38.3%), followed by decompensated strabismus (30%). Neuroimaging showed structural lesions in 17.7% of the patients. There was spontaneous resolution in 28.3% of the cases, and a good outcome with disappearance of the diplopia in 53.3% at the end of the study. The most frequent causes of binocular diplopia were cranial nerve palsies, especially of the sixth cranial nerve, followed by decompensated strabismus. Structural lesions on imaging were more frequent than expected. Only one third of patients had a spontaneous resolution, and half of them did not have a good outcome despite treatment. Copyright © 2017 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.

  11. A computational model for dynamic vision

    Science.gov (United States)

    Moezzi, Saied; Weymouth, Terry E.

    1990-01-01

    This paper describes a novel computational model for dynamic vision which promises to be both powerful and robust. Furthermore the paradigm is ideal for an active vision system where camera vergence changes dynamically. Its basis is the retinotopically indexed object-centered encoding of the early visual information. Specifically, the relative distances of objects to a set of referents is encoded in image registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integration of depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.

  12. Photometric stereo sensor for robot-assisted industrial quality inspection of coated composite material surfaces

    Science.gov (United States)

    Weigl, Eva; Zambal, Sebastian; Stöger, Matthias; Eitzinger, Christian

    2015-04-01

    While composite materials are increasingly used in modern industry, the quality control in terms of vision-based surface inspection remains a challenging task. Due to the often complex and three-dimensional structures, a manual inspection of these components is nearly impossible. We present a photometric stereo sensor system including an industrial robotic arm for positioning the sensor relative to the inspected part. Two approaches are discussed: stop-and-go positioning and continuous positioning. Results are presented on typical defects that appear on various composite material surfaces in the production process.
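
    The core computation behind photometric stereo can be sketched in a few lines of Python: under a Lambertian assumption, per-pixel normals and albedo are recovered by least squares from images taken under several known light directions. This generic formulation is an illustration, not the sensor's actual processing pipeline.

        import numpy as np

        def photometric_stereo(images, light_dirs):
            # Lambertian photometric stereo: recover per-pixel surface normals and albedo
            # from images taken under several known illumination directions.
            I = np.stack([im.reshape(-1) for im in images]).astype(float)   # N x P intensities
            L = np.asarray(light_dirs, dtype=float)                         # N x 3 unit vectors
            G, *_ = np.linalg.lstsq(L, I, rcond=None)                       # solve L @ G = I, G is 3 x P
            albedo = np.linalg.norm(G, axis=0)
            normals = G / np.maximum(albedo, 1e-6)                          # unit normals (albedo factored out)
            h, w = images[0].shape
            return normals.T.reshape(h, w, 3), albedo.reshape(h, w)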

  13. Bionic Vision-Based Intelligent Power Line Inspection System.

    Science.gov (United States)

    Li, Qingwu; Ma, Yunpeng; He, Feijia; Xi, Shuya; Xu, Jinxin

    2017-01-01

    Detecting threats from external obstacles to power lines can ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The human visual attention mechanism in this intelligent inspection system is used to detect and track power lines in image sequences according to the shape information of power lines, and the binocular visual model is used to calculate the 3D coordinate information of obstacles and power lines. In order to improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system is able to accurately locate the position of obstacles around power lines automatically, that the designed power line inspection system is effective in complex backgrounds, and that there are no missed detections under different conditions.
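
    The 3D coordinate calculation mentioned above ultimately rests on triangulation from disparity in a calibrated, rectified stereo pair. The following is a minimal sketch of that step under a pinhole model; the focal length, baseline, principal point and example numbers are placeholders rather than values from the paper.

```python
import numpy as np

def disparity_to_xyz(u, v, disparity, f, baseline, cx, cy):
    """Triangulate a point seen at pixel (u, v) in the left image of a
    rectified stereo pair with the given positive disparity (pixels)."""
    if disparity <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    Z = f * baseline / disparity          # depth along the optical axis
    X = (u - cx) * Z / f                  # lateral offset
    Y = (v - cy) * Z / f                  # vertical offset
    return np.array([X, Y, Z])

# Example: f = 800 px, baseline = 0.12 m, principal point at (320, 240).
p = disparity_to_xyz(u=400, v=250, disparity=16, f=800.0, baseline=0.12, cx=320.0, cy=240.0)
# p ~ [0.60, 0.075, 6.0] metres
```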

  14. Binocular neurons in parastriate cortex: interocular 'matching' of receptive field properties, eye dominance and strength of silent suppression.

    Directory of Open Access Journals (Sweden)

    Phillip A Romo

    Full Text Available Spike-responses of single binocular neurons were recorded from a distinct part of primary visual cortex, the parastriate cortex (cytoarchitectonic area 18) of anaesthetized and immobilized domestic cats. Functional identification of neurons was based on the ratios of phase-variant (F1) component to the mean firing rate (F0) of their spike-responses to optimized (orientation, direction, spatial and temporal frequencies and size) sine-wave-luminance-modulated drifting grating patches presented separately via each eye. In over 95% of neurons, the interocular differences in the phase-sensitivities (differences in F1/F0 spike-response ratios) were small (≤ 0.3) and in over 80% of neurons, the interocular differences in preferred orientations were ≤ 10°. The interocular correlations of the direction selectivity indices and optimal spatial frequencies, like those of the phase sensitivities and optimal orientations, were also strong (coefficients of correlation r ≥ 0.7005). By contrast, the interocular correlations of the optimal temporal frequencies, the diameters of summation areas of the excitatory responses and suppression indices were weak (coefficients of correlation r ≤ 0.4585). In cells with high eye dominance indices (HEDI cells), the mean magnitudes of suppressions evoked by stimulation of silent, extra-classical receptive fields via the non-dominant eyes, were significantly greater than those when the stimuli were presented via the dominant eyes. We argue that the well documented 'eye-origin specific' segregation of the lateral geniculate inputs underpinning distinct eye dominance columns in primary visual cortices of mammals with frontally positioned eyes (distinct eye dominance columns), combined with significant interocular differences in the strength of silent suppressive fields, putatively contribute to binocular stereoscopic vision.
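
    The F1/F0 ratio used for functional identification is simply the amplitude of the spike-rate modulation at the grating drift frequency divided by the mean firing rate. A minimal sketch of that computation from a peri-stimulus time histogram is shown below; the bin width, drift frequency and synthetic response are illustrative assumptions, not recorded data.

```python
import numpy as np

def f1_f0_ratio(psth, bin_s, drift_hz):
    """Ratio of the Fourier amplitude at the stimulus drift frequency (F1)
    to the mean firing rate (F0), computed from a PSTH in spikes/s."""
    n = len(psth)
    t = np.arange(n) * bin_s
    f0 = psth.mean()
    # Single-frequency Fourier component (amplitude of the fundamental).
    f1 = 2.0 * np.abs(np.sum(psth * np.exp(-2j * np.pi * drift_hz * t))) / n
    return f1 / f0 if f0 > 0 else np.nan

# Illustrative PSTH: 1 s of response modulated at a 2 Hz drift frequency.
bin_s = 0.01
t = np.arange(0, 1.0, bin_s)
psth = 20.0 + 15.0 * np.cos(2 * np.pi * 2.0 * t)    # spikes/s
print(f1_f0_ratio(psth, bin_s, drift_hz=2.0))        # ~0.75, i.e. 15/20
```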

  15. Binocular Neurons in Parastriate Cortex: Interocular ‘Matching’ of Receptive Field Properties, Eye Dominance and Strength of Silent Suppression

    Science.gov (United States)

    Wang, Chun; Dreher, Bogdan

    2014-01-01

    Spike-responses of single binocular neurons were recorded from a distinct part of primary visual cortex, the parastriate cortex (cytoarchitectonic area 18) of anaesthetized and immobilized domestic cats. Functional identification of neurons was based on the ratios of phase-variant (F1) component to the mean firing rate (F0) of their spike-responses to optimized (orientation, direction, spatial and temporal frequencies and size) sine-wave-luminance-modulated drifting grating patches presented separately via each eye. In over 95% of neurons, the interocular differences in the phase-sensitivities (differences in F1/F0 spike-response ratios) were small (≤0.3) and in over 80% of neurons, the interocular differences in preferred orientations were ≤10°. The interocular correlations of the direction selectivity indices and optimal spatial frequencies, like those of the phase sensitivities and optimal orientations, were also strong (coefficients of correlation r ≥0.7005). By contrast, the interocular correlations of the optimal temporal frequencies, the diameters of summation areas of the excitatory responses and suppression indices were weak (coefficients of correlation r ≤0.4585). In cells with high eye dominance indices (HEDI cells), the mean magnitudes of suppressions evoked by stimulation of silent, extra-classical receptive fields via the non-dominant eyes, were significantly greater than those when the stimuli were presented via the dominant eyes. We argue that the well documented ‘eye-origin specific’ segregation of the lateral geniculate inputs underpinning distinct eye dominance columns in primary visual cortices of mammals with frontally positioned eyes (distinct eye dominance columns), combined with significant interocular differences in the strength of silent suppressive fields, putatively contribute to binocular stereoscopic vision. PMID:24927276

  16. A two-level real-time vision machine combining coarse and fine grained parallelism

    DEFF Research Database (Denmark)

    Jensen, Lars Baunegaard With; Kjær-Nielsen, Anders; Pauwels, Karl

    2010-01-01

    In this paper, we describe a real-time vision machine having a stereo camera as input generating visual information on two different levels of abstraction. The system provides visual low-level and mid-level information in terms of dense stereo and optical flow, egomotion, indicating areas...... a factor 90 and a reduction of latency of a factor 26 compared to processing on a single CPU core. Since the vision machine provides generic visual information it can be used in many contexts. Currently it is used in a driver assistance context as well as in two robotic applications....

  17. Influence of retinal image shifts and extra-retinal eye movement signals on binocular rivalry alternations

    NARCIS (Netherlands)

    Kalisvaart, J.P.; Goossens, J.

    2013-01-01

    Previous studies have indicated that saccadic eye movements correlate positively with perceptual alternations in binocular rivalry, presumably because the foveal image changes resulting from saccades, rather than the eye movements themselves, cause switches in awareness. Recently, however, we found

  18. Merged Shape from Shading and Shape from Stereo for Planetary Topographic Mapping

    Science.gov (United States)

    Tyler, Laurence; Cook, Tony; Barnes, Dave; Parr, Gerhard; Kirk, Randolph

    2014-05-01

    Digital Elevation Models (DEMs) of the Moon and Mars have traditionally been produced from stereo imagery from orbit, or from the surface landers or rovers. One core component of image-based DEM generation is stereo matching to find correspondences between images taken from different viewpoints. Stereo matchers that rely mostly on textural features in the images can fail to find enough matched points in areas lacking in contrast or surface texture. This can lead to blank or topographically noisy areas in resulting DEMs. Fine depth detail may also be lacking due to limited precision and quantisation of the pixel matching process. Shape from shading (SFS), a two dimensional version of photoclinometry, utilizes the properties of light reflecting off surfaces to build up localised slope maps, which can subsequently be combined to extract topography. This works especially well on homogeneous surfaces and can recover fine detail. However the cartographic accuracy can be affected by changes in brightness due to differences in surface material, albedo and light scattering properties, and also by the presence of shadows. We describe here experimental research for the Planetary Robotics Vision Data Exploitation EU FP7 project (PRoViDE) into using stereo generated depth maps in conjunction with SFS to recover both coarse and fine detail of planetary surface DEMs. Our Large Deformation Optimisation Shape From Shading (LDOSFS) algorithm uses image data, illumination, viewing geometry and camera parameters to produce a DEM. A stereo-derived depth map can be used as an initial seed if available. The software uses separate Bidirectional Reflectance Distribution Function (BRDF) and SFS modules for iterative processing and to make the code more portable for future development. Three BRDF models are currently implemented: Lambertian, Blinn-Phong, and Oren-Nayar. A version of the Hapke reflectance function, which is more appropriate for planetary surfaces, is under development
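
    The BRDF modules mentioned above each predict image brightness from a surface normal, a light direction and (for specular models) a view direction. As a hedged illustration, the sketch below evaluates only the two simplest models named in the record, Lambertian and Blinn-Phong, for a single surface element; the shininess value and the unit vectors are arbitrary examples, and the Oren-Nayar and Hapke models are omitted.

```python
import numpy as np

def lambertian(normal, light):
    """Lambertian reflectance: brightness proportional to n . l (clamped at zero)."""
    return max(float(np.dot(normal, light)), 0.0)

def blinn_phong(normal, light, view, shininess=16.0):
    """Blinn-Phong: Lambertian term plus a specular lobe around the half-vector."""
    half = (light + view) / np.linalg.norm(light + view)
    diffuse = max(float(np.dot(normal, light)), 0.0)
    specular = max(float(np.dot(normal, half)), 0.0) ** shininess
    return diffuse + specular

n = np.array([0.0, 0.0, 1.0])            # surface element facing the camera
l = np.array([0.0, 0.5, 0.866])          # light 30 degrees off the normal
v = np.array([0.0, 0.0, 1.0])            # viewer along the normal
print(lambertian(n, l), blinn_phong(n, l, v))
```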

  19. Stereo-Based Visual Odometry for Autonomous Robot Navigation

    Directory of Open Access Journals (Sweden)

    Ioannis Kostavelis

    2016-02-01

    Full Text Available Mobile robots should possess accurate self-localization capabilities in order to be successfully deployed in their environment. A solution to this challenge may be derived from visual odometry (VO, which is responsible for estimating the robot's pose by analysing a sequence of images. The present paper proposes an accurate, computationally-efficient VO algorithm relying solely on stereo vision images as inputs. The contribution of this work is twofold. Firstly, it suggests a non-iterative outlier detection technique capable of efficiently discarding the outliers of matched features. Secondly, it introduces a hierarchical motion estimation approach that produces refinements to the global position and orientation for each successive step. Moreover, for each subordinate module of the proposed VO algorithm, custom non-iterative solutions have been adopted. The accuracy of the proposed system has been evaluated and compared with competent VO methods along DGPS-assessed benchmark routes. Experimental results of relevance to rough terrain routes, including both simulated and real outdoors data, exhibit remarkable accuracy, with positioning errors lower than 2%.
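
    One widely used non-iterative building block in stereo VO is the closed-form (SVD-based) estimate of the rigid motion between two sets of already-matched, triangulated 3D points; whether the authors use exactly this solver is not stated in the record, so the sketch below should be read as a generic illustration of that step rather than their hierarchical scheme.

```python
import numpy as np

def rigid_motion_svd(P, Q):
    """Closed-form least-squares rotation R and translation t with Q ~ R @ P + t.

    P, Q: (N, 3) arrays of matched 3D points from consecutive frames."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate a point cloud by 10 degrees about Z and shift it.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
a = np.deg2rad(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
Q = P @ R_true.T + np.array([0.3, -0.1, 2.0])
R_est, t_est = rigid_motion_svd(P, Q)         # recovers R_true and the shift
```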

  20. Prevalence of strabismic binocular anomalies, amblyopia and anisometropia. Rehabilitation Faculty of Shahid Beheshti Medical University

    OpenAIRE

    Mohsen Akhgary; Mohammad Ghassemi-Broumand; Mohammad Aghazadeh Amiri; Mehdi Tabatabaee Seyed

    2011-01-01

    Purpose: Manifest strabismus, such as constant and alternating esotropia and exotropia, not only causes cosmetic problems in patients but also induces disorders such as amblyopia. These anomalies can lead to academic failure in students and reduced efficiency in other jobs. Therefore, determining the prevalence of binocular anomalies is important. The purpose of this study is to determine the prevalence of strabismic binocular anomalies, amblyopia and anisometropia in patients examined in optometr...

  1. Problems with balance and binocular visual dysfunction are associated with post-stroke fatigue

    DEFF Research Database (Denmark)

    Schow, Trine; Teasdale, Thomas William; Jensen Quas, Kirsten

    2016-01-01

    Trine Schow, Thomas William Teasdale, Kirsten Jensen Quas & Morten Arendt Rasmussen (2016): Problems with balance and binocular visual dysfunction are associated with post-stroke fatigue, Topics in Stroke Rehabilitation, DOI: 10.1080/10749357.2016.1188475

  2. Industrial vision

    DEFF Research Database (Denmark)

    Knudsen, Ole

    1998-01-01

    This dissertation is concerned with the introduction of vision-based applications in the ship building industry. The industrial research project is divided into a natural sequence of developments, from basic theoretical projective image generation via CAD and subpixel analysis to a description...... is presented, and the variability of the parameters is examined and described. The concept of using CAD together with vision information is based on the fact that all items processed at OSS have an associated complete 3D CAD model that is accessible at all production states. This concept gives numerous...... possibilities for using vision in applications which otherwise would be very difficult to automate. The requirement for low tolerances in production is, despite the huge dimensions of the items involved, extreme. This fact makes great demands on the ability to do robust subpixel estimation. A new method based...

  3. Opportunity's Surroundings on Sol 1798 (Stereo)

    Science.gov (United States)

    2009-01-01

    [Figures removed for brevity, see original site: left-eye and right-eye views of a color stereo pair for PIA11850.] NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo 180-degree view of the rover's surroundings during the 1,798th Martian day, or sol, of Opportunity's surface mission (Feb. 13, 2009). North is on top. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 111 meters (364 feet) southward on the preceding sol. Tracks from that drive recede northward in this view. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  4. Spirit Beside 'Home Plate,' Sol 1809 (Stereo)

    Science.gov (United States)

    2009-01-01

    [Figures removed for brevity, see original site: left-eye and right-eye views of a color stereo pair for PIA11803.] NASA Mars Exploration Rover Spirit used its navigation camera to take the images assembled into this stereo, 120-degree view southward after a short drive during the 1,809th Martian day, or sol, of Spirit's mission on the surface of Mars (February 3, 2009). By combining images from the left-eye and right-eye sides of the navigation camera, the view appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Spirit had driven about 2.6 meters (8.5 feet) that sol, continuing a clockwise route around a low plateau called 'Home Plate.' In this image, the rocks visible above the rover's solar panels are on the slope at the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  5. Delusion and bi-ocular vision.

    Science.gov (United States)

    De Masi, Franco

    2015-10-01

    The delusional experience is the result of a grave disjunction in the psyche whose outcome is not readily predictable. Examination of the specific mode of disjunction may help us understand the nature and radical character of delusion. I will present the therapy of a psychotic patient who, after many years of analysis and progress in his life, continues to show delusional episodes, although limited and contained. In his case, the two visions, one delusional and the other real, remain distinct and differentiated from each other because they both possess the same perceptual character, that of reality. He has a bi-ocular vision of reality and not a binocular one because his vision lacks integration, as would necessarily be the case if the two visions could be compared with each other. The principle of non-contradiction ceases to apply in delusion. A corollary of the failure of the principle of non-contradiction is that, if a statement and its negation are both true, then any statement is true. Logicians call this consequence the principle of explosion. For this reason, the distinction between truth, reality, improbability, probability, possibility and impossibility is lost in the delusional system, thus triggering an omnipotent, explosive mechanism with a potentially infinite progression. The paper presents some thoughts for a possible analytic transformation of the delusional experience. Copyright © 2015 Institute of Psychoanalysis.

  6. Tratamiento binocular de la ambliopía basado en la realidad virtual

    Directory of Open Access Journals (Sweden)

    Yanet Cristina Díaz Núñez

    Full Text Available Although the predominant treatments for amblyopia are monocular, they have poor acceptance and low effectiveness in re-establishing binocular combination. Considerable evidence supports the idea that amblyopia is essentially a binocular problem and that suppression plays a key role. This review presents two strategies for the binocular treatment of amblyopia based on virtual reality: the first with the primary objective of improving visual acuity, and the second with the aim of improving binocular functions through the reduction of suppression. This binocular approach exposes the patient to artificial viewing conditions with dichoptic stimuli in related images. Clinical studies, in both children and adults, report improvements in visual acuity and stereopsis in far less time than that required by occlusion. The clinical results suggest that a binocular approach combining both strategies can be used as a complement to classical treatments and as an alternative for adults and children with a history of failed or rejected treatments.

  7. Absence of binocular summation, eye dominance, and learning effects in color discrimination.

    Science.gov (United States)

    Costa, Marcelo Fernandes; Ventura, Dora Fix; Perazzolo, Felipe; Murakoshi, Marcio; Silveira, Luiz Carlos de Lima

    2006-01-01

    We evaluated binocular summation, eye dominance, and learning in the Trivector and Ellipses procedures of the Cambridge Colour Test (CCT). Subjects (n = 36, 18-30 years old) were recruited among students and staff from the University of São Paulo. Inclusion criteria were absence of ophthalmological complaints and best-corrected Snellen VA 20/20 or better. The subjects were tested in three randomly selected eye conditions: binocular, monocular dominant eye, and nondominant eye. Results obtained in the binocular and monocular conditions did not differ statistically for thresholds measured along the protan, deutan, and tritan confusion axes (ANOVA, P > 0.05). No statistical difference was detected among discrimination ellipses obtained in binocular or monocular conditions (ANOVA, P > 0.05), suggesting absence of binocular summation or of an effect of eye dominance. Possible effects of learning were examined by comparing successive thresholds obtained in the three testing conditions. There was no evidence of improvement as a function of testing order (ANCOVA, P > 0.05). We conclude that CCT thresholds are not affected by binocularity, eye dominance, or learning. Our results differ from those found by Verriest et al. (1982) using the Farnsworth-Munsell 100 Hue test and Hovis et al. (2004) using the Farnsworth-Munsell panel D-15 test.

  8. INVESTIGATION OF 1 : 1,000 SCALE MAP GENERATION BY STEREO PLOTTING USING UAV IMAGES

    Directory of Open Access Journals (Sweden)

    S. Rhee

    2017-08-01

    Full Text Available Large scale maps and image mosaics are representative geospatial data that can be extracted from UAV images. Map drawing using UAV images can be performed either by creating orthoimages and digitizing them, or by stereo plotting. While maps generated by digitization may serve the need for geospatial data, many institutions and organizations require map drawing using stereoscopic vision on stereo plotting systems. However, there are several aspects to be checked for UAV images to be utilized for stereo plotting. The first aspect is the accuracy of exterior orientation parameters (EOPs) generated through automated bundle adjustment processes. It is well known that GPS and IMU sensors mounted on a UAV are not very accurate. It is necessary to adjust initial EOPs accurately using tie points. For this purpose, we have developed a photogrammetric incremental bundle adjustment procedure. The second aspect is unstable shooting conditions compared to aerial photographing. Unstable image acquisition may bring uneven stereo coverage, which will result in accuracy loss eventually. Oblique stereo pairs will create eye fatigue. The third aspect is small coverage of UAV images. This aspect will raise an efficiency issue for stereo plotting of UAV images. More importantly, this aspect will make contour generation from UAV images very difficult. This paper will discuss effects related to these three aspects. In this study, we tried to generate a 1 : 1,000 scale map from the dataset using EOPs generated from software developed in-house. We evaluated Y-disparity of the tie points extracted automatically through the photogrammetric incremental bundle adjustment process. We could confirm that stereoscopic viewing is possible. Stereoscopic plotting work was carried out by a professional photogrammetrist. In order to analyse the accuracy of the map drawing using stereoscopic vision, we compared the horizontal and vertical position difference between adjacent models after
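
    The Y-disparity evaluation mentioned above amounts to checking the residual y-parallax of tie points in the relatively oriented (rectified) stereo model: if corresponding points still differ vertically by more than a fraction of a pixel, comfortable stereoscopic viewing suffers. A minimal sketch of that check is given below; the tie-point coordinates are placeholders, not the paper's data.

```python
import numpy as np

def y_disparity_stats(left_pts, right_pts):
    """Residual y-parallax of tie points in an (ideally) rectified stereo pair.

    left_pts, right_pts: (N, 2) pixel coordinates of corresponding tie points.
    Returns mean, RMS and maximum absolute y-disparity in pixels."""
    dy = left_pts[:, 1] - right_pts[:, 1]
    return dy.mean(), np.sqrt(np.mean(dy ** 2)), np.abs(dy).max()

# Placeholder tie points: small residuals indicate a usable stereo model.
left = np.array([[100.0, 200.0], [400.0, 410.5], [250.0, 333.2]])
right = np.array([[ 80.0, 200.3], [371.0, 410.1], [228.0, 333.0]])
print(y_disparity_stats(left, right))
```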

  9. Investigation of 1 : 1,000 Scale Map Generation by Stereo Plotting Using Uav Images

    Science.gov (United States)

    Rhee, S.; Kim, T.

    2017-08-01

    Large scale maps and image mosaics are representative geospatial data that can be extracted from UAV images. Map drawing using UAV images can be performed either by creating orthoimages and digitizing them, or by stereo plotting. While maps generated by digitization may serve the need for geospatial data, many institutions and organizations require map drawing using stereoscopic vision on stereo plotting systems. However, there are several aspects to be checked for UAV images to be utilized for stereo plotting. The first aspect is the accuracy of exterior orientation parameters (EOPs) generated through automated bundle adjustment processes. It is well known that GPS and IMU sensors mounted on a UAV are not very accurate. It is necessary to adjust initial EOPs accurately using tie points. For this purpose, we have developed a photogrammetric incremental bundle adjustment procedure. The second aspect is unstable shooting conditions compared to aerial photographing. Unstable image acquisition may bring uneven stereo coverage, which will result in accuracy loss eventually. Oblique stereo pairs will create eye fatigue. The third aspect is small coverage of UAV images. This aspect will raise an efficiency issue for stereo plotting of UAV images. More importantly, this aspect will make contour generation from UAV images very difficult. This paper will discuss effects related to these three aspects. In this study, we tried to generate a 1 : 1,000 scale map from the dataset using EOPs generated from software developed in-house. We evaluated Y-disparity of the tie points extracted automatically through the photogrammetric incremental bundle adjustment process. We could confirm that stereoscopic viewing is possible. Stereoscopic plotting work was carried out by a professional photogrammetrist. In order to analyse the accuracy of the map drawing using stereoscopic vision, we compared the horizontal and vertical position difference between adjacent models after drawing a specific

  10. A buyer's and user's guide to astronomical telescopes and binoculars

    CERN Document Server

    Mullaney, James

    2014-01-01

    Amateur astronomers of all skill levels are always contemplating their next telescope, and this book points the way to the most suitable instruments. Similarly, those who are buying their first telescopes – and these days not necessarily a low-cost one – will be able to compare and contrast different types and manufacturers. This revised new guide provides an extensive overview of binoculars and telescopes. It includes detailed up-to-date information on sources, selection and use of virtually every major type, brand, and model on today’s market, a truly invaluable treasure-trove of information and helpful advice for all amateur astronomers. Originally written in 2006, much of the first edition is inevitably now out of date, as equipment advances and manufacturers come and go. This second edition not only updates all the existing sections but adds two new ones: Astro-imaging and Professional-Amateur collaboration. Thanks to the rapid and amazing developments that have been made in digital cameras it is...

  11. Design of optical system for binocular fundus camera.

    Science.gov (United States)

    Wu, Jun; Lou, Shiliang; Xiao, Zhitao; Geng, Lei; Zhang, Fang; Wang, Wen; Liu, Mengjia

    2017-12-01

    A non-mydriatic optical system for a binocular fundus camera has been designed in this paper. It can capture two images of the same fundus retinal region from different angles at the same time, and can be used to achieve three-dimensional reconstruction of the fundus. It is composed of an imaging system and an illumination system. In the imaging system, the Gullstrand Le Grand eye model is used to simulate the normal human eye, and the Schematic eye model is used to test the influence of ametropia in the human eye on imaging quality. An annular aperture and a black dot board are added into the illumination system, so that the illumination system can eliminate stray light produced by corneal-reflected light and the ophthalmoscopic lens. Simulation results show that the MTF of each visual field at the cut-off frequency of 90 lp/mm is greater than 0.2, the system distortion value is -2.7%, field curvature is less than 0.1 mm, and the radius of the Airy disc is 3.25 μm. This system has a strong ability of chromatic aberration correction and focusing, and can image the human fundus clearly over a range of diopters from -10 D to +6 D (1 D = 1 m⁻¹).

  12. Vision Screening

    Science.gov (United States)

    Vision screening is an efficient and cost-effective method to identify children with visual impairment or eye conditions that are likely to lead ... The main goal of vision screening is to identify children who have or are at ... visual impairment unless treated in early childhood. Other problems that ...

  13. Healthy Vision Tips

    Science.gov (United States)

    Healthy vision starts with you! Use these ...

  14. Vector Disparity Sensor with Vergence Control for Active Vision Systems

    Science.gov (United States)

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P.; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used on robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after the image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two on-chip different alternatives for the vector disparity engines are discussed based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity up to 32 fps on VGA resolution images with very good accuracy as shown using benchmark sequences with known ground-truth. The performances in terms of frame-rate, resource utilization, and accuracy of the presented approaches are discussed. On the basis of these results, our study indicates that the gradient-based approach leads to the best trade-off choice for the integration with the active vision system. PMID:22438737
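
    The gradient-based engine referred to above follows the general luminance-constancy idea: for a rectified pair, a small horizontal shift is estimated from spatial and inter-ocular intensity differences over a local window. The sketch below is a deliberately simplified, single-scale CPU version of that idea (a 1-D Lucas-Kanade-style estimate using NumPy and SciPy), not the multiscale FPGA implementation described in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_disparity(left, right, win=7):
    """Single-scale, small-shift disparity estimate along image rows.

    Uses luminance constancy left(x) ~ right(x - d): linearizing gives
    d ~ sum(Ix * It) / sum(Ix^2) over a local window, with It = right - left."""
    Ix = np.gradient(left, axis=1)
    It = right - left
    num = uniform_filter(Ix * It, size=win, mode="nearest")
    den = uniform_filter(Ix * Ix, size=win, mode="nearest") + 1e-8
    return num / den

# Synthetic check: the right image is the left image shifted one pixel to the left,
# which corresponds to a disparity of about +1 pixel wherever the gradient is non-zero.
x = np.linspace(0, 4 * np.pi, 128)
left = np.tile(np.sin(x), (64, 1))
right = np.roll(left, -1, axis=1)
d = gradient_disparity(left, right)
```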

  15. Rapid, high-accuracy detection of strabismus and amblyopia using the pediatric vision scanner

    OpenAIRE

    Loudon, Sjoukje; Rook, Caitlin; Nassif, Deborah; Piskun, Nadya; Hunter, David

    2011-01-01

    Purpose. The Pediatric Vision Scanner (PVS) detects strabismus by identifying ocular fixation in both eyes simultaneously. This study was undertaken to assess the ability of the PVS to identify patients with amblyopia or strabismus, particularly anisometropic amblyopia with no measurable strabismus. Methods. The PVS test, administered from 40 cm and requiring 2.5 seconds of attention, generated a binocularity score (BIN, 0%-100%). We tested 154 patients and 48 controls between the...

  16. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Gagliano, L.; Bryan, T.; MacLeod, T.

    On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew. This poses a major risk of MOD damage to ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of Chief Technologist roadmap. For missions flying in or assembled in or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and proper mission design restrictions. There is a need to verify debris flux and size population versus ground RADAR tracking. Use of ISS for In-Situ Orbital Debris Tracking development provides attitude, power, data and orbital access without a dedicated spacecraft or restricted operations on-board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, and could enhance safety on and around ISS. Some technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit optical tracking (in situ) of various sized objects versus ground RADAR tracking and small OD models. The cameras are based on flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. By using twin cameras, we can provide stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  17. Pancam Peek into 'Victoria Crater' (Stereo)

    Science.gov (United States)

    2006-01-01

    [Figures removed for brevity, see original site: left-eye and right-eye views of a stereo pair for PIA08776.] A drive of about 60 meters (about 200 feet) on the 943rd Martian day, or sol, of Opportunity's exploration of Mars' Meridiani Planum region (Sept. 18, 2006) brought the NASA rover to within about 50 meters (about 160 feet) of the rim of 'Victoria Crater.' This crater has been the mission's long-term destination for the past 21 Earth months. Opportunity reached a location from which the cameras on top of the rover's mast could begin to see into the interior of Victoria. This stereo anaglyph was made from frames taken on sol 943 by the panoramic camera (Pancam) to offer a three-dimensional view when seen through red-blue glasses. It shows the upper portion of interior crater walls facing toward Opportunity from up to about 850 meters (half a mile) away. The amount of vertical relief visible at the top of the interior walls from this angle is about 15 meters (about 50 feet). The exposures were taken through a Pancam filter selecting wavelengths centered on 750 nanometers. Victoria Crater is about five times wider than 'Endurance Crater,' which Opportunity spent six months examining in 2004, and about 40 times wider than 'Eagle Crater,' where Opportunity first landed. The great lure of Victoria is the expectation that a thick stack of geological layers will be exposed in the crater walls, potentially several times the thickness that was previously studied at Endurance and therefore, potentially preserving several times the historical record.

  18. Predicting Visual Consciousness Electrophysiologically from Intermittent Binocular Rivalry

    Science.gov (United States)

    O’Shea, Robert P.; Kornmeier, Jürgen; Roeber, Urte

    2013-01-01

    Purpose We sought brain activity that predicts visual consciousness. Methods We used electroencephalography (EEG) to measure brain activity to a 1000-ms display of sine-wave gratings, oriented vertically in one eye and horizontally in the other. This display yields binocular rivalry: irregular alternations in visual consciousness between the images viewed by the eyes. We replaced both gratings with 200 ms of darkness, the gap, before showing a second display of the same rival gratings for another 1000 ms. We followed this by a 1000-ms mask then a 2000-ms inter-trial interval (ITI). Eleven participants pressed keys after the second display in numerous trials to say whether the orientation of the visible grating changed from before to after the gap or not. Each participant also responded to numerous non-rivalry trials in which the gratings had identical orientations for the two eyes and for which the orientation of both either changed physically after the gap or did not. Results We found that greater activity from lateral occipital-parietal-temporal areas about 180 ms after initial onset of rival stimuli predicted a change in visual consciousness more than 1000 ms later, on re-presentation of the rival stimuli. We also found that less activity from parietal, central, and frontal electrodes about 400 ms after initial onset of rival stimuli predicted a change in visual consciousness about 800 ms later, on re-presentation of the rival stimuli. There was no such predictive activity when the change in visual consciousness occurred because the stimuli changed physically. Conclusion We found early EEG activity that predicted later visual consciousness. Predictive activity 180 ms after onset of the first display may reflect adaption of the neurons mediating visual consciousness in our displays. Predictive activity 400 ms after onset of the first display may reflect a less-reliable brain state mediating visual consciousness. PMID:24124536

  19. Lambda Vision

    Science.gov (United States)

    Czajkowski, Michael

    2014-06-01

    There is an explosion in the quantity and quality of IMINT data being captured in Intelligence Surveillance and Reconnaissance (ISR) today. While automated exploitation techniques involving computer vision are arriving, only a few architectures can manage both the storage and bandwidth of large volumes of IMINT data and also present results to analysts quickly. Lockheed Martin Advanced Technology Laboratories (ATL) has been actively researching in the area of applying Big Data cloud computing techniques to computer vision applications. This paper presents the results of this work in adopting a Lambda Architecture to process and disseminate IMINT data using computer vision algorithms. The approach embodies an end-to-end solution by processing IMINT data from sensors to serving information products quickly to analysts, independent of the size of the data. The solution lies in dividing up the architecture into a speed layer for low-latent processing and a batch layer for higher quality answers at the expense of time, but in a robust and fault-tolerant way. This approach was evaluated using a large corpus of IMINT data collected by a C-130 Shadow Harvest sensor over Afghanistan from 2010 through 2012. The evaluation data corpus included full motion video from both narrow and wide area field-of-views. The evaluation was done on a scaled-out cloud infrastructure that is similar in composition to those found in the Intelligence Community. The paper shows experimental results to prove the scalability of the architecture and precision of its results using a computer vision algorithm designed to identify man-made objects in sparse data terrain.
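
    The essence of the Lambda Architecture described above is that a query merges a slowly recomputed, authoritative batch view with a low-latency speed view that covers only the data arrived since the last batch run. The framework-free sketch below illustrates just that merge; the view contents, cut-off time and detection classes are invented for illustration and have nothing to do with the actual ATL system.

```python
from datetime import datetime

# Batch view: precomputed, authoritative counts up to the last batch run.
batch_view = {"vehicle": 120, "building": 45}
batch_cutoff = datetime(2012, 6, 1, 0, 0, 0)

# Speed view: incremental counts for data that arrived after the batch cutoff.
speed_view = {"vehicle": 7, "road": 3}

def query(object_class, now):
    """Merge the batch and speed layers for one detection class."""
    total = batch_view.get(object_class, 0)
    if now > batch_cutoff:                       # speed layer only covers recent data
        total += speed_view.get(object_class, 0)
    return total

print(query("vehicle", datetime(2012, 6, 1, 12, 0)))   # 127
```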

  20. Cartesian visions.

    Science.gov (United States)

    Fara, Patricia

    2008-12-01

    Few original portraits exist of René Descartes, yet his theories of vision were central to Enlightenment thought. French philosophers combined his emphasis on sight with the English approach of insisting that ideas are not innate, but must be built up from experience. In particular, Denis Diderot criticised Descartes's views by describing how Nicholas Saunderson--a blind physics professor at Cambridge--relied on touch. Diderot also made Saunderson the mouthpiece for some heretical arguments against the existence of God.

  1. Multi-hypothesis distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Zamarin, Marco; Forchhammer, Søren

    2013-01-01

    for stereo sequences, exploiting an interpolated intra-view SI and two inter-view SIs. The quality of the SI has a major impact on the DVC Rate-Distortion (RD) performance. As the inter-view SIs individually present lower RD performance compared with the intra-view SI, we propose multi-hypothesis decoding...... for robust fusion and improved performance. Compared with a state-of-the-art single side information solution, the proposed DVC decoder improves the RD performance for all the chosen test sequences by up to 0.8 dB. The proposed multi-hypothesis decoder showed higher robustness compared with other fusion...

  2. Developing stereo image based robot control system

    Energy Technology Data Exchange (ETDEWEB)

    Suprijadi,; Pambudi, I. R.; Woran, M.; Naa, C. F; Srigutomo, W. [Department of Physics, FMIPA, InstitutTeknologi Bandung Jl. Ganesha No. 10. Bandung 40132, Indonesia supri@fi.itb.ac.id (Indonesia)

    2015-04-16

    Applications of image processing have been developed for various fields and purposes. In the last decade, image-based systems have increased rapidly with the increasing performance of hardware and microprocessors. Many fields of science and technology have used these methods, especially medicine and instrumentation. New stereovision techniques that give a 3-dimensional image or movie are very interesting, but there are not many applications in control systems. A stereo image has pixel disparity information that does not exist in a single image. In this research, we proposed a new method for a wheel robot control system using stereovision. The result shows that the robot automatically moves based on stereovision captures.

  3. Artificial vision.

    Science.gov (United States)

    Zarbin, M; Montemagno, C; Leary, J; Ritch, R

    2011-09-01

    A number of treatment options are emerging for patients with retinal degenerative disease, including gene therapy, trophic factor therapy, visual cycle inhibitors (e.g., for patients with Stargardt disease and allied conditions), and cell transplantation. A radically different approach, which will augment but not replace these options, is termed neural prosthetics ("artificial vision"). Although rewiring of inner retinal circuits and inner retinal neuronal degeneration occur in association with photoreceptor degeneration in retinitis pigmentosa (RP), it is possible to create visually useful percepts by stimulating retinal ganglion cells electrically. This fact has led to the development of techniques to induce photosensitivity in cells that are not normally light sensitive as well as to the development of the bionic retina. Advances in artificial vision continue at a robust pace. These advances are based on the use of molecular engineering and nanotechnology to render cells light-sensitive, to target ion channels to the appropriate cell type (e.g., bipolar cell) and/or cell region (e.g., dendritic tree vs. soma), and on sophisticated image processing algorithms that take advantage of our knowledge of signal processing in the retina. Combined with advances in gene therapy, pathway-based therapy, and cell-based therapy, "artificial vision" technologies create a powerful armamentarium with which ophthalmologists will be able to treat blindness in patients who have a variety of degenerative retinal diseases.

  4. A study on approaching motion perception in periphery with binocular viewing: Visibility is increased in the absence of one eye's information

    Science.gov (United States)

    Wang, Lei; Idesawa, Masanori; Wang, Qin

    2009-07-01

    The visibility of an approaching target on the horizontal plane in peripheral vision with binocular viewing was studied. It was found that perceptual performance for a target moving toward the midpoint between the two eyes was remarkably poor; under this circumstance, performance was rather high in the absence of one eye's target information, whether through occlusion or because the target fell on the blind spot. These facts imply that the conventional changing-disparity mechanism does not work in the peripheral visual field, while some simple combinations of the monocular information from the two eyes, such as the signed sum of the two eyes' image motion, can be used to detect an approaching motion in the periphery.

  5. Real-time loudspeaker distance estimation with stereo audio

    DEFF Research Database (Denmark)

    2017-01-01

    A method for estimating a distance between a first and a second loudspeaker characterized by playing back a first stereo source signal vector s1 on the first loudspeaker, and playing back a second stereo source signal vector s2 on the second loudspeaker, acquiring a first recorded signal vector x...

  6. LED-based Photometric Stereo: Modeling, Calibration and Numerical Solutions

    DEFF Research Database (Denmark)

    Quéau, Yvain; Durix, Bastien; Wu, Tao

    2018-01-01

    We conduct a thorough study of photometric stereo under nearby point light source illumination, from modeling to numerical solution, through calibration. In the classical formulation of photometric stereo, the luminous fluxes are assumed to be directional, which is very difficult to achieve in pr...
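
    Under a nearby point light, the effective lighting direction and intensity vary from surface point to surface point instead of being a single global vector, which is what makes the calibration and the numerics harder than in the classical directional case. The sketch below computes that per-point field for one LED, assuming an isotropic source with inverse-square fall-off; this is a simplification for illustration, not the full model studied in the paper.

```python
import numpy as np

def near_light_field(points, light_pos, intensity=1.0):
    """Per-point lighting direction and attenuated intensity for a nearby point source.

    points:    (N, 3) surface points in the same frame as light_pos
    light_pos: (3,) LED position
    returns:   unit directions (N, 3) and intensities (N,) with 1/r^2 fall-off."""
    vec = light_pos - points
    r = np.linalg.norm(vec, axis=1, keepdims=True)
    dirs = vec / r
    return dirs, intensity / (r[:, 0] ** 2)

# Illustrative surface points 0.5 m below a single LED.
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.2, 0.0]])
dirs, atten = near_light_field(pts, light_pos=np.array([0.0, 0.0, 0.5]))
```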

  7. Grasping Unknown Objects in an Early Cognitive Vision System

    DEFF Research Database (Denmark)

    Popovic, Mila

    2011-01-01

    Grasping of unknown objects presents an important and challenging part of robot manipulation. The growing area of service robotics depends upon the ability of robots to autonomously grasp and manipulate a wide range of objects in everyday environments. Simple, non task-specific grasps of unknown ...... and comparing vision-based grasping methods, and the creation of algorithms for bootstrapping a process of acquiring world understanding for artificial cognitive agents....... presents a system for robotic grasping of unknown objects using stereo vision. Grasps are defined based on contour and surface information provided by the Early Cognitive Vision System, that organizes visual information into a biologically motivated hierarchical representation. The contributions...... of the thesis are: the extension of the Early Cognitive Vision representation with a new type of feature hierarchy in the texture domain, the definition and evaluation of contour based grasping methods, the definition and evaluation of surface based grasping methods, the definition of a benchmark for testing...

  8. Humanoid monocular stereo measuring system with two degrees of freedom using bionic optical imaging system

    Science.gov (United States)

    Du, Jia-Wei; Wang, Xuan-Yin; Zhu, Shi-Qiang

    2017-10-01

    Based on the process by which the spatial depth clue is obtained by a single eye, a monocular stereo vision method to measure the depth information of spatial objects was proposed in this paper, and a humanoid monocular stereo measuring system with two degrees of freedom was demonstrated. The proposed system can effectively obtain the three-dimensional (3-D) structure of spatial objects at different distances without changing the position of the system and has the advantages of being exquisite, smart, and flexible. The bionic optical imaging system we proposed in a previous paper, named ZJU SY-I, was employed, and its vision characteristic was just like the resolution decay of the eye's vision from center to periphery. We simplified the eye's rotation in the eye socket and the coordinated rotation of other organs of the body into two rotations in orthogonal directions, and employed a rotating platform with two rotation degrees of freedom to drive ZJU SY-I. The structure of the proposed system was described in detail. The depth of a single feature point on the spatial object was deduced, as well as its spatial coordinates. With the focal length adjustment of ZJU SY-I and the rotation control of the rotation platform, the spatial coordinates of all feature points on the spatial object could be obtained, and then the 3-D structure of the spatial object could be reconstructed. The 3-D structure measurement experiments of two spatial objects with different distances and sizes were conducted. Some main factors affecting the measurement accuracy of the proposed system were analyzed and discussed.

  9. Opportunity's Surroundings After Sol 1820 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    [Figures removed for brevity, see original site: left-eye and right-eye views of a color stereo pair for PIA11841.] NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,820th to 1,822nd Martian days, or sols, of Opportunity's surface mission (March 7 to 9, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 20.6 meters toward the northwest on Sol 1820 before beginning to take the frames in this view. Tracks from that drive recede southwestward. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and small exposures of lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  10. Opportunity's Surroundings on Sol 1818 (Stereo)

    Science.gov (United States)

    2009-01-01

    [Figures removed for brevity, see original site: left-eye and right-eye views of a color stereo pair for PIA11846.] NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,818th Martian day, or sol, of Opportunity's surface mission (March 5, 2009). South is at the center; north at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 80.3 meters (263 feet) southward earlier on that sol. Tracks from the drive recede northward in this view. The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  11. Stereo topography of Valhalla and Gilgamesh

    Science.gov (United States)

    Schenk, P.; McKinnon, W.; Moore, J.

    1997-03-01

    The geology and morphology of the large multiring impact structures Valhalla and Gilgamesh have been used to infer ways in which the interior structure and properties of the large icy satellites Callisto and Ganymede differ from rocky bodies. These earlier studies were made in the absence of topographic data showing the depths of large impact basins and the degree to which relief has been preserved at large and small scales. Using Voyager stereo images of these basins, we have constructed the first detailed topographic maps of these large basins. These maps reveal the absence of deep topographic depressions, but show that multi-kilometer relief is preserved near the center of Valhalla. Digital Elevation Models (DEM) of these basins were produced using an automated digital stereogrammetry program developed at LPI for use with Voyager and Viking images. The Voyager images used here were obtained from distances of 80,000 to 125,000 km. As a result, the formal vertical resolution for both Valhalla and Gilgamesh maps is about 0.5 km. Relative elevations only are mapped as no global topographic datum exists for the Galilean satellites. In addition, the stereo image models were used to remap the geology and structure of these multiring basins in detail.

  12. Explaining Polarization Reversals in STEREO Wave Data

    Science.gov (United States)

    Breneman, A.; Cattell, C.; Wygant, J.; Kersten, K.; Wilson, L, B., III; Dai, L.; Colpitts, C.; Kellogg, P. J.; Goetz, K.; Paradise, A.

    2012-01-01

    Recently, Breneman et al. reported observations of large amplitude lightning and transmitter whistler mode waves from two STEREO passes through the inner radiation belt. We show, with a combination of observations and simulated wave superposition, that these polarization reversals are due to the beating of an incident electromagnetic whistler mode wave at 21.4 kHz and linearly polarized, symmetric lower hybrid sidebands Doppler-shifted from the incident wave by +/-200 Hz. The existence of the lower hybrid waves is consistent with the parametric decay mechanism of Lee and Kuo whereby an incident whistler mode wave decays into symmetric, short wavelength lower hybrid waves and a purely growing (zero-frequency) mode. Like the lower hybrid waves, the purely growing mode is Doppler-shifted by 200 Hz as observed on STEREO. This decay mechanism in the upper ionosphere has been previously reported at equatorial latitudes and is thought to have a direct connection with explosive spread F enhancements. As such it may represent another dissipation mechanism of VLF wave energy in the ionosphere and may help to explain a deficit of observed lightning and transmitter energy in the inner radiation belts as reported by Starks et al.

  13. Stereo matching using epipolar distance transform.

    Science.gov (United States)

    Yang, Qingxiong; Ahuja, Narendra

    2012-10-01

    In this paper, we propose a simple but effective image transform, called the epipolar distance transform, for matching low-texture regions. It converts image intensity values to a relative location inside a planar segment along the epipolar line, such that pixels in the low-texture regions become distinguishable. We theoretically prove that the transform is affine invariant, thus the transformed images can be directly used for stereo matching. Any existing stereo algorithms can be directly used with the transformed images to improve reconstruction accuracy for low-texture regions. Results on real indoor and outdoor images demonstrate the effectiveness of the proposed transform for matching low-texture regions, keypoint detection, and description for low-texture scenes. Our experimental results on Middlebury images also demonstrate the robustness of our transform for highly textured scenes. The proposed transform has a great advantage, its low computational complexity. It was tested on a MacBook Air laptop computer with a 1.8 GHz Core i7 processor, with a speed of about 9 frames per second for a video graphics array-sized image.
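
    In spirit, the transform replaces a pixel's intensity with its relative position inside the segment that contains it along the epipolar line, so that otherwise identical low-texture pixels become distinguishable. The sketch below is a deliberately simplified 1-D version of that idea for a single rectified scanline; the edge threshold and the exact normalization are illustrative choices and should not be taken as the published formulation.

```python
import numpy as np

def epipolar_distance_transform_1d(row, edge_thresh=10.0):
    """Replace each pixel by its relative position within the segment that
    contains it, where segments are delimited by large intensity jumps."""
    edges = np.where(np.abs(np.diff(row.astype(float))) > edge_thresh)[0] + 1
    bounds = np.concatenate(([0], edges, [len(row)]))
    out = np.zeros(len(row), dtype=float)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        length = max(hi - lo - 1, 1)
        out[lo:hi] = np.arange(hi - lo) / length   # 0 at the left edge, 1 at the right
    return out

row = np.array([50, 50, 50, 50, 200, 200, 200, 200, 200, 200], dtype=float)
print(epipolar_distance_transform_1d(row))
# low-texture pixels inside each segment now carry distinct, affine-comparable values
```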

  14. The High Energy Telescope for STEREO

    Science.gov (United States)

    von Rosenvinge, T. T.; Reames, D. V.; Baker, R.; Hawk, J.; Nolan, J. T.; Ryan, L.; Shuman, S.; Wortman, K. A.; Mewaldt, R. A.; Cummings, A. C.; Cook, W. R.; Labrador, A. W.; Leske, R. A.; Wiedenbeck, M. E.

    2008-04-01

    The IMPACT investigation for the STEREO Mission includes a complement of Solar Energetic Particle instruments on each of the two STEREO spacecraft. Of these instruments, the High Energy Telescopes (HETs) provide the highest energy measurements. This paper describes the HETs in detail, including the scientific objectives, the sensors, the overall mechanical and electrical design, and the on-board software. The HETs are designed to measure the abundances and energy spectra of electrons, protons, He, and heavier nuclei up to Fe in interplanetary space. For protons and He that stop in the HET, the kinetic energy range corresponds to ~13 to 40 MeV/n. Protons that do not stop in the telescope (referred to as penetrating protons) are measured up to ~100 MeV/n, as are penetrating He. For stopping He, the individual isotopes 3He and 4He can be distinguished. Stopping electrons are measured in the energy range ~0.7-6 MeV.

  15. Feasibility of remote evaporation and precipitation estimates. [by stereo images

    Science.gov (United States)

    Sadeh, W. Z.

    1974-01-01

    Remote sensing by means of stereo images obtained from flown cameras and scanners provides the potential to monitor the dynamics of pollutant mixing over large areas. Moreover, stereo technology may permit monitoring of pollutant concentration and mixing with sufficient detail to ascertain the structure of a polluted air mass. Consequently, stereo remote systems can be employed to supply data to set forth adequate regional standards on air quality. A method of remote sensing using stereo images is described. Preliminary results concerning the planar extent of a plume based on comparison with ground measurements by an alternate method, e.g., remote hot-wire anemometer technique, are supporting the feasibility of using stereo remote sensing systems.

  16. Eyesight quality and Computer Vision Syndrome.

    Science.gov (United States)

    Bogdănici, Camelia Margareta; Săndulache, Diana Elena; Nechita, Corina Andreea

    2017-01-01

    The aim of the study was to analyze the effects that gadgets have on eyesight quality. A prospective observational study was conducted from January to July 2016 on 60 people who were divided into two groups: Group 1 - 30 middle school pupils with a mean age of 11.9 ± 1.86 years, and Group 2 - 30 patients evaluated in the Ophthalmology Clinic, "Sf. Spiridon" Hospital, Iași, with a mean age of 21.36 ± 7.16 years. The clinical parameters observed were the following: visual acuity (VA), objective refraction, binocular vision (BV), fusional amplitude (FA), and Schirmer's test. A questionnaire was also distributed, which contained 8 questions that highlighted the gadgets' impact on eyesight. The use of different gadgets, such as computers, laptops, mobile phones or other displays, has become part of our everyday life, and people experience a variety of ocular symptoms or vision problems related to these. Computer Vision Syndrome (CVS) represents a group of visual and extraocular symptoms associated with sustained use of visual display terminals. Headache, blurred vision, and ocular congestion are the most frequent manifestations caused by prolonged use of gadgets. Mobile phones and laptops are the most frequently used gadgets. People who use gadgets for a long time make a sustained accommodative effort. A small amount of refractive error (especially a myopic shift) was objectively recorded by various studies on near work. Dry eye syndrome could also be identified, and an improvement of visual comfort could be observed after the instillation of artificial tear drops. Computer Vision Syndrome is still under-diagnosed, and people should be made aware of the adverse effects that prolonged use of gadgets has on eyesight.

  17. Embodied Visions

    DEFF Research Database (Denmark)

    Grodal, Torben Kragh

    Embodied Visions presents a groundbreaking analysis of film through the lens of bioculturalism, revealing how human biology as well as human culture determine how films are made and experienced. Throughout the book the author uses the breakthroughs of modern brain science to explain general ... melodramas - from evolutionary and psychological perspectives, the author also reflects on social issues at the intersection of film theory and neuropsychology. These include moral problems in film viewing, how we experience realism and character identification, and the value of the subjective forms ...

  18. Visión binocular : diagnóstico y tratamiento

    OpenAIRE

    Borràs García, M. Rosa

    1996-01-01

    This book is addressed to all professionals in the field of optometry who wish to deepen their knowledge of binocular vision. It is also suitable for third-year Optometry students, in both core and elective subjects. Its contents are divided into chapters that can be read independently, although it is advisable to approach the text as a unit. Its structure ranges from the most frequent binocular dysfunctions to strabismus and amblyo...

  19. Implementation of an ISIS Compatible Stereo Processing Chain for 3D Stereo Reconstruction

    Science.gov (United States)

    Tasdelen, E.; Unbekannt, H.; Willner, K.; Oberst, J.

    2012-09-01

    The department for Planetary Geodesy at TU Berlin is developing routines for photogrammetric processing of planetary image data to derive 3D representations of planetary surfaces. The ISIS software, developed by USGS, Flagstaff, is readily available, open source, and very well documented. Hence, ISIS [1] was chosen as the prime processing platform and tool kit. However, ISIS does not provide a full photogrammetric stereo processing chain. Several components, such as image matching, bundle block adjustment (until recently) and digital terrain model (DTM) interpolation from 3D object points, are missing. Our group aims to complete this photogrammetric stereo processing chain by implementing the missing components, taking advantage of already existing ISIS classes and functionality. With this abstract we would like to report on the development of new image matching software that is optimized for both orbital and close-range planetary images and compatible with ISIS formats and routines, and an interpolation tool developed to create DTMs from large 3D point clouds.

  20. Pediatric Low Vision

    Science.gov (United States)

    What is Low Vision? Partial vision loss that cannot be corrected causes ... and play. What are the signs of Low Vision? Some signs of low vision include difficulty recognizing ...

  1. No-reference stereoscopic image quality measurement based on generalized local ternary patterns of binocular energy response

    Science.gov (United States)

    Zhou, Wujie; Yu, Lu

    2015-09-01

    Perceptual no-reference (NR) quality measurement of stereoscopic images has become a challenging issue in three-dimensional (3D) imaging fields. In this article, we propose an efficient binocular quality-aware features extraction scheme, namely generalized local ternary patterns (GLTP) of binocular energy response, for general-purpose NR stereoscopic image quality measurement (SIQM). More specifically, we first construct the binocular energy response of a distorted stereoscopic image with different stimuli of amplitude and phase shifts. Then, the binocular quality-aware features are generated from the GLTP of the binocular energy response. Finally, these features are mapped to the subjective quality score of the distorted stereoscopic image by using support vector regression. Experiments on two publicly available 3D databases confirm the effectiveness of the proposed metric compared with the state-of-the-art full reference and NR metrics.

  2. No-reference stereoscopic image quality measurement based on generalized local ternary patterns of binocular energy response

    International Nuclear Information System (INIS)

    Zhou, Wujie; Yu, Lu

    2015-01-01

    Perceptual no-reference (NR) quality measurement of stereoscopic images has become a challenging issue in three-dimensional (3D) imaging fields. In this article, we propose an efficient binocular quality-aware features extraction scheme, namely generalized local ternary patterns (GLTP) of binocular energy response, for general-purpose NR stereoscopic image quality measurement (SIQM). More specifically, we first construct the binocular energy response of a distorted stereoscopic image with different stimuli of amplitude and phase shifts. Then, the binocular quality-aware features are generated from the GLTP of the binocular energy response. Finally, these features are mapped to the subjective quality score of the distorted stereoscopic image by using support vector regression. Experiments on two publicly available 3D databases confirm the effectiveness of the proposed metric compared with the state-of-the-art full reference and NR metrics. (paper)

  3. Visions and visioning in foresight activities

    DEFF Research Database (Denmark)

    Jørgensen, Michael Søgaard; Grosu, Dan

    2007-01-01

    The theoretical part of the paper presents an actor-network theory approach to the analysis of visions and visioning processes, where the shaping of the visions and the visioning, and what has made them work or not work, is analysed. The empirical part is based on analyses of the roles of visions ... Important aspects of visioning processes include the types of actors participating in the processes and the types of expertise included in the processes (scientific, lay, business, etc.). The empirical part of the paper analyses eight national foresight activities from Denmark, Germany, Hungary, Malta ...

  4. Multispectral and stereo imaging on Mars

    Science.gov (United States)

    Levinthal, E. C.; Huck, F. O.

    1976-01-01

    Relevant aspects of the design and function of the two-window Viking Landing Camera system are described, with particular reference to some results of its operation on Mars during the Viking mission. A major feature of the system is that the optical tunnel between the lens and the photosensor array contains a multiaperture baffle designed to reduce veiling glare and to attenuate radio frequency interference from the lander antennas. The principle of operation of the contour mode is described. The accuracy is limited by the stereo base, resolution of camera picture elements, and geometric calibration. To help determine the desirability as well as the safety of possible sample sites, use is made of both radiometric and photogrammetric information for each picture element to combine high-resolution pictures with low-resolution color pictures of the same area. Explanatory photographs supplement the text.

  5. Robust photometric stereo using structural light sources

    Science.gov (United States)

    Han, Tian-Qi; Cheng, Yue; Shen, Hui-Liang; Du, Xin

    2014-05-01

    We propose a robust photometric stereo method by using structural arrangement of light sources. In the arrangement, light sources are positioned on a planar grid and form a set of collinear combinations. The shadow pixels are detected by adaptive thresholding. The specular highlight and diffuse pixels are distinguished according to their intensity deviations of the collinear combinations, thanks to the special arrangement of light sources. The highlight detection problem is cast as a pattern classification problem and is solved using support vector machine classifiers. Considering the possible misclassification of highlight pixels, the ℓ1 regularization is further employed in normal map estimation. Experimental results on both synthetic and real-world scenes verify that the proposed method can robustly recover the surface normal maps in the case of heavy specular reflection and outperforms the state-of-the-art techniques.
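
    As a point of reference for the normal-map recovery step described above, the classic Lambertian least-squares formulation can be sketched as follows; this is a minimal sketch only — it omits the paper's shadow thresholding, SVM highlight classification and ℓ1 regularization, and the light directions and images are placeholder data:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo from images taken under
    known directional lights (classic Lambertian least-squares formulation).

    images:     (k, h, w) array, one grayscale image per light
    light_dirs: (k, 3) array of unit light directions
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                              # (k, h*w) intensities
    # Solve L @ g = I for g = albedo * normal at every pixel simultaneously
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)

# Toy usage with random placeholder data standing in for real captures
rng = np.random.default_rng(0)
L = rng.normal(size=(8, 3)); L /= np.linalg.norm(L, axis=1, keepdims=True)
normals, albedo = photometric_stereo(rng.random((8, 32, 32)), L)
```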

  6. Explaining polarization reversals in STEREO wave data

    Science.gov (United States)

    Breneman, A.; Cattell, C.; Wygant, J.; Kersten, K.; Wilson, L. B., III; Dai, L.; Colpitts, C.; Kellogg, P. J.; Goetz, K.; Paradise, A.

    2012-04-01

    Recently, Breneman et al. (2011) reported observations of large amplitude lightning and transmitter whistler mode waves from two STEREO passes through the inner radiation belt (L ...). Wave measurements in the plane transverse to the magnetic field showed that the transmitter waves underwent periodic polarization reversals. Specifically, their polarization would cycle through a pattern of right-hand to linear to left-hand polarization at a rate of roughly 200 Hz. The lightning whistlers were observed to be left-hand polarized at frequencies greater than the lower hybrid frequency and less than the transmitter frequency (21.4 kHz) and right-hand polarized otherwise. In this frequency range of the whistler mode, only right-hand polarized waves should exist in the inner radiation belt, and these reversals were not explained in the previous paper. We show, with a combination of observations and simulated wave superposition, that these polarization reversals are due to the beating of an incident electromagnetic whistler mode wave at 21.4 kHz and linearly polarized, symmetric lower hybrid sidebands Doppler-shifted from the incident wave by ±200 Hz. The existence of the lower hybrid waves is consistent with the parametric decay mechanism of Lee and Kuo (1984) whereby an incident whistler mode wave decays into symmetric, short wavelength lower hybrid waves and a purely growing (zero-frequency) mode. Like the lower hybrid waves, the purely growing mode is Doppler-shifted by ˜200 Hz as observed on STEREO. This decay mechanism in the upper ionosphere has been previously reported at equatorial latitudes and is thought to have a direct connection with explosive spread F enhancements. As such it may represent another dissipation mechanism of VLF wave energy in the ionosphere and may help to explain a deficit of observed lightning and transmitter energy in the inner radiation belts as reported by Starks et al. (2008).

  7. Extending the Stabilized Supralinear Network model for binocular image processing.

    Science.gov (United States)

    Selby, Ben; Tripp, Bryan

    2017-06-01

    The visual cortex is both extensive and intricate. Computational models are needed to clarify the relationships between its local mechanisms and high-level functions. The Stabilized Supralinear Network (SSN) model was recently shown to account for many receptive field phenomena in V1, and also to predict subtle receptive field properties that were subsequently confirmed in vivo. In this study, we performed a preliminary exploration of whether the SSN is suitable for incorporation into large, functional models of the visual cortex, considering both its extensibility and computational tractability. First, whereas the SSN receives abstract orientation signals as input, we extended it to receive images (through a linear-nonlinear stage), and found that the extended version behaved similarly. Secondly, whereas the SSN had previously been studied in a monocular context, we found that it could also reproduce data on interocular transfer of surround suppression. Finally, we reformulated the SSN as a convolutional neural network, and found that it scaled well on parallel hardware. These results provide additional support for the plausibility of the SSN as a model of lateral interactions in V1, and suggest that the SSN is well suited as a component of complex vision models. Future work will use the SSN to explore relationships between local network interactions and sophisticated vision processes in large networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
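
    A minimal sketch of a linear-nonlinear image front end of the kind mentioned above (oriented Gabor filtering followed by rectification); the filter parameters below are assumptions for illustration, not the values used in the study:

```python
import numpy as np
import cv2

def ln_stage(image, n_orients=4, ksize=21):
    """Linear-nonlinear front end: oriented Gabor filtering followed by
    half-wave rectification, giving orientation-tuned input drives."""
    drives = []
    for i in range(n_orients):
        theta = np.pi * i / n_orients
        kern = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        resp = cv2.filter2D(image.astype(np.float32), cv2.CV_32F, kern)
        drives.append(np.maximum(resp, 0.0))    # rectifying nonlinearity
    return np.stack(drives)                      # (n_orients, h, w)

drive = ln_stage(np.random.rand(64, 64).astype(np.float32))
```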

  8. Robot vision

    International Nuclear Information System (INIS)

    Hall, E.L.

    1984-01-01

    Almost all industrial robots use internal sensors such as shaft encoders which measure rotary position, or tachometers which measure velocity, to control their motions. Most controllers also provide interface capabilities so that signals from conveyors, machine tools, and the robot itself may be used to accomplish a task. However, advanced external sensors, such as visual sensors, can provide a much greater degree of adaptability for robot control as well as add automatic inspection capabilities to the industrial robot. Visual and other sensors are now being used in fundamental operations such as material processing with immediate inspection, material handling with adaption, arc welding, and complex assembly tasks. A new industry of robot vision has emerged. The application of these systems is an area of great potential

  9. Vision Screening

    Science.gov (United States)

    1993-01-01

    The Visi Screen OSS-C, marketed by Vision Research Corporation, incorporates image processing technology originally developed by Marshall Space Flight Center. Its advantage in eye screening is speed. Because it requires no response from a subject, it can be used to detect eye problems in very young children. An electronic flash from a 35 millimeter camera sends light into a child's eyes, which is reflected back to the camera lens. The photorefractor then analyzes the retinal reflexes generated and produces an image of the child's eyes, which enables a trained observer to identify any defects. The device is used by pediatricians, day care centers and civic organizations that concentrate on children with special needs.

  10. Method for Stereo Mapping Based on Objectarx and Pipeline Technology

    Science.gov (United States)

    Liu, F.; Chen, T.; Lin, Z.; Yang, Y.

    2012-07-01

    Stereo mapping is an important way to acquire 4D products. Based on the development of stereo mapping and the characteristics of ObjectARX and pipeline technology, a new stereo mapping scheme that realizes the interaction between AutoCAD and a digital photogrammetry system is offered using ObjectARX and pipeline technology. An experiment was conducted to verify the feasibility of the scheme using the software MAP-AT (Modern Aerial Photogrammetry Automatic Triangulation) as an example; the experimental results show that the scheme is feasible and is of great importance for integrating data acquisition and editing.

  11. METHOD FOR STEREO MAPPING BASED ON OBJECTARX AND PIPELINE TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    F. Liu

    2012-07-01

    Full Text Available Stereo mapping is an important way to acquire 4D products. Based on the development of stereo mapping and the characteristics of ObjectARX and pipeline technology, a new stereo mapping scheme that realizes the interaction between AutoCAD and a digital photogrammetry system is offered using ObjectARX and pipeline technology. An experiment was conducted to verify the feasibility of the scheme using the software MAP-AT (Modern Aerial Photogrammetry Automatic Triangulation) as an example; the experimental results show that the scheme is feasible and is of great importance for integrating data acquisition and editing.

  12. Blindness and vision loss

    Science.gov (United States)

    ... life. Alternative Names: Loss of vision; No light perception (NLP); Low vision; Vision loss and blindness.

  13. Impairments to Vision

    Science.gov (United States)

    Impairments to Vision: Normal Vision, Diabetic Retinopathy, Age-related Macular Degeneration. In this ... pictures, fixate on the nose to simulate the vision loss. In diabetic retinopathy, the blood vessels in ...

  14. All Vision Impairment

    Science.gov (United States)

    Tables of 2010 U.S. prevalence rates of vision impairment by age and race/ethnicity.

  15. Current state of the art of vision based SLAM

    Science.gov (United States)

    Muhammad, Naveed; Fofi, David; Ainouz, Samia

    2009-02-01

    The ability of a robot to localise itself and simultaneously build a map of its environment (Simultaneous Localisation and Mapping or SLAM) is a fundamental characteristic required for autonomous operation of the robot. Vision Sensors are very attractive for application in SLAM because of their rich sensory output and cost effectiveness. Different issues are involved in the problem of vision based SLAM and many different approaches exist in order to solve these issues. This paper gives a classification of state-of-the-art vision based SLAM techniques in terms of (i) imaging systems used for performing SLAM which include single cameras, stereo pairs, multiple camera rigs and catadioptric sensors, (ii) features extracted from the environment in order to perform SLAM which include point features and line/edge features, (iii) initialisation of landmarks which can either be delayed or undelayed, (iv) SLAM techniques used which include Extended Kalman Filtering, Particle Filtering, biologically inspired techniques like RatSLAM, and other techniques like Local Bundle Adjustment, and (v) use of wheel odometry information. The paper also presents the implementation and analysis of stereo pair based EKF SLAM for synthetic data. Results prove the technique to work successfully in the presence of considerable amounts of sensor noise. We believe that state of the art presented in the paper can serve as a basis for future research in the area of vision based SLAM. It will permit further research in the area to be carried out in an efficient and application specific way.
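
    As an illustration of why a stereo pair removes the landmark-initialisation ambiguity discussed above, a matched feature in a rectified pair can be back-projected directly from its disparity; this is a sketch with hypothetical calibration values, not the implementation evaluated in the paper:

```python
import numpy as np

def stereo_triangulate(u_l, v_l, u_r, fx, fy, cx, cy, baseline):
    """Back-project a feature matched in a rectified stereo pair to a 3D point
    in the left-camera frame, using the disparity d = u_l - u_r."""
    d = float(u_l - u_r)
    if d <= 0:
        raise ValueError("non-positive disparity")
    Z = fx * baseline / d
    X = (u_l - cx) * Z / fx
    Y = (v_l - cy) * Z / fy
    return np.array([X, Y, Z])

# Hypothetical calibration and one matched feature
p = stereo_triangulate(u_l=400.0, v_l=240.0, u_r=380.0,
                       fx=700.0, fy=700.0, cx=320.0, cy=240.0, baseline=0.12)
```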

  16. MOBILE STEREO-MAPPER: A PORTABLE KIT FOR UNMANNED AERIAL VEHICLES

    Directory of Open Access Journals (Sweden)

    J. Li-Chee-Ming

    2012-09-01

    Full Text Available A low-cost, portable, light-weight mobile stereo-mapping system (MSMS) is under development in the GeoICT Lab, Geomatics Engineering program at York University. The MSMS is designed for remote operation on board unmanned aerial vehicles (UAVs) for navigation and rapid collection of 3D spatial data. Pose estimation of the camera sensors is based on single-frequency RTK-GPS, loosely coupled in a Kalman filter with a MEMS-based IMU. The attitude and heading reference system (AHRS) calculates orientation from the gyro data, aided by accelerometer and magnetometer data to compensate for gyro drift. Two low-cost consumer digital cameras are calibrated and time-synchronized with the GPS/IMU to provide directly georeferenced stereo vision, while a video camera is used for navigation. Object coordinates are determined using rigorous photogrammetric solutions supported by direct georeferencing algorithms for accurate pose estimation of the camera sensors. Before the MSMS is considered operational, its sensor components and the integrated system itself have to undergo a rigorous calibration process to determine systematic errors and biases and to determine the relative geometry of the sensors. In this paper, the methods and results for system calibration, including camera, boresight and lever-arm calibrations, are presented. An overall accuracy assessment of the calibrated system is given using a 3D test field.

  17. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments.

    Science.gov (United States)

    Ramon Soria, Pablo; Arrue, Begoña C; Ollero, Anibal

    2017-01-07

    The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.
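
    A rough sketch of the kind of feature-based detection stage described above, using ORB features with a ratio test and a RANSAC homography check; the detector, matcher settings and thresholds are assumptions, and the paper's own feature model and stereo-based filtering are not reproduced:

```python
import cv2
import numpy as np

def match_object(model_img, scene_img, min_matches=15):
    """Detect a known object by matching ORB features of an offline model image
    against the current scene image (ratio test + RANSAC homography check).
    Both inputs are expected to be grayscale uint8 images."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(model_img, None)
    kp2, des2 = orb.detectAndCompute(scene_img, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < min_matches:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H   # maps model-image coordinates into the scene image
```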

  18. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Pablo Ramon Soria

    2017-01-01

    Full Text Available The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.

  19. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments

    Science.gov (United States)

    Ramon Soria, Pablo; Arrue, Begoña C.; Ollero, Anibal

    2017-01-01

    The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors. PMID:28067851

  20. Quantitative visual fields under binocular viewing conditions in primary and consecutive divergent strabismus

    NARCIS (Netherlands)

    Joosse, M. V.; Simonsz, H. J.; van Minderhout, E. M.; Mulder, P. G.; de Jong, P. T.

    1999-01-01

    Although there have been a number of studies on the size of the suppression scotoma in divergent strabismus, there have been no reports on the full extent (i.e. size as well as depth) of this scotoma. Binocular static perimetry was used to measure suppression scotomas in five patients with primary

  1. The impact of stimulus complexity and frequency swapping on stabilization of binocular rivalry

    DEFF Research Database (Denmark)

    Sandberg, Kristian; Bahrami, B; Lindeløv, Jonas Kristoffer

    2011-01-01

    Binocular rivalry occurs when an image is presented to one eye while at the same time another, incongruent, image is presented to the other eye in the corresponding retinotopic location and conscious perception alternates spontaneously between the two monocular views. If a short blank period is i...

  2. Image-based and eye-based influences on binocular rivalry have similar spatial profiles

    NARCIS (Netherlands)

    Stuit, Sjoerd; Brascamp, Jan; Barendregt, Maurits; van der Smagt, Maarten; te Pas, Susan

    2017-01-01

    Binocular rivalry occurs when the images presented to the two eyes do not match. Instead of fusing into a stable percept, perception during rivalry alternates between images over time. However, during rivalry, perception can also resemble a patchwork of parts of both eyes' images. Such integration

  3. Human cortical neural correlates of visual fatigue during binocular depth perception: An fNIRS study.

    Directory of Open Access Journals (Sweden)

    Tingting Cai

    Full Text Available Functional near-infrared spectroscopy (fNIRS) was adopted to investigate the cortical neural correlates of visual fatigue during binocular depth perception for different disparities (from 0.1° to 1.5°). By using a slow event-related paradigm, the oxyhaemoglobin (HbO) responses to fused binocular stimuli presented by the random-dot stereogram (RDS) were recorded over the whole visual dorsal area. To extract from an HbO curve the characteristics that are correlated with subjective experiences of stereopsis and visual fatigue, we proposed a novel method to fit the time-course HbO curve with various response functions which could reflect various processes of binocular depth perception. Our results indicate that the parietal-occipital cortices are spatially correlated with binocular depth perception and that the process of depth perception includes two steps, associated with generating and sustaining stereovision. Visual fatigue is caused mainly by generating stereovision, while the amplitude of the haemodynamic response corresponding to sustaining stereovision is correlated with stereopsis. Combining statistical parameter analysis and the fitted time-course analysis, fNIRS could be a promising method to study visual fatigue and possibly other multi-process neural bases.
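
    The idea of fitting an HbO time course with a sum of response functions can be illustrated with a small curve-fitting sketch; the two-component gamma-like response below is an assumed stand-in, not the authors' response functions, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def response(t, a1, tau1, a2, tau2):
    """Sum of two gamma-like response functions standing in for the
    'generating' and 'sustaining' components of the HbO time course."""
    g = lambda t, tau: (t / tau) ** 2 * np.exp(-t / tau)
    return a1 * g(t, tau1) + a2 * g(t, tau2)

t = np.linspace(0.0, 30.0, 300)                       # seconds
hbo = response(t, 1.0, 3.0, 0.5, 10.0) + 0.05 * np.random.randn(t.size)
params, _ = curve_fit(response, t, hbo, p0=[1.0, 2.0, 1.0, 8.0])
```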

  4. The time course of binocular rivalry during the phases of the menstrual cycle

    Science.gov (United States)

    Sy, Jocelyn L.; Tomarken, Andrew J.; Patel, Vaama; Blake, Randolph

    2016-01-01

    Binocular rivalry occurs when markedly different inputs to the two eyes initiate alternations in perceptual dominance between the two eyes' views. A link between individual differences in perceptual dynamics of rivalry and concentrations of GABA, a prominent inhibitory neurotransmitter in the brain, has highlighted binocular rivalry as a potential tool to investigate inhibitory processes in the brain. The present experiment investigated whether previously reported fluctuations of GABA concentrations in a healthy menstrual cycle (Epperson et al., 2002) also are associated with measurable changes in rivalry dynamics within individuals. We obtained longitudinal measures of alternation rate, dominance, and mixture durations in 300 rivalry tracking blocks measured over 5 weeks from healthy female participants who monitored the start of the follicular and luteal phases of their cycle. Although we demonstrate robust and stable individual differences in rivalry dynamics, across analytic approaches and dependent measures, we found no significant change or even trends across menstrual phases in the temporal dynamics of dominance percepts. We found only sparse between-phase differences in skew and kurtosis on mixture percepts when data were pooled across sessions and blocks. These results suggest a complex dynamic between hormonal steroids, binocular rivalry, and GABAeric signaling in the brain and thus implicate the need to consider a systemic perspective when linking GABA with perceptual alternations in binocular rivalry. PMID:28006072

  5. Comparison of Performance on a Tracking Task Utilizing Binocular, Dominant and Non-Dominant Viewing.

    Science.gov (United States)

    1980-03-01

    clearness of detail, changes in colour, lights and shadows, movement parallax and accommodation (Postman and Egan, 1949). C. MONOCULAR AND BINOCULAR ...

  6. CONSTRUCTION OF A SYSTEM FOR DEFINING AREAS WHICH ARE NOT OBTAINED DATA FROM STEREO IMAGES

    Directory of Open Access Journals (Sweden)

    H. Yanagi

    2012-07-01

    Full Text Available Recently, digital documentation and visualization of various cultural assets have been receiving attention. For example, a small Buddha with a height of approximately 4 cm is categorized as a cultural asset, and such a small object has to be documented. Generally, in order to perform 3D modeling of small objects by digital very close-range photogrammetry, multi-view images are taken. However, it is important to confirm the occluded parts and the image quality after taking the images. In order to confirm image defocusing and the areas for which no data were obtained from the stereo images, a system is proposed for confirming the data with a macro lens and convenient 3D measurement software called 3DiVision. Finally, it is proposed that the data be supplemented in the areas for which no data were obtained.

  7. Construction of a System for Defining Areas which are not Obtained Data from Stereo Images

    Science.gov (United States)

    Yanagi, H.; Chikatsu, H.

    2012-07-01

    Recently, digital documentation and visualization of various cultural assets have been receiving attention. For example, a small Buddha with a height of approximately 4 cm is categorized as a cultural asset, and such a small object has to be documented. Generally, in order to perform 3D modeling of small objects by digital very close-range photogrammetry, multi-view images are taken. However, it is important to confirm the occluded parts and the image quality after taking the images. In order to confirm image defocusing and the areas for which no data were obtained from the stereo images, a system is proposed for confirming the data with a macro lens and convenient 3D measurement software called 3DiVision. Finally, it is proposed that the data be supplemented in the areas for which no data were obtained.

  8. A depth estimation method based on geometric transformation for stereo light microscope.

    Science.gov (United States)

    Fan, Shengli; Yu, Mei; Wang, Yigang; Jiang, Gangyi

    2014-01-01

    Stereo light microscopes (SLM), with their narrow field of view and shallow depth of field, are widely used in micro-domain research. In this paper, we propose a depth estimation method for micro objects based on geometric transformation. By analyzing the optical imaging geometry, the geometric transformation distance is defined and the depth-distance relation expression is obtained. The parameters of the geometric transformation and the expression are calibrated with calibration-board images captured with the aid of a precise motorized stage. The depth of a micro object can then be estimated by calculating the geometric transformation distance. The proposed depth-distance relation expression is verified in an experiment in which the depth map of an Olanzapine tablet surface is reconstructed.
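
    The calibration of such a depth-distance relation can be pictured with a simple fit of depth against measured geometric transformation distance; the values and the polynomial form below are assumptions for illustration, not the paper's calibrated expression:

```python
import numpy as np

# Hypothetical calibration data: geometric transformation distances measured
# from calibration-board images at known motorized-stage depths.
distances = np.array([0.0, 1.2, 2.5, 3.9, 5.4, 7.0])   # transformation distance
depths    = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])   # stage depth in mm

# Fit a low-order depth-distance relation (the paper's actual expression may differ)
depth_of = np.poly1d(np.polyfit(distances, depths, deg=2))
print(depth_of(3.0))    # estimated depth for a newly measured distance
```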

  9. Dynamic Shape Capture of Free-Swimming Aquatic Life using Multi-view Stereo

    Science.gov (United States)

    Daily, David

    2017-11-01

    In the past, the reconstruction and tracking of swimming fish has either been restricted to flumes and small volumes or to sparse point tracking in large tanks. The purpose of this research is to use an array of cameras to automatically track 50-100 points on the surface of a fish using the multi-view stereo computer vision technique. The method is non-invasive, thus allowing the fish to swim freely in a large volume and to perform more advanced maneuvers such as rolling, darting, stopping, and reversing, which have not been studied. The techniques for obtaining and processing the 3D kinematics and maneuvers of tuna, sharks, stingrays, and other species will be presented and compared. The National Aquarium and the Naval Undersea Warfare Center ...

  10. Teater (stereo)tüüpide loojana / Anneli Saro

    Index Scriptorium Estoniae

    Saro, Anneli, 1968-

    2006-01-01

    Introduces the themes of the conference "Teater sotsiaalsete ja kultuuriliste (stereo)tüüpide loojana" (Theatre as a creator of social and cultural (stereo)types), organised by the Estonian Association of Theatre Researchers and the Chair of Theatre Research and Literary Theory of the University of Tartu, taking place on 27 March at the University of Tartu History Museum.

  11. MISR Level 2 TOA/Cloud Stereo parameters V002

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the Level 2 TOA/Cloud Stereo Product. It contains the Stereoscopically Derived Cloud Mask (SDCM), cloud winds, Reflecting Level Reference Altitude (RLRA),...

  12. Optimized Progressive Coding of Stereo Images Using Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Torsten Palfner

    2003-06-01

    Full Text Available In this paper, a compression algorithm is introduced which allows the efficient storage and transmission of stereo images. The coder uses a block-based disparity estimation/compensation technique to decorrelate the image pair. To code both images progressively, we have adapted the well-known SPIHT coder to stereo images. The results presented in this paper are better than any other results published so far.
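
    Block-based disparity estimation of the kind used to decorrelate the stereo pair can be sketched as a per-block SAD search along the scanline; this is a simplified sketch, and the coder's actual block size, search range and compensation details are not specified here:

```python
import numpy as np

def block_disparity(left, right, block=8, max_disp=32):
    """Estimate one integer disparity per block of the left image by a
    sum-of-absolute-differences (SAD) search along the same scanline."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(np.int32)
            best, best_d = None, 0
            for d in range(0, min(max_disp, x) + 1):
                cand = right[y:y + block, x - d:x - d + block].astype(np.int32)
                sad = np.abs(ref - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp
```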

  13. Stereo-separations of Peptides by Capillary Electrophoresis and Chromatography

    OpenAIRE

    sprotocols

    2014-01-01

    Authors: Afzal Hussain, Iqbal Hussain, Mohamed F. Al-Ajmi & Imran Ali. Abstract: Small peptides (di-, tri-, tetra-, penta-, hexa-, etc.) control many chemical and biological processes, and the stereomers of peptides are of great biological importance. The stereo-separation of peptides is gaining importance in the biological and medicinal sciences and in the pharmaceutical industry. There is a great need for experimental protocols for the stereo-separation of peptides. The various c...

  14. Enhanced operator perception through 3D vision and haptic feedback

    Science.gov (United States)

    Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren

    2012-06-01

    Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.

  15. STEREO PHOTO HYDROGEL, A PROCESS OF MAKING SAID STEREO PHOTO HYDROGEL, POLYMERS FOR USE IN MAKING SUCH HYDROGEL AND A PHARMACEUTICAL COMPRISING SAID POLYMERS

    NARCIS (Netherlands)

    Hiemstra, C.; Zhong, Zhiyuan; Feijen, Jan

    2008-01-01

    The Invention relates to a stereo photo hydrogel formed by stereo complexed and photo cross-linked polymers, which polymers comprise at least two types of polymers having at least one hydrophilic component, at least one hydrophobic mutually stereo complexing component, and at least one of the types

  16. The STEREO IMPACT Suprathermal Electron (STE) Instrument

    Science.gov (United States)

    Lin, R. P.; Curtis, D. W.; Larson, D. E.; Luhmann, J. G.; McBride, S. E.; Maier, M. R.; Moreau, T.; Tindall, C. S.; Turin, P.; Wang, Linghua

    2008-04-01

    The Suprathermal Electron (STE) instrument, part of the IMPACT investigation on both spacecraft of NASA’s STEREO mission, is designed to measure electrons from ˜2 to ˜100 keV. This is the primary energy range for impulsive electron/3He-rich energetic particle events that are the most frequently occurring transient particle emissions from the Sun, for the electrons that generate solar type III radio emission, for the shock accelerated electrons that produce type II radio emission, and for the superhalo electrons (whose origin is unknown) that are present in the interplanetary medium even during the quietest times. These electrons are ideal for tracing heliospheric magnetic field lines back to their source regions on the Sun and for determining field line lengths, thus probing the structure of interplanetary coronal mass ejections (ICMEs) and of the ambient inner heliosphere. STE utilizes arrays of small, passively cooled thin window silicon semiconductor detectors, coupled to state-of-the-art pulse-reset front-end electronics, to detect electrons down to ˜2 keV with about 2 orders of magnitude increase in sensitivity over previous sensors at energies below ˜20 keV. STE provides an energy resolution of ΔE/E ˜ 10-25% and an angular resolution of ˜20° over two oppositely directed ˜80°×80° fields of view centered on the nominal Parker spiral field direction.

  17. Stereo and regioselectivity in ''Activated'' tritium reactions

    International Nuclear Information System (INIS)

    Ehrenkaufer, R.L.E.; Hembree, W.C.; Wolf, A.P.

    1988-01-01

    To investigate the stereo and positional selectivity of the microwave discharge activation (MDA) method, the tritium labeling of several amino acids was undertaken. The labeling of L-valine and the diastereomeric pair L-isoleucine and L-alloisoleucine showed less than statistical labeling at the α-amino C-H position mostly with retention of configuration. Labeling predominated at the single β C-H tertiary (methyne) position. The labeling of L-valine and L-proline with and without positive charge on the α-amino group resulted in large increases in specific activity (greater than 10-fold) when positive charge was removed by labeling them as their sodium carboxylate salts. Tritium NMR of L-proline labeled both as its zwitterion and sodium salt showed also large differences in the tritium distribution within the molecule. The distribution preferences in each of the charge states are suggestive of labeling by an electrophilic like tritium species(s). 16 refs., 5 tabs

  18. Neuroticism modifies the association of vision impairment and cognition among community-dwelling older adults.

    Science.gov (United States)

    Gaynes, Bruce I; Shah, Raj; Leurgans, Sue; Bennett, David

    2013-01-01

    Vision impairment (best-corrected binocular visual acuity worse than 20/40) is a common age-related health condition requiring adaptation to maintain well-being. Whether neuroticism, a personality trait associated with decreased ability to adapt to change, modifies the association of vision impairment with worse cognition is uncertain. Using baseline visual acuity, neuroticism and cognitive function data from 714 community-dwelling, older participants in the Rush Memory and Aging Project, we examined whether self-reported neuroticism level modified the cross-sectional association between vision impairment and lower cognitive level. Women represented 76% of the participants. The mean age was 79.6 (SD = 6.9) years and the mean education level was 14.6 (SD = 2.9) years; 26% of the participants had vision impairment. In a linear regression model adjusted for age, sex and education, each unit higher in neuroticism level worsened the association between vision impairment and lower global cognitive function level (parameter estimate for vision impairment and neuroticism interaction term = -0.017; standard error = 0.005; p = 0.001). For participants with vision impairment, a high neuroticism level (50th percentile or above) was associated with a mean global cognitive score that was 0.297 z-score units lower than for participants with a low neuroticism level (p ...). Among community-dwelling older persons, neuroticism modifies the association between vision impairment and cognitive function level. Copyright © 2012 S. Karger AG, Basel.
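
    The reported interaction analysis corresponds to an ordinary linear regression with a vision-impairment-by-neuroticism product term; a sketch with hypothetical data follows (variable names and values are invented for illustration, not the study data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame; variable names are invented for illustration
df = pd.DataFrame({
    "global_cognition": [0.1, -0.3, 0.4, -0.8, 0.0, -0.5, 0.2, -0.1, -0.6, 0.3],
    "age":              [75, 82, 78, 88, 80, 85, 77, 90, 84, 79],
    "female":           [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    "education":        [16, 12, 14, 10, 18, 12, 15, 11, 13, 17],
    "vision_impaired":  [0, 1, 0, 1, 0, 1, 0, 1, 1, 0],   # acuity worse than 20/40
    "neuroticism":      [10, 22, 15, 30, 12, 25, 18, 28, 20, 14],
})

# The product term tests whether neuroticism modifies the
# vision-impairment / cognition association.
model = smf.ols("global_cognition ~ age + female + education"
                " + vision_impaired * neuroticism", data=df).fit()
print(model.params["vision_impaired:neuroticism"])
```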

  19. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2017-01-01

    Any exploration vehicle assembled or spacecraft placed in LEO or GTO must pass through this debris cloud and survive. Large cross-section, low-thrust vehicles will spend more time spiraling out through the cloud and will suffer more impacts. Better knowledge of small debris will improve survival odds. The currently estimated density of debris at various orbital altitudes, with notation of recent collisions and resulting spikes, illustrates the hazard. Orbital debris tracking and characterization has now been added to the NASA Office of the Chief Technologist's Technology Development Roadmap in Technology Area 5 (TA5.7) [Orbital Debris Tracking and Characterization] and addresses a technical gap in current national space situational awareness, which is necessary to safeguard orbital assets and crews given the risk of orbital debris damage to ISS and exploration vehicles. The problem: traditional orbital trackers looking for small, dim orbital derelicts and debris typically stare at the stars and let any light reflected off the debris integrate in the imager for seconds, thus creating a streak across the image. The solution: the Small Tracker will see stars and other celestial objects rise through its field of view (FOV) at the rotational rate of its orbit, but the glint off orbital objects will move through the FOV at different rates and in different directions. Debris on a head-on (or nearly head-on) collision course will stay in the FOV at 14 km/s. The Small Tracker can track at 60 frames per second, allowing up to 30 fixes before a near-miss pass. A stereo pair of Small Trackers can provide range data within 5-7 km for better orbit measurements.

  20. An iPod treatment of amblyopia: an updated binocular approach.

    Science.gov (United States)

    Hess, Robert F; Thompson, B; Black, J M; Machara, G; Zhang, P; Bobier, W R; Cooperstock, J

    2012-02-15

    We describe the successful translation of computerized and space-consuming laboratory equipment for the treatment of suppression to a small handheld iPod device (Apple iPod; Apple Inc., Cupertino, California). A portable and easily obtainable Apple iPod display, using current video technology, offers an ideal solution for the clinical treatment of suppression. The following is a description of the iPod device and illustrates how a video game has been adapted to provide the appropriate stimulation to implement our recent antisuppression treatment protocol. One to two hours per day of video game playing under controlled conditions for 1 to 3 weeks can improve acuity and restore binocular function, including stereopsis, in adults well beyond the age at which traditional patching is used. This handheld device provides a convenient and effective platform for implementing the newly proposed binocular treatment of amblyopia in the clinic, home, or elsewhere. American Optometric Association.

  1. PixonVision real-time video processor

    Science.gov (United States)

    Puetter, R. C.; Hier, R. G.

    2007-09-01

    PixonImaging LLC and DigiVision, Inc. have developed a real-time video processor, the PixonVision PV-200, based on the patented Pixon method for image deblurring and denoising, and DigiVision's spatially adaptive contrast enhancement processor, the DV1000. The PV-200 can process NTSC and PAL video in real time with a latency of 1 field (1/60th of a second), remove the effects of aerosol scattering from haze, mist, smoke, and dust, improve spatial resolution by up to 2x, decrease noise by up to 6x, and increase local contrast by up to 8x. A newer version of the processor, the PV-300, is now in prototype form and can handle high definition video. Both the PV-200 and PV-300 are FPGA-based processors, which could be spun into ASICs if desired. Obvious applications of these processors include the DOD (tanks, aircraft, and ships), homeland security, intelligence, surveillance, and law enforcement. If developed into an ASIC, these processors will be suitable for a variety of portable applications, including gun sights, night vision goggles, binoculars, and guided munitions. This paper presents a variety of examples of PV-200 processing, including examples appropriate to border security, battlefield applications, port security, and surveillance from unmanned aerial vehicles.

  2. Congruence analysis of point clouds from unstable stereo image sequences

    Directory of Open Access Journals (Sweden)

    C. Jepping

    2014-06-01

    Full Text Available This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focusses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for a reliable handling of occlusions and other disturbances that may occur. The second objective of this research is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another by using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
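
    The congruence analysis described above rests on repeatedly fitting a 3D similarity transformation to randomly selected point groups; a compact sketch is given below (an Umeyama-style closed-form fit inside a RANSAC loop, with an assumed sample size and inlier tolerance, not the authors' implementation):

```python
import numpy as np

def similarity_transform(A, B):
    """Least-squares 3D similarity transform (scale s, rotation R, translation t)
    mapping point set A onto B, both of shape (n, 3) (Umeyama's method)."""
    muA, muB = A.mean(0), B.mean(0)
    X, Y = A - muA, B - muB
    U, S, Vt = np.linalg.svd(Y.T @ X / len(A))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:
        D[2, 2] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / X.var(axis=0).sum()
    t = muB - s * R @ muA
    return s, R, t

def ransac_congruence(A, B, iters=200, tol=0.01, rng=np.random.default_rng(0)):
    """Find points that move congruently between epochs A and B by fitting
    similarity transforms to random minimal point groups (RANSAC)."""
    best_inliers = np.zeros(len(A), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(A), size=4, replace=False)
        s, R, t = similarity_transform(A[idx], B[idx])
        resid = np.linalg.norm(B - (s * (A @ R.T) + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers   # stable (congruent) points between the two epochs
```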

  3. Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry

    Science.gov (United States)

    Kersten, J.; Rodehorst, V.

    2016-06-01

    Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimations usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem for monocular cameras is avoided when a light-weight stereo camera setup is used. However, also frame-to-frame stereo visual odometry (VO) approaches are known to accumulate pose estimation errors over time. Several valuable real-time capable techniques for outlier detection and drift reduction in frame-to-frame VO, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment, are available. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
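
    Feature bucketing, one of the simple strategies evaluated above, can be sketched as keeping only the strongest detections per image grid cell; the grid size and per-cell limit below are illustrative assumptions:

```python
import numpy as np

def bucket_features(keypoints, scores, img_w, img_h, grid=(8, 6), per_cell=10):
    """Feature bucketing: keep at most `per_cell` highest-scoring keypoints in
    each grid cell so that features are spread evenly over the image.

    keypoints: (n, 2) array of (x, y) pixel positions
    scores:    (n,) array of detector responses
    """
    gx, gy = grid
    cell_w, cell_h = img_w / gx, img_h / gy
    cx = np.minimum((keypoints[:, 0] // cell_w).astype(int), gx - 1)
    cy = np.minimum((keypoints[:, 1] // cell_h).astype(int), gy - 1)
    keep = []
    for i in range(gx):
        for j in range(gy):
            idx = np.where((cx == i) & (cy == j))[0]
            idx = idx[np.argsort(-scores[idx])][:per_cell]
            keep.extend(idx.tolist())
    return np.array(sorted(keep), dtype=int)
```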

  4. Composition of a vision screen for servicemembers with traumatic brain injury: consensus using a modified nominal group technique.

    Science.gov (United States)

    Radomski, Mary Vining; Finkelstein, Marsha; Llanos, Imelda; Scheiman, Mitchell; Wagener, Sharon Gowdy

    2014-01-01

    Vision impairment is common in the first year after traumatic brain injury (TBI), including among service members whose brain injuries occurred during deployment in Iraq and Afghanistan. Occupational therapy practitioners provide routine vision screening to inform treatment planning and referral to vision specialists, but existing methods are lacking because many tests were developed for children and do not screen for vision dysfunction typical of TBI. An expert panel was charged with specifying the composition of a vision screening protocol for servicemembers with TBI. A modified nominal group technique fostered discussion and objective determinations of consensus. After considering 29 vision tests, the panel recommended a nine-test vision screening that examines functional performance, self-reported problems, far-near acuity, reading, accommodation, convergence, eye alignment and binocular vision, saccades, pursuits, and visual fields. Research is needed to develop reliable, valid, and clinically feasible vision screening protocols to identify TBI-related vision disorders in adults. Copyright © 2014 by the American Occupational Therapy Association, Inc.

  5. Real-time registration of video with ultrasound using stereo disparity

    Science.gov (United States)

    Wang, Jihang; Horvath, Samantha; Stetten, George; Siegel, Mel; Galeotti, John

    2012-02-01

    Medical ultrasound typically deals with the interior of the patient, with the exterior left to the original medical imaging modality, direct human vision. For the human operator scanning the patient, the view of the external anatomy is essential for correctly locating the ultrasound probe on the body and making sense of the resulting ultrasound images in their proper anatomical context. The operator, after all, is not expected to perform the scan with his eyes shut. Over the past decade, our laboratory has developed a method of fusing these two information streams in the mind of the operator, the Sonic Flashlight, which uses a half silvered mirror and miniature display mounted on an ultrasound probe to produce a virtual image within the patient at its proper location. We are now interested in developing a similar data fusion approach within the ultrasound machine itself, by, in effect, giving vision to the transducer. Our embodiment of this concept consists of an ultrasound probe with two small video cameras mounted on it, with software capable of locating the surface of an ultrasound phantom using stereo disparity between the two video images. We report its first successful operation, demonstrating a 3D rendering of the phantom's surface with the ultrasound data superimposed at its correct relative location. Eventually, automated analysis of these registered data sets may permit the scanner and its associated computational apparatus to interpret the ultrasound data within its anatomical context, much as the human operator does today.
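
    Locating a surface from the two video cameras via stereo disparity can be sketched with a standard semi-global matcher and reprojection; the matcher parameters are illustrative assumptions, and this is not the system's actual pipeline:

```python
import cv2
import numpy as np

def surface_from_stereo(left_gray, right_gray, Q):
    """Locate a surface from a rectified camera pair: compute a dense disparity
    map and reproject it to 3D with the rectification matrix Q."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                 blockSize=7, P1=8 * 7 * 7, P2=32 * 7 * 7)
    disp = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    points = cv2.reprojectImageTo3D(disp, Q)      # (h, w, 3) in the camera frame
    valid = disp > 0
    return points, valid
```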

  6. Calculation method of CGH for Binocular Eyepiece-Type Electro Holography

    International Nuclear Information System (INIS)

    Yang, Chanyoung; Yoneyama, Takuo; Sakamoto, Yuji; Okuyama, Fumio

    2013-01-01

    We have researched eyepiece-type electro-holography to display 3-D images of larger objects at a wider angle. We previously enlarged the visual field, taking the depth of the object into account, with a Fourier optical system using two lenses. In this paper, we extend the system to binocular viewing. In the binocular system, we use a different hologram for each eye: the 3-D image for the left eye should be observed as the real object would be observed with the left eye, and likewise for the right eye. We therefore propose a method of computer-generated hologram (CGH) calculation that transforms the coordinate system of the model data to produce the two holograms for binocular eyepiece-type electro-holography. The coordinate system of the original model data is called the world coordinate system; the left and right coordinate systems are transformed from the world coordinate system. We also propose a method for correcting the installation error that occurs when placing the electronic and optical devices: the installation error is calculated, and the model data are corrected, using the distance between the measured position and the setup position of the reconstructed image. Optical reconstruction experiments were carried out to verify the proposed method.
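
    The left/right coordinate transformation can be pictured, in a much simplified form, as offsetting the world coordinates by half the interpupillary distance for each eye; rotation and the installation-error correction are omitted, and the IPD value is an assumption:

```python
import numpy as np

def eye_coordinates(points_world, ipd=0.064):
    """Transform model points from the world coordinate system into the left-
    and right-eye coordinate systems by shifting half the interpupillary
    distance (ipd, metres) along the x axis; the world origin is assumed to lie
    midway between the eyes, and eye rotations are ignored for brevity."""
    offset = np.array([ipd / 2.0, 0.0, 0.0])
    left = points_world + offset     # left eye sits at (-ipd/2, 0, 0)
    right = points_world - offset    # right eye sits at (+ipd/2, 0, 0)
    return left, right

left_pts, right_pts = eye_coordinates(np.array([[0.0, 0.0, 1.0]]))
```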

  7. Quantitative measurement of binocular color fusion limit for non-spectral colors.

    Science.gov (United States)

    Jung, Yong Ju; Sohn, Hosik; Lee, Seong-il; Ro, Yong Man; Park, Hyun Wook

    2011-04-11

    Human perception becomes difficult in the event of binocular color fusion when the color difference presented to the left and right eyes exceeds a certain threshold value, known as the binocular color fusion limit. This paper discusses the binocular color fusion limit for non-spectral colors within the color gamut of a conventional LCD 3DTV. We performed experiments to measure the color fusion limit for eight chromaticity points sampled from the CIE 1976 chromaticity diagram. A total of 2480 trials were recorded for a single observer. By analyzing the results, the color fusion limit was quantified by ellipses in the chromaticity diagram. The semi-minor axis of the ellipses ranges from 0.0415 to 0.0923 in terms of the Euclidean distance in the u′v′ chromaticity diagram and the semi-major axis ranges from 0.0640 to 0.1560. These eight ellipses are drawn on the chromaticity diagram. © 2011 Optical Society of America
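
    Once a fusion-limit ellipse has been measured, testing whether a given left/right chromaticity difference is fusible reduces to a point-in-ellipse check; the axis lengths below merely fall within the reported ranges, and the ellipse orientation is an assumption:

```python
import numpy as np

def within_fusion_ellipse(du, dv, semi_major, semi_minor, angle_rad=0.0):
    """Check whether a left/right chromaticity difference (du, dv) in the
    u'v' diagram falls inside a measured fusion-limit ellipse."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    x = c * du + s * dv           # rotate the difference into the ellipse axes
    y = -s * du + c * dv
    return (x / semi_major) ** 2 + (y / semi_minor) ** 2 <= 1.0

# Example with axis lengths inside the reported ranges (orientation assumed 0)
print(within_fusion_ellipse(0.03, 0.02, semi_major=0.10, semi_minor=0.06))
```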

  8. A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system

    Science.gov (United States)

    Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan

    2018-01-01

    This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system under various circumstances. Image segmentation is addressed through semantic segmentation: the FCN classifies pixels so as to achieve image segmentation at the semantic level. Unlike classical convolutional neural networks (CNN), the FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data are collected to verify the method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m, with a distance error of less than 10 mm.
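
    The key property used above — convolutional scoring plus upsampling instead of fully connected layers, so inputs of arbitrary size are accepted — can be illustrated with a toy fully convolutional network; this is a minimal PyTorch sketch, not the network trained in the paper:

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal fully convolutional network: a small convolutional encoder, a 1x1
    class-scoring layer, and bilinear upsampling back to the input resolution,
    so images of arbitrary size can be segmented pixel-wise."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.score = nn.Conv2d(64, n_classes, 1)   # replaces fully connected layers

    def forward(self, x):
        h, w = x.shape[-2:]
        y = self.score(self.features(x))
        return nn.functional.interpolate(y, size=(h, w), mode="bilinear",
                                         align_corners=False)

logits = TinyFCN()(torch.rand(1, 3, 120, 160))   # works for arbitrary image sizes
```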

  9. Clinical Outcomes after Binocular Implantation of a New Trifocal Diffractive Intraocular Lens

    Directory of Open Access Journals (Sweden)

    Florian T. A. Kretz

    2015-01-01

    Full Text Available Purpose. To evaluate visual, refractive, and contrast sensitivity outcomes, as well as the incidence of pseudophakic photic phenomena and patient satisfaction after bilateral diffractive trifocal intraocular lens (IOL) implantation. Methods. This prospective nonrandomized study included consecutive patients undergoing cataract surgery with bilateral implantation of a diffractive trifocal IOL (AT LISA tri 839MP, Carl Zeiss Meditec). Distance, intermediate, and near visual outcomes were evaluated as well as the defocus curve and the refractive outcomes 3 months after surgery. Photopic and mesopic contrast sensitivity, patient satisfaction, and halo perception were also evaluated. Results. Seventy-six eyes of 38 patients were included; 90% of eyes showed a spherical equivalent within ±0.50 diopters 3 months after surgery. All patients had a binocular uncorrected distance visual acuity of 0.00 LogMAR or better and a binocular uncorrected intermediate visual acuity of 0.10 LogMAR or better, 3 months after surgery. Furthermore, 85% of patients achieved a binocular uncorrected near visual acuity of 0.10 LogMAR or better. Conclusions. Trifocal diffractive IOL implantation seems to provide an effective restoration of visual function for far, intermediate, and near distances, providing high levels of visual quality and patient satisfaction.

  10. Refinement of facial reconstructive surgery by stereo-model planning.

    Science.gov (United States)

    Cheung, L K; Wong, M C M; Wong, L L S

    2002-10-01

    The development of rapid prototyping has evolved from crude milled models to laser polymerized stereolithographic models of excellent accuracy. The technology was advanced further with the recent introduction of fused deposition modelling and a three-dimensional ink-jet printing technique in stereo-model fabrication. The concept of using a three-dimensional model in planning the operation has amazed maxillofacial surgeons since its first application in grafting a skull defect in 1995. It was followed by many bright ideas for applications in the field of facial reconstructive surgery. Stereo-models may assist in the diagnosis of facial fractures, joint ankylosis and even impacted teeth. Surgery can be simulated prior to the operation for complex craniofacial syndromes, facial asymmetry and distraction osteogenesis. The stereo-model can be used for the preparation of reconstructive plates or joint prostheses. It is of enormous value for teaching and as a patient information tool when obtaining consent for surgery.

  11. A dual-adaptive support-based stereo matching algorithm

    Science.gov (United States)

    Zhang, Yin; Zhang, Yun

    2017-07-01

    Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (viz., the Cross method), which, however, does not work well for different images. To address this issue, this paper proposes a novel dual adaptive support (viz., DAS)-based stereo matching method, which uses both appearance and shape information of a local region to segment supports automatically, and then integrates the DAS-based cost aggregation with the absolute difference plus census transform cost, scanline optimization and disparity refinement to develop a stereo matching system. The performance of the DAS method is also evaluated in the Middlebury benchmark and by comparison with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, with fewer parameters, and suitable for parallel computing.
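
    For context, the "absolute difference plus census transform" matching cost mentioned above can be sketched as follows for a single pair of patches. The exponential combination and the lambda weights are common choices in AD-census costs, not necessarily the DAS paper's exact parameters.

```python
# Hedged sketch of an AD-census matching cost for a pair of 5x5 patches.
import numpy as np

def census_bits(patch):
    """Census bit string: each neighbour compared with the centre pixel."""
    centre = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    return (patch < centre).ravel()

def ad_census_cost(patch_l, patch_r, lambda_ad=10.0, lambda_census=30.0):
    ad = abs(float(patch_l[2, 2]) - float(patch_r[2, 2]))                     # absolute difference of centres
    ham = np.count_nonzero(census_bits(patch_l) != census_bits(patch_r))      # Hamming distance of census strings
    return 2.0 - np.exp(-ad / lambda_ad) - np.exp(-ham / lambda_census)       # robust combination

rng = np.random.default_rng(0)
print(ad_census_cost(rng.integers(0, 255, (5, 5)), rng.integers(0, 255, (5, 5))))
```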

  12. Using Weightless Neural Networks for Vergence Control in an Artificial Vision System

    Directory of Open Access Journals (Sweden)

    Karin S. Komati

    2003-01-01

    Full Text Available This paper presents a methodology we have developed and used to implement an artificial binocular vision system capable of emulating the vergence of eye movements. This methodology involves using weightless neural networks (WNNs) as building blocks of artificial vision systems. Using the proposed methodology, we have designed several architectures of WNN-based artificial vision systems, in which images captured by virtual cameras are used for controlling the position of the ‘foveae’ of these cameras (the high-resolution region of the images captured). Our best architecture is able to control the foveae vergence movements with an average error of only 3.58 image pixels, which is equivalent to an angular error of approximately 0.629°.

  13. Simultaneous capture of the color and topography of paintings using fringe encoded stereo vision

    NARCIS (Netherlands)

    Zaman, T.; Jonker, P.P.; Lenseigne, B.A.J.; Dik, J.

    2014-01-01

    Introduction: Paintings are versatile near-planar objects with material characteristics that vary widely. The fact that paint has a material presence is often overlooked, mostly due to the fact that we encounter many of these artworks through two dimensional reproductions. The capture of paintings

  14. Pavement Distress Evaluation Using 3D Depth Information from Stereo Vision

    Science.gov (United States)

    2012-07-01

    The focus of the current project funded by MIOH-UTC for the period 9/1/2010-8/31/2011 is to enhance our earlier effort in providing a more robust image processing based pavement distress detection and classification system. During the last few de...

  15. Development of an Image Fringe Zero Selection System for Structuring Elements with Stereo Vision Disparity Measurements

    International Nuclear Information System (INIS)

    Grindley, Josef E; Jiang Lin; Tickle, Andrew J

    2011-01-01

    When performing image operations involving Structuring Element (SE) and many transforms it is required that the outside of the image be padded with zeros or ones depending on the operation. This paper details how this can be achieved with simulated hardware using DSP Builder in Matlab with the intention of migrating the design to HDL (Hardware Description Language) and implemented on an FPGA (Field Programmable Gate Array). The design takes few resources and does not require extra memory to account for the change in size of the output image.
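
    The padding convention the record describes can be illustrated in software: erosion is padded with ones and dilation with zeros, so that the structuring element sees defined values outside the image. SciPy stands in here for the DSP Builder/FPGA implementation discussed in the record.

```python
# Hedged sketch of border padding for structuring-element operations.
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

image = np.random.rand(8, 8) > 0.5
se = np.ones((3, 3), dtype=bool)                      # 3x3 structuring element

pad_erode = np.pad(image, 1, constant_values=True)    # ones outside the image
pad_dilate = np.pad(image, 1, constant_values=False)  # zeros outside the image

eroded = binary_erosion(pad_erode, se)[1:-1, 1:-1]    # crop back to the original size
dilated = binary_dilation(pad_dilate, se)[1:-1, 1:-1]
print(eroded.shape, dilated.shape)
```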

  16. Evaluation of stereo vision obstacle detection algorithms for off-road autonomous navigation

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry

    2005-01-01

    Reliable detection of non-traversable hazards is a key requirement for off-road autonomous navigation. A detailed description of each obstacle detection algorithm and their performance on the surveyed obstacle course is presented in this paper.

  17. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    Science.gov (United States)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather as well as the global climate and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. This kind of instrument should also be automated and robust since it may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as in most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover and atmospheric visibility that ensure the safety of the pilots and planes. Although there are instruments available on the market to measure those parameters, their relatively high cost makes them unavailable in many local aerodromes. In this work we present a new prototype which has been recently developed and deployed in a local aerodrome as proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new developments consist of a new geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry to measure the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after the measurement of the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that allow its cost to remain low even with its increased functionality. In addition, new control software was developed to ensure that the two cameras are triggered simultaneously. This is a major requirement that affects the final uncertainty of the measurements due to the constant movement of the clouds in the sky. Since accurate orientation of the cameras can be a very demanding task in field deployments, an automated calibration procedure has been developed that removes the need for an accurate alignment. It consists of photographing the stars, which do not exhibit parallax due to the long distances involved, and deducing the inherent misalignments of the two cameras. The known misalignments are then used to correct the cloud photos. These developments will be described in detail, along with an uncertainty analysis of the measurement setup. Measurements of cloud base height and atmospheric visibility will be presented and compared with measurements from other in-situ instruments. This work was supported by FCT project PTDC/CTE-ATM/115833/2009 and Program COMPETE FCOMP-01-0124-FEDER-014508
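
    The underlying parallax principle can be reduced to a one-line calculation: two upward-looking cameras separated by a baseline see a cloud feature shifted by a disparity that is inversely proportional to its height. The pinhole sketch below ignores the tilted-camera geometry described in the record, and all numbers are illustrative.

```python
# Hedged sketch of cloud-base height from stereo parallax (simple pinhole model).
def cloud_base_height(baseline_m, focal_px, disparity_px):
    """Height of a cloud feature above the cameras, in metres."""
    return baseline_m * focal_px / disparity_px

# e.g. 100 m baseline, 3000 px focal length, 150 px measured parallax
print(f"cloud base ~ {cloud_base_height(100.0, 3000.0, 150.0):.0f} m")
```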

  18. Stereo vision with texture learning for fault-tolerant automatic baling

    DEFF Research Database (Denmark)

    Blas, Morten Rufus; Blanke, Mogens

    2010-01-01

    This paper presents advances in using stereo vision for automating baling. A robust classification scheme is demonstrated for learning and classifying based on texture and shape. Using a state-of-the-art texton approach, a fast classifier is obtained that can handle non-linearities in the data ... by learning its appearance. A 3D classifier is used to train and supervise the texture classifier....

  19. Cataract Vision Simulator

    Science.gov (United States)

    Cataract Vision Simulator (Jun. 11, 2014). How do cataracts affect your vision? A cataract is a clouding of the eye's ...

  20. Living with Low Vision

    Science.gov (United States)

    Living with Low Vision. A variety of eye conditions ... occupational therapy practitioners help people with low vision to function at the highest possible level. ...

  1. Vision - night blindness

    Science.gov (United States)

    Vision - night blindness (medlineplus.gov/ency/article/003039.htm). Night blindness is poor vision at night or in dim light. ...

  2. Chemicals Industry Vision

    Energy Technology Data Exchange (ETDEWEB)

    none,

    1996-12-01

    Chemical industry leaders articulated a long-term vision for the industry, its markets, and its technology in the groundbreaking 1996 document Technology Vision 2020 - The U.S. Chemical Industry.

  3. Corneal Transplantation in Disease Affecting Only One Eye: Does It Make a Difference to Habitual Binocular Viewing?

    Directory of Open Access Journals (Sweden)

    Praveen K Bandela

    Full Text Available Clarity of the transplanted tissue and restoration of visual acuity are the two primary metrics for evaluating the success of corneal transplantation. Participation of the transplanted eye in habitual binocular viewing is seldom evaluated post-operatively. In unilateral corneal disease, the transplanted eye may remain functionally inactive during binocular viewing due to its suboptimal visual acuity and poor image quality, vis-à-vis the healthy fellow eye. This study prospectively quantified the contribution of the transplanted eye towards habitual binocular viewing in 25 cases with unilateral transplants [40 yrs (IQR: 32-42 yrs)] and 25 age-matched controls [30 yrs (25-37 yrs)]. Binocular functions including visual field extent, high-contrast logMAR acuity, suppression threshold and stereoacuity were assessed using standard psychophysical paradigms. Optical quality of all eyes was determined from wavefront aberrometry measurements. Binocular visual field expanded by a median 21% (IQR: 18-29%) compared to the monocular field of cases and controls (p = 0.63). Binocular logMAR acuity [0.0 (0.0-0.0)] almost always followed the fellow eye's acuity [0.00 (0.00 to -0.02)] (r = 0.82), independent of the transplanted eye's acuity [0.34 (0.2-0.5)] (r = 0.04). Suppression threshold and stereoacuity were poorer in cases [30.1% (13.5-44.3%); 620.8 arc sec (370.3-988.2 arc sec)] than in controls [79% (63.5-100%); 16.3 arc sec (10.6-25.5 arc sec)] (p<0.001). Higher-order wavefront aberrations of the transplanted eye [0.34 μ (0.21-0.51 μ)] were higher than the fellow eye [0.07 μ (0.05-0.11 μ)] (p<0.001) and their reduction with RGP contact lenses [0.09 μ (0.08-0.12 μ)] significantly improved the suppression threshold [65% (50-72%)] and stereoacuity [56.6 arc sec (47.7-181.6 arc sec)] (p<0.001). In unilateral corneal disease, the transplanted eye does participate in gross binocular viewing but offers limited support to fine levels of binocularity. Improvement in the transplanted

  4. Comparing Active Vision Models

    NARCIS (Netherlands)

    Croon, G.C.H.E. de; Sprinkhuizen-Kuyper, I.G.; Postma, E.O.

    2009-01-01

    Active vision models can simplify visual tasks, provided that they can select sensible actions given incoming sensory inputs. Many active vision models have been proposed, but a comparative evaluation of these models is lacking. We present a comparison of active vision models from two different

  6. Implementation of a Self-Consistent Stereo Processing Chain for 3D Stereo Reconstruction of the Lunar Landing Sites

    Science.gov (United States)

    Tasdelen, E.; Willner, K.; Unbekannt, H.; Glaeser, P.; Oberst, J.

    2014-04-01

    The department for Planetary Geodesy at Technical University Berlin is developing routines for photogrammetric processing of planetary image data to derive 3D representations of planetary surfaces. The Integrated Software for Imagers and Spectrometers (ISIS) software (Anderson et al., 2004), developed by USGS, Flagstaff, is readily available, open source, and very well documented. Hence, ISIS was chosen as a prime processing platform and tool kit. However, ISIS does not provide a full photogrammetric stereo processing chain. Several components like image matching, bundle block adjustment (until recently) or digital terrain model (DTM) interpolation from 3D object points are missing. Our group aims to complete this photogrammetric stereo processing chain by implementing the missing components, taking advantage of already existing ISIS classes and functionality. We report here on the current status of the development of our stereo processing chain and its first application on the Lunar Apollo landing sites.

  7. Association between hearing and vision impairments in older adults.

    Science.gov (United States)

    Schneck, Marilyn E; Lott, Lori A; Haegerstrom-Portnoy, Gunilla; Brabyn, John A

    2012-01-01

    To determine which, if any, vision variables are associated with moderate bilateral hearing loss in an elderly population. Four hundred and forty-six subjects completed a hearing screening in conjunction with measurements on a variety of vision tests including high contrast acuity, low contrast acuity measured under a variety of lighting conditions, contrast sensitivity, stereopsis, and colour vision. Logistic regression analyses were used to assess the relationship between various vision variables and hearing impairment while controlling for demographic and other co-morbid conditions. In this sample of older adults with a mean age of 79.9 years, 5.4% of individuals were moderately visually impaired (binocular high contrast VA worse than 0.54 logMAR, Snellen equivalent 6/21 or 20/70) and 12.8% were moderately bilaterally hearing impaired (hearing none of the 40 dB tones at 500, 2000 or 4000 Hz in either ear). Three measures of low contrast acuity, but not high contrast acuity or other vision measures, were significantly associated with hearing loss when controlling for age, cataract surgery history, glaucoma history and self reported stroke, all of which were significantly associated with hearing loss, although the association of glaucoma with hearing loss was negative. Poorer vision for low contrast targets was associated with an increased risk of hearing impairment in older adults. Audiologists and optometrists should enquire about the other sense in cases in which a deficit is measured as individuals with dual sensory loss are at a marked disadvantage in daily life. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.

  8. Adjustment to vision loss in a mixed sample of adults with established visual impairment.

    Science.gov (United States)

    Tabrett, Daryl R; Latham, Keziah

    2012-10-19

    To determine factors associated with the level of adjustment to vision loss in a cross-sectional sample of adults with mixed visual impairment. One hundred participants were administered the Acceptance and Self-Worth Adjustment Scale (AS-WAS) to assess adjustment to vision loss. The severity of vision loss was determined using binocular clinical visual function assessments including visual acuity, contrast sensitivity, reading performance, and visual fields. Key demographics including age, duration of visual impairment, general health, education, and living arrangements were evaluated, as were self-reported vision-related activity limitation (VRAL), depression, social support, and personality. Multivariate analysis showed that higher levels of depressive symptoms (β = -0.26, P personality trait neuroticism (β = -0.33, P personality trait of conscientiousness (β = 0.29, P personality (specifically neuroticism and conscientiousness), independent of the severity of vision loss, VRAL, and duration of vision loss. The results suggest certain individuals may be predisposed to exhibiting less adjustment to vision loss due to personality characteristics, and exhibit poorer adjustment owing to or as a consequence of depression, rather than due to other factors such as the onset and severity of visual impairment.

  9. Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features

    NARCIS (Netherlands)

    Abramoff, M.D.; Alward, W.L.M.; Greenlee, E.C.; Shuba, L.; Kim, Chan Y.; Fingert, J.H.; Kwon, Y.H.

    2007-01-01

    PURPOSE. To evaluate a novel automated segmentation algorithm for cup-to-disc segmentation from stereo color photographs of patients with glaucoma for the measurement of glaucoma progression. METHODS. Stereo color photographs of the optic disc were obtained by using a fixed stereo-base fundus

  10. Computational approaches to vision

    Science.gov (United States)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  11. A child's vision.

    Science.gov (United States)

    Nye, Christina

    2014-06-01

    Implementing standard vision screening techniques in the primary care practice is the most effective means to detect children with potential vision problems at an age when the vision loss may be treatable. A critical period of vision development occurs in the first few weeks of life; thus, it is imperative that serious problems are detected at this time. Although it is not possible to quantitate an infant's vision, evaluating ocular health appropriately can mean the difference between sight and blindness and, in the case of retinoblastoma, life or death. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Colour vision deficiency.

    Science.gov (United States)

    Simunovic, M P

    2010-05-01

    Colour vision deficiency is one of the commonest disorders of vision and can be divided into congenital and acquired forms. Congenital colour vision deficiency affects as many as 8% of males and 0.5% of females--the difference in prevalence reflects the fact that the commonest forms of congenital colour vision deficiency are inherited in an X-linked recessive manner. Until relatively recently, our understanding of the pathophysiological basis of colour vision deficiency largely rested on behavioural data; however, modern molecular genetic techniques have helped to elucidate its mechanisms. The current management of congenital colour vision deficiency lies chiefly in appropriate counselling (including career counselling). Although visual aids may be of benefit to those with colour vision deficiency when performing certain tasks, the evidence suggests that they do not enable wearers to obtain normal colour discrimination. In the future, gene therapy remains a possibility, with animal models demonstrating amelioration following treatment.

  13. VisGraB: A Benchmark for Vision-Based Grasping. Paladyn Journal of Behavioral Robotics

    DEFF Research Database (Denmark)

    Kootstra, Gert; Popovic, Mila; Jørgensen, Jimmy Alison

    2012-01-01

    We present a database and a software tool, VisGraB, for benchmarking of methods for vision-based grasping of unknown objects with no prior object knowledge. The benchmark is a combined real-world and simulated experimental setup. Stereo images of real scenes containing several objects in different ... that a large number of grasps can be executed and evaluated while dealing with dynamics and the noise and uncertainty present in the real world images. VisGraB enables a fair comparison among different grasping methods. The user furthermore does not need to deal with robot hardware, focusing on the vision ...

  14. A Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    Science.gov (United States)

    Buyens, Wim; van Dijk, Bas; Wouters, Jan; Moonen, Marc

    2015-10-01

    Listening to music is still one of the more challenging aspects of using a cochlear implant (CI) for most users. Simple musical structures, a clear rhythm/beat, and lyrics that are easy to follow are among the top factors contributing to music appreciation for CI users. Modifying the audio mix of complex music potentially improves music enjoyment in CI users. A stereo music preprocessing scheme is described in which vocals, drums, and bass are emphasized based on the representation of the harmonic and the percussive components in the input spectrogram, combined with the spatial allocation of instruments in typical stereo recordings. The scheme is assessed with postlingually deafened CI subjects (N = 7) using pop/rock music excerpts with different complexity levels. The scheme is capable of modifying relative instrument level settings, with the aim of improving music appreciation in CI users, and allows individual preference adjustments. The assessment with CI subjects confirms the preference for more emphasis on vocals, drums, and bass as offered by the preprocessing scheme, especially for songs with higher complexity. The stereo music preprocessing scheme has the potential to improve music enjoyment in CI users by modifying the audio mix in widespread (stereo) music recordings. Since music enjoyment in CI users is generally poor, this scheme can assist the music listening experience of CI users as a training or rehabilitation tool.
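
    The idea behind such a preprocessing scheme can be sketched as follows: vocals, bass and kick are usually panned to the centre of a stereo mix, so the mid channel is split into harmonic and percussive parts and re-weighted before rebuilding left and right. The gains and the file name are illustrative, not the authors' parameters.

```python
# Hedged sketch of mid/side plus harmonic-percussive re-weighting of a stereo mix.
import numpy as np
import librosa

y, sr = librosa.load("song.wav", sr=None, mono=False)    # stereo signal, shape (2, n)
mid = 0.5 * (y[0] + y[1])                                 # centre content (vocals, bass, drums)
side = 0.5 * (y[0] - y[1])                                # side content (accompaniment)

harmonic, percussive = librosa.effects.hpss(mid)          # harmonic/percussive separation

mid_out = 1.5 * harmonic + 1.8 * percussive               # emphasise vocals/bass and drums
left, right = mid_out + 0.5 * side, mid_out - 0.5 * side  # attenuate side content
remixed = np.vstack([left, right])
```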

  15. Solving the uncalibrated photometric stereo problem using total variation

    DEFF Research Database (Denmark)

    Quéau, Yvain; Lauze, Francois Bernard; Durou, Jean-Denis

    2013-01-01

    In this paper we propose a new method to solve the problem of uncalibrated photometric stereo, making very weak assumptions on the properties of the scene to be reconstructed. Our goal is to solve the generalized bas-relief ambiguity (GBR) by performing a total variation regularization of both...

  16. People counting with stereo cameras : two template-based solutions

    NARCIS (Netherlands)

    Englebienne, Gwenn; van Oosterhout, Tim; Kröse, B.J.A.

    2012-01-01

    People counting is a challenging task with many applications. We propose a method with a fixed stereo camera that is based on projecting a template onto the depth image. The method was tested on a challenging outdoor dataset with good results and runs in real time.

  17. Wind-Sculpted Vicinity After Opportunity's Sol 1797 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    [Figures removed for brevity, see original site: left-eye and right-eye views of a color stereo pair for PIA11820] NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 111 meters (364 feet) on the 1,797th Martian day, or sol, of Opportunity's surface mission (Feb. 12, 2009). North is at the center; south at both ends. Tracks from the drive recede northward across dark-toned sand ripples in the Meridiani Planum region of Mars. Patches of lighter-toned bedrock are visible on the left and right sides of the image. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The view is presented as a cylindrical-perspective projection with geometric seam correction.

  18. Characterising atmospheric optical turbulence using stereo-SCIDAR

    Science.gov (United States)

    Osborn, James; Butterley, Tim; Föhring, Dora; Wilson, Richard

    2015-04-01

    Stereo-SCIDAR (SCIntillation Detection and Ranging) is a development to the well known SCIDAR method for characterisation of the Earth's atmospheric optical turbulence. Here we present some interesting capabilities, comparisons and results from a recent campaign on the 2.5 m Isaac Newton Telescope on La Palma.

  19. VPython: Python plus Animations in Stereo 3D

    Science.gov (United States)

    Sherwood, Bruce

    2004-03-01

    Python is a modern object-oriented programming language. VPython (http://vpython.org) is a combination of Python (http://python.org), the Numeric module from LLNL (http://www.pfdubois.com/numpy), and the Visual module created by David Scherer, all of which have been under continuous development as open source projects. VPython makes it easy to write programs that generate real-time, navigable 3D animations. The Visual module includes a set of 3D objects (sphere, cylinder, arrow, etc.), tools for creating other shapes, and support for vector algebra. The 3D renderer runs in a parallel thread, and animations are produced as a side effect of computations, freeing the programmer to concentrate on the physics. Applications include educational and research visualization. In the Fall of 2003 Hugh Fisher at the Australian National University, John Zelle at Wartburg College, and I contributed to a new stereo capability of VPython. By adding a single statement to an existing VPython program, animations can be viewed in true stereo 3D. One can choose several modes: active shutter glasses, passive polarized glasses, or colored glasses (e.g. red-cyan). The talk will demonstrate the new stereo capability and discuss the pros and cons of various schemes for display of stereo 3D for a large audience. Supported in part by NSF grant DUE-0237132.

  20. A Hybrid Vision-Map Method for Urban Road Detection

    Directory of Open Access Journals (Sweden)

    Carlos Fernández

    2017-01-01

    Full Text Available A hybrid vision-map system is presented to solve the road detection problem in urban scenarios. The standardized use of machine learning techniques in classification problems has been merged with digital navigation map information to increase system robustness. The objective of this paper is to create a new environment perception method to detect the road in urban environments, fusing stereo vision with digital maps by detecting road appearance and road limits such as lane markings or curbs. Deep learning approaches make the system hard-coupled to the training set. Even though our approach is based on machine learning techniques, the features are calculated from different sources (GPS, map, curbs, etc.), making our system less dependent on the training set.

  1. Vision-based vehicle detection and tracking algorithm design

    Science.gov (United States)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

    The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.

  2. Processing Earth Observing images with Ames Stereo Pipeline

    Science.gov (United States)

    Beyer, R. A.; Moratto, Z. M.; Alexandrov, O.; Fong, T.; Shean, D. E.; Smith, B. E.

    2013-12-01

    ICESat with its GLAS instrument provided valuable elevation measurements of glaciers. The loss of this spacecraft caused a demand for alternative elevation sources. In response to that, we have improved our Ames Stereo Pipeline (ASP) software (version 2.1+) to ingest satellite imagery from Earth satellite sources in addition to its support of planetary missions. This gives the open source community a free method to generate digital elevation models (DEM) from Digital Globe stereo imagery and alternatively other cameras using RPC camera models. Here we present details of the software. ASP is a collection of utilities written in C++ and Python that implement stereogrammetry. It contains utilities to manipulate DEMs, project imagery, create KML image quad-trees, and perform simplistic 3D rendering. However, its primary application is the creation of DEMs. This is achieved by matching every pixel between the images of a stereo observation via a hierarchical coarse-to-fine template matching method. Matched pixels between images represent a single feature that is triangulated using each image's camera model. The collection of triangulated features represents a point cloud that is then grid resampled to create a DEM. In order for ASP to match pixels/features between images, it requires a search range defined in pixel units. Total processing time is proportional to the area of the first image being matched multiplied by the area of the search range. An incorrect search range for ASP causes repeated false positive matches at each level of the image pyramid and causes excessive processing times with no valid DEM output. Therefore our system contains automatic methods for deducing what the correct search range should be. In addition, we provide options for reducing the overall search range by applying affine epipolar rectification, homography transform, or by map projecting against a prior existing low resolution DEM. Depending on the size of the images, parallax, and image

  3. Local spatial frequency analysis for computer vision

    Science.gov (United States)

    Krumm, John; Shafer, Steven A.

    1990-01-01

    A sense of vision is a prerequisite for a robot to function in an unstructured environment. However, real-world scenes contain many interacting phenomena that lead to complex images which are difficult to interpret automatically. Typical computer vision research proceeds by analyzing various effects in isolation (e.g., shading, texture, stereo, defocus), usually on images devoid of realistic complicating factors. This leads to specialized algorithms which fail on real-world images. Part of this failure is due to the dichotomy of useful representations for these phenomena. Some effects are best described in the spatial domain, while others are more naturally expressed in frequency. In order to resolve this dichotomy, we present the combined space/frequency representation which, for each point in an image, shows the spatial frequencies at that point. Within this common representation, we develop a set of simple, natural theories describing phenomena such as texture, shape, aliasing and lens parameters. We show these theories lead to algorithms for shape from texture and for dealiasing image data. The space/frequency representation should be a key aid in untangling the complex interaction of phenomena in images, allowing automatic understanding of real-world scenes.
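
    The combined space/frequency representation described above can be approximated very simply by computing a windowed 2-D FFT around each sample point, so that each position carries its own local spatial-frequency spectrum. Window size and stride below are illustrative choices, not parameters from the paper.

```python
# Hedged sketch of a local space/frequency representation via windowed 2-D FFTs.
import numpy as np

def local_spectra(image, win=16, stride=8):
    """Return local magnitude spectra indexed by window position."""
    h, w = image.shape
    window = np.outer(np.hanning(win), np.hanning(win))   # taper to reduce spectral leakage
    spectra = []
    for y in range(0, h - win + 1, stride):
        row = []
        for x in range(0, w - win + 1, stride):
            patch = image[y:y + win, x:x + win] * window
            row.append(np.abs(np.fft.fftshift(np.fft.fft2(patch))))
        spectra.append(row)
    return np.array(spectra)        # shape (ny, nx, win, win)

spectra = local_spectra(np.random.rand(64, 64))
print(spectra.shape)
```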

  4. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review

    Directory of Open Access Journals (Sweden)

    Luis Pérez

    2016-03-01

    Full Text Available In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in the industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as a background information for their future works.

  5. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review.

    Science.gov (United States)

    Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F

    2016-03-05

    In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in the industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as a background information for their future works.

  6. Joint depth map and color consistency estimation for stereo images with different illuminations and cameras.

    Science.gov (United States)

    Heo, Yong Seok; Lee, Kyoung Mu; Lee, Sang Uk

    2013-05-01

    In this paper, we propose a method that infers both accurate depth maps and color-consistent stereo images for radiometrically varying stereo images. In general, stereo matching and enforcing color consistency between stereo images form a chicken-and-egg problem, since it is not a trivial task to achieve both goals simultaneously. Hence, we have developed an iterative framework in which these two processes can boost each other. First, we transform the input color images to log-chromaticity color space, from which a linear relationship can be established while constructing a joint pdf of the transformed left and right color images. From this joint pdf, we can estimate a linear function that relates the corresponding pixels in the stereo images. Based on this linear property, we present a new stereo matching cost by combining Mutual Information (MI), the SIFT descriptor, and segment-based plane-fitting to robustly find correspondences for stereo image pairs which undergo radiometric variations. Meanwhile, we devise a Stereo Color Histogram Equalization (SCHE) method to produce color-consistent stereo image pairs, which conversely boosts the disparity map estimation. Experimental results show that our method produces both accurate depth maps and color-consistent stereo images, even for stereo images with severe radiometric differences.
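
    One common form of the log-chromaticity transform mentioned above is sketched below: dividing each channel by the per-pixel geometric mean and taking logs turns per-channel gain differences between the cameras into additive offsets, which is what makes the left/right relationship approximately linear. The epsilon guard is an implementation detail, not from the paper.

```python
# Hedged sketch of a log-chromaticity transform for radiometrically varying images.
import numpy as np

def log_chromaticity(rgb, eps=1e-6):
    """Map a float RGB image (HxWx3) to log-chromaticity coordinates."""
    rgb = rgb.astype(np.float64) + eps
    geo_mean = np.cbrt(rgb[..., 0] * rgb[..., 1] * rgb[..., 2])   # per-pixel geometric mean
    return np.log(rgb / geo_mean[..., None])

left_lc = log_chromaticity(np.random.rand(4, 4, 3))
print(left_lc.shape)   # (4, 4, 3); the three channels at each pixel sum to ~0
```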

  7. FPGA Vision Data Architecture

    Science.gov (United States)

    Morfopoulos, Arin C.; Pham, Thang D.

    2013-01-01

    JPL has produced a series of FPGA (field programmable gate array) vision algorithms that were written with custom interfaces to get data in and out of each vision module. Each module has unique requirements on the data interface, and further vision modules are continually being developed, each with their own custom interfaces. Each memory module had also been designed for direct access to memory or to another memory module.

  8. Vision, reanimated and reimagined.

    Science.gov (United States)

    Edelman, Shimon

    2012-01-01

    The publication in 1982 of David Marr's Vision has delivered a singular boost and a course correction to the science of vision. Thirty years later, cognitive science is being transformed by the new ways of thinking about what it is that the brain computes, how it does that, and, most importantly, why cognition requires these computations and not others. This ongoing process still owes much of its impetus and direction to the sound methodology, engaging style, and unique voice of Marr's Vision.

  9. Vision and sketching.

    Science.gov (United States)

    Forbus, Kenneth D

    2012-01-01

    This essay reflects on the revolution David Marr brought about in vision research, and in cognitive science more broadly. I start with an insider's view, then examine the methodological impact of his framework in cognitive science in general. My group's work on sketch understanding descends from Marr's approach to vision, a connection which I make to provide a concrete illustration. I close with a few thoughts about how research in vision and other areas of cognitive science might come together in the future.

  10. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  11. Biomimetic machine vision system.

    Science.gov (United States)

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating resolution issues found in digital vision systems. In this paper, we will discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We will also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. This paper will conclude with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.

  12. Cross-orientation masking in human color vision: application of a two-stage model to assess dichoptic and monocular sources of suppression.

    Science.gov (United States)

    Kim, Yeon Jin; Gheiratmand, Mina; Mullen, Kathy T

    2013-05-28

    Cross-orientation masking (XOM) occurs when the detection of a test grating is masked by a superimposed grating at an orthogonal orientation, and is thought to reveal the suppressive effects mediating contrast normalization. Medina and Mullen (2009) reported that XOM was greater for chromatic than achromatic stimuli at equivalent spatial and temporal frequencies. Here we address whether the greater suppression found in binocular color vision originates from a monocular or interocular site, or both. We measure monocular and dichoptic masking functions for red-green color contrast and achromatic contrast at three different spatial frequencies (0.375, 0.75, and 1.5 cpd, 2 Hz). We fit these functions with a modified two-stage masking model (Meese & Baker, 2009) to extract the monocular and interocular weights of suppression. We find that the weight of monocular suppression is significantly higher for color than achromatic contrast, whereas dichoptic suppression is similar for both. These effects are invariant across spatial frequency. We then apply the model to the binocular masking data using the measured values of the monocular and interocular sources of suppression and show that these are sufficient to account for color binocular masking. We conclude that the greater strength of chromatic XOM has a monocular origin that transfers through to the binocular site.

  13. Evaluation of a four month rehabilitation program for stroke patients with balance problems and binocular visual dysfunction

    DEFF Research Database (Denmark)

    Schow, Trine; Harris, Paul; Teasdale, Thomas William

    2016-01-01

    Trine Schow, Paul Harris, Thomas William Teasdale, Morten Arendt Rasmussen. Evaluation of a four month rehabilitation program for stroke patients with balance problems and binocular visual dysfunction. NeuroRehabilitation. 2016 Apr 6;38(4):331-41. doi: 10.3233/NRE-161324.

  14. Monocular and binocular steady-state flicker VEPs: frequency-response functions to sinusoidal and square-wave luminance modulation.

    Science.gov (United States)

    Nicol, David S; Hamilton, Ruth; Shahani, Uma; McCulloch, Daphne L

    2011-02-01

    Steady-state VEPs to full-field flicker (FFF) using sinusoidally modulated light were compared with those elicited by square-wave modulated light across a wide range of stimulus frequencies with monocular and binocular FFF stimulation. Binocular and monocular VEPs were elicited in 12 adult volunteers to FFF with two modes of temporal modulation: sinusoidal or square-wave (abrupt onset and offset, 50% duty cycle) at ten temporal frequencies ranging from 2.83 to 58.8 Hz. All stimuli had a mean luminance of 100 cd/m(2) with an 80% modulation depth (20-180 cd/m(2)). Response magnitudes at the stimulus frequency (F1) and at the double and triple harmonics (F2 and F3) were compared. For both sinusoidal and square-wave flicker, the FFF-VEP magnitudes at F1 were maximal for 7.52 Hz flicker. F2 was maximal for 5.29 Hz flicker, and F3 magnitudes are largest for flicker stimulation from 3.75 to 7.52 Hz. Square-wave flicker produced significantly larger F1 and F2 magnitudes for slow flicker rates (up to 5.29 Hz for F1; at 2.83 and 3.75 Hz for F2). The F3 magnitudes were larger overall for square-wave flicker. Binocular FFF-VEP magnitudes are larger than those of monocular FFF-VEPs, and the amount of this binocular enhancement is not dependent on the mode of flicker stimulation (mean binocular: monocular ratio 1.41, 95% CI: 1.2-1.6). Binocular enhancement of F1 for 21.3 Hz flicker was increased to a factor of 2.5 (95% CI: 1.8-3.5). In the healthy adult visual system, FFF-VEP magnitudes can be characterized by the frequency-response functions of F1, F2 and F3. Low-frequency roll-off in the FFF-VEP magnitudes is greater for sinusoidal flicker than for square-wave flicker for rates ≤ 5.29 Hz; magnitudes for higher-frequency flicker are similar for the two types of flicker. Binocular FFF-VEPs are larger overall than those recorded monocularly, and this binocular summation is enhanced at 21.3 Hz in the mid-frequency range.
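
    Extracting the F1, F2 and F3 response magnitudes from a steady-state recording amounts to reading the spectrum at the stimulus frequency and its harmonics, as in the sketch below. The sampling rate, epoch length and synthetic waveform are illustrative, not the study's recording parameters.

```python
# Hedged sketch of harmonic (F1, F2, F3) magnitude extraction from a steady-state VEP.
import numpy as np

fs = 1000.0                      # sampling rate in Hz (illustrative)
stim_hz = 7.52                   # flicker (stimulus) frequency
t = np.arange(0, 2.0, 1 / fs)    # 2-second epoch
vep = np.sin(2 * np.pi * stim_hz * t) + 0.4 * np.sin(2 * np.pi * 2 * stim_hz * t)  # toy signal

spectrum = np.abs(np.fft.rfft(vep)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)

for k, label in zip((1, 2, 3), ("F1", "F2", "F3")):
    idx = np.argmin(np.abs(freqs - k * stim_hz))       # nearest FFT bin to the k-th harmonic
    print(f"{label} ({k * stim_hz:.2f} Hz): {spectrum[idx]:.3f}")
```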

  15. An exploratory study: prolonged periods of binocular stimulation can provide an effective treatment for childhood amblyopia.

    Science.gov (United States)

    Knox, Pamela J; Simmers, Anita J; Gray, Lyle S; Cleary, Marie

    2012-02-21

    The purpose of the present study was to explore the potential for treating childhood amblyopia with a binocular stimulus designed to correlate the visual input from both eyes. Eight strabismic, two anisometropic, and four strabismic and anisometropic amblyopes (mean age, 8.5 ± 2.6 years) undertook a dichoptic perceptual learning task for five sessions (each lasting 1 hour) over the course of a week. The training paradigm involved a simple computer game, which required the subject to use both eyes to perform the task. A statistically significant improvement (t(₁₃) = 5.46; P = 0.0001) in the mean visual acuity (VA) of the amblyopic eye (AE) was demonstrated, from 0.51 ± 0.27 logMAR before training to 0.42 ± 0.28 logMAR after training with six subjects gaining 0.1 logMAR or more of improvement. Measurable stereofunction was established for the first time in three subjects with an overall significant mean improvement in stereoacuity after training (t(₁₃) =2.64; P = 0.02). The dichoptic-based perceptual learning therapy employed in the present study improved both the monocular VA of the AE and stereofunction, verifying the feasibility of a binocular approach in the treatment of childhood amblyopia.

  16. [Improvement of power and illumination source of the indirect binocular ophthalmoscope designed by Foerster].

    Science.gov (United States)

    Leitritz, M A; Oltrup, T; Umesh Babu, H; Bende, T; Bartz-Schmidt, K U; Foerster, M H

    2013-08-01

    Since 1982, the indirect binocular ophthalmoscope designed by Foerster has been in use in ophthalmology. The option to implement a new illumination technique using a light-emitting diode (LED) and a new power source should be evaluated in terms of technical feasibility and patient safety. The cooling element was redesigned to accommodate the new LED electronics and their components, including an option for a variable brightness control. A more compact rechargeable battery was utilized with variable fixation at the headband or elsewhere. Photometric measurements of light intensity and the operating time were planned. Furthermore, a review of the new lighting technology in terms of EN ISO 15004-2 and EN ISO 10943 was necessary. Technical adjustments to accommodate the LED inside the cooling element could be realised. The power source was a modern rechargeable lithium-ion battery with variable fixation. The luminous intensity of the LED is superior to that of the halogen lamp and the operating time was increased to 520 minutes. The required limits according to DIN EN ISO 15004-2 for ophthalmic devices were met by our measurements. The optimisation of the indirect binocular ophthalmoscope brings improvements in illumination intensity and operating time. A conversion for models already in use is possible. A certified appraisal for compliance with the appropriate standards is the next step. Georg Thieme Verlag KG Stuttgart · New York.

  17. Recentering bias for temporal saccades only: Evidence from binocular recordings of eye movements.

    Science.gov (United States)

    Tagu, Jérôme; Doré-Mazars, Karine; Vergne, Judith; Lemoine-Lardennois, Christelle; Vergilino-Perez, Dorine

    2018-01-01

    It is well known that the saccadic system presents multiple asymmetries. Notably, temporal (as opposed to nasal) saccades, centripetal (as opposed to centrifugal) saccades (i.e., the recentering bias) and saccades from the abducting eye (as opposed to the concomitant saccades from the adducting eye) exhibit higher peak velocities. However, these naso-temporal and centripetal-centrifugal asymmetries have always been studied separately. It is thus unknown which asymmetry prevails when there is a conflict between both asymmetries, i.e., in case of centripetal nasal saccades or centrifugal temporal saccades. This study involved binocular recordings of eye movements to examine both the naso-temporal and centripetal-centrifugal asymmetries so as to determine how they work together. Twenty-eight participants had to make saccades toward stimuli presented either centrally or in the periphery in binocular conditions. We found that temporal and abducting saccades always exhibit higher peak velocities than nasal and adducting saccades, irrespective of their centripetal or centrifugal nature. However, we showed that the velocity advantage for centripetal saccades is only found for temporal and not for nasal saccades. Such a result is of importance as it could provide new insights about the physiological origins of the asymmetries found in the saccadic system.

  18. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    Science.gov (United States)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

    Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, in which color information is not sufficiently considered. However, color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows a parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than state-of-the-art SIQA methods.
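
    The parts-based idea can be illustrated with plain (unregularised) NMF from scikit-learn: a dictionary learned from colour-feature patches plays the role of the "feature detector", and the encodings give parts-based feature vectors. The manifold-regularised NMF of the paper is not available off the shelf, so this is only a baseline illustration with random data.

```python
# Hedged sketch of parts-based feature learning with plain NMF.
import numpy as np
from sklearn.decomposition import NMF

patches = np.random.rand(500, 64)                 # 500 vectorised 8x8 patches (non-negative)
model = NMF(n_components=16, init="nndsvd", max_iter=500)
encodings = model.fit_transform(patches)          # 500 x 16 parts-based representation
dictionary = model.components_                    # 16 x 64 learned basis (the "feature detector")
print(encodings.shape, dictionary.shape)
```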

  19. Binocular rivalry and multi-stable perception: independence and monocular channels.

    Science.gov (United States)

    Quinn, Helen; Arnold, Derek H

    2010-08-12

    When discrepant images are shown to the two eyes, each can intermittently disappear. This is known as binocular rivalry (BR). The causes of BR are debated. One view is that BR is driven by a low-level visual process, characterized by competition between monocular channels. Another is that BR is driven by higher level processes involved in interpreting ambiguous input. This would link BR to other phenomena, wherein perception changes without input changes. We reasoned that if this were true, the timing of BR changes might be related to the timing of changes in other multi-stable stimuli. We tested this using combinations of simple (orthogonal gratings) and complex (pictures of houses and faces) stimuli. We also presented simple stimuli in conjunction with a stimulus that induced an ambiguous direction of rotation. We found that the timing of simple BR changes was unrelated to the timing of either complex BR changes or to direction changes within an ambiguous rotation. However, the timings of changes within proximate BR stimuli, both simple and complex, were related, but only when similar images were encoded in the same monocular channels. These observations emphasize the importance of monocular channel interactions in determining the timing of binocular rivalry changes.

  20. Impact of the severity of distance and near-vision impairment on depression and vision-specific quality of life in older people living in residential care.

    Science.gov (United States)

    Lamoureux, Ecosse L; Fenwick, Eva; Moore, Kirsten; Klaic, Marlena; Borschmann, Karen; Hill, Keith

    2009-09-01

    To determine the relationship between the severity of distance and near-vision impairment on vision-specific quality of life (QoL) and depression in residential care residents. Residents from three low-level residential care facilities in Victoria (Australia) were recruited. All participants were assessed for cognitive impairment, distance and near-vision impairment (VI), and depression. Sociodemographic and other clinical data were also collected. The subscales of the Nursing Home Vision-Targeted Health-Related Quality-of-Life questionnaire (NHVQoL) were the main outcome measures and were validated by Rasch Analysis. Seventy-six residents were enrolled. The mean +/- SD of the participants' age was 83.9 +/- 9.9 years, and most were women (n = 44; 60%); 46.4% (n = 35) had binocular presenting VI (worse than N8); 16% (n = 14) recorded depression symptoms, although depression was not associated with VI (P > 0.05). In linear regression models, distance and near VI was independently associated with poorer QoL on seven of the eight subscales of the NHVQoL scale (P vision loss had poorer QoL, ranging between 12 and 80 points (scale range: 0-100) than did those with no VI. The QoL aspects most affected by vision loss were related to general vision, reading, hobbies, emotional well-being, and social interaction. VI remains a major form of disability in individuals living in residential care facilities and affects vision-specific functioning and socioemotional aspects of daily living. A larger study is needed to confirm these findings.

  1. Field study of sound exposure by personal stereo

    DEFF Research Database (Denmark)

    Ordoñez, Rodrigo Pizarro; Reuter, Karen; Hammershøi, Dorte

    2006-01-01

    A number of large scale studies suggest that the exposure level used with personal stereo systems should raise concern. High levels can be produced by most commercially available mp3 players, and they are generally used in high background noise levels (i.e., while in a bus or train). A field study on young people's habitual sound exposure to personal stereos has been carried out using a measurement method according to principles of ISO 11904-2:2004. Additionally, the state of their hearing has also been assessed. This presentation deals with the methodological aspects relating to the quantification of habitual use, estimation of listening levels and exposure levels, and assessment of their state of hearing, by either threshold determination or OAE measurement, with a special view to the general validity of the results (uncertainty factors and their magnitude).

  2. How the Venetian Blind Percept Emerges from the Laminar Cortical Dynamics of 3D Vision

    Directory of Open Access Journals (Sweden)

    Stephen Grossberg

    2014-08-01

    Full Text Available The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model shows how identified neurons that interact in hierarchically organized laminar circuits of the visual cortex can simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in surface representations are integrated into surface-shroud resonances between visual and parietal cortex. The model describes how monocular and binocular oriented filtering interacts with later stages of 3D boundary formation and surface filling-in in the lateral geniculate nucleus (LGN and cortical areas V1, V2, and V4. It proposes how interactions between layers 4, 3B, and 2/3 in V1 and V2 contribute to stereopsis, and how binocular and monocular information combine to form 3D boundary and surface representations. The model suggests how surface-to-boundary feedback from V2 thin stripes to pale stripes enables computationally complementary boundary and surface formation properties to generate a single consistent percept, eliminate redundant 3D boundaries, and trigger figure-ground perception. The model also shows how false binocular boundary matches may be eliminated by Gestalt grouping properties. In particular, a disparity filter, which helps to solve the Correspondence Problem by eliminating false matches, is predicted to be realized as part of the boundary grouping process in layer 2/3 of cortical area V2. The model has been used to simulate the consciously seen 3D surface percepts in 18 psychophysical experiments. These percepts include the Venetian blind effect, Panum's limiting case, contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, stereopsis with polarity

  3. The STEREO Mission: A New Approach to Space Weather Research

    Science.gov (United States)

    Kaiser, Michael L.

    2006-01-01

    With the launch of the twin STEREO spacecraft in July 2006, a new capability will exist for both real-time space weather predictions and for advances in space weather research. Whereas previous spacecraft monitors of the sun such as ACE and SOHO have been essentially on the sun-Earth line, the STEREO spacecraft will be in 1 AU orbits around the sun on either side of Earth and will be viewing the solar activity from distinctly different vantage points. As seen from the sun, the two spacecraft will separate at a rate of 45 degrees per year, with Earth bisecting the angle. The instrument complement on the two spacecraft will consist of a package of optical instruments capable of imaging the sun in the visible and ultraviolet from essentially the surface to 1 AU and beyond, a radio burst receiver capable of tracking solar eruptive events from an altitude of 2-3 Rs to 1 AU, and a comprehensive set of fields and particles instruments capable of measuring in situ solar events such as interplanetary magnetic clouds. In addition to normal daily recorded data transmissions, each spacecraft is equipped with a real-time beacon that will provide 1 to 5 minute snapshots or averages of the data from the various instruments. This beacon data will be received by NOAA and NASA tracking stations and then relayed to the STEREO Science Center located at Goddard Space Flight Center in Maryland where the data will be processed and made available within a goal of 5 minutes of receipt on the ground. With STEREO's instrumentation and unique view geometry, we believe considerable improvement can be made in space weather prediction capability as well as improved understanding of the three dimensional structure of solar transient events.

  4. Lossless Compression of Stereo Disparity Maps for 3D

    DEFF Research Database (Denmark)

    Zamarin, Marco; Forchhammer, Søren

    2012-01-01

    . The coding algorithm is based on bit-plane coding, disparity prediction via disparity warping and context-based arithmetic coding exploiting predicted disparity data. Experimental results show that the proposed compression scheme achieves average compression factors of about 48:1 for high resolution...... disparity maps for stereo pairs and outperforms different standard solutions for lossless still image compression. Moreover, it provides a progressive representation of disparity data as well as a parallelizable structure....
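
    As a toy illustration of the bit-plane representation that the coding scheme above starts from (the warping-based disparity prediction and the context-based arithmetic coder are not reproduced), the Python/NumPy sketch below splits an integer disparity map into bit planes; all names and values are illustrative assumptions, not the authors' code.

        import numpy as np

        def bit_planes(disparity, n_bits=8):
            # Split an integer disparity map into bit planes, most significant first.
            d = disparity.astype(np.uint16)
            return [((d >> b) & 1).astype(np.uint8) for b in range(n_bits - 1, -1, -1)]

        # Tiny synthetic disparity map (values fit in 3 bits).
        d = np.array([[3, 3, 4],
                      [3, 4, 4]], dtype=np.uint16)
        planes = bit_planes(d, n_bits=3)
        # planes[0] holds the most significant bits; a lossless codec of the kind
        # described above would entropy-code each plane with contexts drawn from
        # already-decoded planes and from a disparity prediction warped from the
        # other view of the stereo pair.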

  5. Cellular neural networks for the stereo matching problem

    International Nuclear Information System (INIS)

    Taraglio, S.; Zanela, A.

    1997-03-01

    The applicability of the Cellular Neural Network (CNN) paradigm to the problem of recovering information on the tridimensional structure of the environment is investigated. The approach proposed is the stereo matching of video images. The starting point of this work is the Zhou-Chellappa neural network implementation for the same problem. The CNN based system we present here yields the same results as the previous approach, but without the many existing drawbacks

  6. A flexible calibration method for laser displacement sensors based on a stereo-target

    International Nuclear Information System (INIS)

    Zhang, Jie; Sun, Junhua; Liu, Zhen; Zhang, Guangjun

    2014-01-01

    Laser displacement sensors (LDSs) are widely used in online measurement owing to their non-contact operation, high measurement speed and other advantages. However, existing calibration methods for LDSs based on the traditional triangulation measurement model are time-consuming and tedious to operate. In this paper, a calibration method for LDSs based on a vision measurement model of the LDS is presented. According to the constraint relationships of the model parameters, the calibration is implemented by freely moving a stereo-target at least twice in the field of view of the LDS. Both simulation analyses and real experiments were conducted. Experimental results demonstrate that the calibration method achieves an accuracy of 0.044 mm within the measurement range of about 150 mm. Compared to traditional calibration methods, the proposed method has no special limitation on the relative position of the LDS and the target. The linearity approximation of the measurement model is not needed in the calibration, and thus the measurement range is not limited to the linear range. The calibration is easy and quick to implement, and the method can be applied in wider fields. (paper)

  7. Jane Addams’ Social Vision

    DEFF Research Database (Denmark)

    Villadsen, Kaspar

    2018-01-01

    resonated with key tenets of social gospel theology, which imbued her texts with an overarching vision of humanity’s progressive history. It is suggested that Addams’ vision of a major transition in industrial society, one involving a “Christian renaissance” and individuals’ transformation into “socialized...

  8. Copenhagen Energy Vision

    DEFF Research Database (Denmark)

    Mathiesen, Brian Vad; Lund, Rasmus Søgaard; Connolly, David

    The short-term goal for The City of Copenhagen is a CO2 neutral energy supply by the year 2025, and the long-term vision for Denmark is a 100% renewable energy (RE) supply by the year 2050. In this project, it is concluded that Copenhagen plays a key role in this transition. The long-term vision...

  9. Multispectral imaging using a stereo camera: concept, design and assessment

    Directory of Open Access Journals (Sweden)

    Mansouri Alamin

    2011-01-01

    Full Text Available Abstract This paper proposes a one-shot six-channel multispectral color image acquisition system using a stereo camera and a pair of optical filters. The best pair of filters is selected from among readily available filters so that they modify the sensitivities of the two cameras in a way that yields optimal estimation of spectral reflectance and/or color; the chosen filters are placed in front of the two lenses of the stereo camera. The two images acquired from the stereo camera are then registered for pixel-to-pixel correspondence. The spectral reflectance and/or color at each pixel of the scene are estimated from the corresponding camera outputs in the two images. Both simulations and experiments have shown that the proposed system performs well both spectrally and colorimetrically. Since it acquires the multispectral images in one shot, the proposed system overcomes the limitations of slow and complex acquisition and the costliness of state-of-the-art multispectral imaging systems, opening the way to widespread applications.
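
    The spectral estimation step described above can be pictured as a linear regression from the registered six-channel responses back to reflectance. The Python/NumPy sketch below uses synthetic training data and a plain least-squares estimator; the sensitivities, data and the linear model are assumptions for illustration, not the system described in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n_wl = 31                                # e.g. 400-700 nm sampled every 10 nm
        R_train = rng.random((200, n_wl))        # known training reflectances
        S = rng.random((n_wl, 6))                # stand-in for filter+sensor sensitivities
        C_train = R_train @ S                    # simulated six-channel camera responses

        # Least-squares mapping from camera responses back to spectral reflectance.
        W, *_ = np.linalg.lstsq(C_train, R_train, rcond=None)

        # Estimate the reflectance at one pixel from its registered six-channel response.
        c_pixel = R_train[0] @ S
        r_est = c_pixel @ W                      # length-31 estimated reflectance spectrum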

  10. Discriminability limits in spatio-temporal stereo block matching.

    Science.gov (United States)

    Jain, Ankit K; Nguyen, Truong Q

    2014-05-01

    Disparity estimation is a fundamental task in stereo imaging and is a well-studied problem. Recently, methods have been adapted to the video domain where motion is used as a matching criterion to help disambiguate spatially similar candidates. In this paper, we analyze the validity of the underlying assumptions of spatio-temporal disparity estimation, and determine the extent to which motion aids the matching process. By analyzing the error signal for spatio-temporal block matching under the sum of squared differences criterion and treating motion as a stochastic process, we determine the probability of a false match as a function of image features, motion distribution, image noise, and number of frames in the spatio-temporal patch. This performance quantification provides insight into when spatio-temporal matching is most beneficial in terms of the scene and motion, and can be used as a guide to select parameters for stereo matching algorithms. We validate our results through simulation and experiments on stereo video.
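
    A bare-bones version of the spatio-temporal matching analysed above can be written as an SSD search over a block that extends across several frames. The Python/NumPy sketch below assumes rectified, temporally aligned grayscale frame stacks and an interior pixel, and ignores motion compensation and noise modelling; it illustrates the matching criterion, not the authors' analysis.

        import numpy as np

        def ssd(a, b):
            d = a.astype(np.float64) - b.astype(np.float64)
            return float(np.sum(d * d))

        def best_disparity(left, right, row, col, block=5, max_disp=32):
            # left, right: (T, H, W) stacks of rectified grayscale frames.
            # Returns the disparity minimising SSD over the spatio-temporal block.
            h = block // 2
            ref = left[:, row - h:row + h + 1, col - h:col + h + 1]
            costs = []
            for d in range(max_disp + 1):
                c = col - d
                if c - h < 0:
                    break
                cand = right[:, row - h:row + h + 1, c - h:c + h + 1]
                costs.append(ssd(ref, cand))
            return int(np.argmin(costs))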

  11. Time for a Change; Spirit's View on Sol 1843 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11973 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11973 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, full-circle view of the rover's surroundings during the 1,843rd Martian day, or sol, of Spirit's surface mission (March 10, 2009). South is in the middle. North is at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 36 centimeters downhill earlier on Sol 1843, but had not been able to get free of ruts in soft material that had become an obstacle to getting around the northeastern corner of the low plateau called 'Home Plate.' The Sol 1843 drive, following two others in the preceding four sols that also achieved little progress in the soft ground, prompted the rover team to switch to a plan of getting around Home Plate counterclockwise, instead of clockwise. The drive direction in subsequent sols was westward past the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  12. View Ahead After Spirit's Sol 1861 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11977 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11977 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this stereo, 210-degree view of the rover's surroundings during the 1,861st to 1,863rd Martian days, or sols, of Spirit's surface mission (March 28 to 30, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the scene is toward the south-southwest. East is on the left. West-northwest is on the right. The rover had driven 22.7 meters (74 feet) southwestward on Sol 1861 before beginning to take the frames in this view. The drive brought Spirit past the northwestern corner of Home Plate. In this view, the western edge of Home Plate is on the portion of the horizon farthest to the left. A mound in middle distance near the center of the view is called 'Tsiolkovsky' and is about 40 meters (about 130 feet) from the rover's position. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  13. Dust Devil in Spirit's View Ahead on Sol 1854 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11960 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11960 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,854th Martian day, or sol, of Spirit's surface mission (March 21, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 13.79 meters (45 feet) westward earlier on Sol 1854. West is at the center, where a dust devil is visible in the distance. North is on the right, where Husband Hill dominates the horizon; Spirit was on top of Husband Hill in September and October 2005. South is on the left, where lighter-toned rock lines the edge of the low plateau called 'Home Plate.' This view is presented as a cylindrical-perspective projection with geometric seam correction.

  14. Sensitivity Monitoring of the SECCHI COR1 Telescopes on STEREO

    Science.gov (United States)

    Thompson, William T.

    2018-03-01

    Measurements of bright stars passing through the fields of view of the inner coronagraphs (COR1) on board the Solar Terrestrial Relations Observatory (STEREO) are used to monitor changes in the radiometric calibration over the course of the mission. Annual decline rates are found to be 0.648 ± 0.066%/year for COR1-A on STEREO Ahead and 0.258 ± 0.060%/year for COR1-B on STEREO Behind. These rates are consistent with decline rates found for other space-based coronagraphs in similar radiation environments. The theorized cause for the decline in sensitivity is darkening of the lenses and other optical elements due to exposure to high-energy solar particles and photons, although other causes are also possible. The total decline in the COR1-B sensitivity when contact with Behind was lost on 1 October 2014 was 1.7%, while COR1-A was down by 4.4%. As of 1 November 2017, the COR1-A decline is estimated to be 6.4%. The SECCHI calibration routines will be updated to take these COR1 decline rates into account.

  15. Parallel Computer System for 3D Visualization Stereo on GPU

    Science.gov (United States)

    Al-Oraiqat, Anas M.; Zori, Sergii A.

    2018-03-01

    This paper proposes the organization of a parallel computer system based on a Graphics Processing Unit (GPU) for 3D stereo image synthesis. The development is based on the modified ray tracing method developed by the authors for fast search of the intersections of traced rays with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. The generalized procedure of 3D stereo image synthesis on the Graphics Processing Unit/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed solutions by GPU implementation is compared with single-threaded and multithreaded implementations on the CPU. The achieved average acceleration in multi-thread implementation on the test GPU and CPU is about 7.5 and 1.6 times, respectively. Studying the influence of the size and configuration of the computational Compute Unified Device Architecture (CUDA) network on the computational speed shows the importance of their correct selection. The obtained experimental estimations can be significantly improved by new GPUs with a large number of processing cores and multiprocessors, as well as optimized configuration of the computing CUDA network.
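
    For the stereo part of the synthesis described above, the essential geometric step is rendering the scene twice from two camera origins separated by the interaxial baseline. The Python sketch below shows only that offset for a parallel-axis configuration; the baseline value and names are assumptions for illustration, and the authors' modified ray-tracing and CUDA details are not reproduced.

        import numpy as np

        def stereo_eye_positions(center, right_axis, baseline):
            # Offset the virtual camera along its (normalised) right axis to obtain
            # the two eye positions of a parallel-axis stereo rig.
            half = 0.5 * baseline * right_axis / np.linalg.norm(right_axis)
            return center - half, center + half

        left_eye, right_eye = stereo_eye_positions(
            center=np.array([0.0, 1.7, 5.0]),
            right_axis=np.array([1.0, 0.0, 0.0]),
            baseline=0.065,  # assumed ~65 mm interaxial distance
        )
        # Each eye position seeds its own ray-tracing pass (on the GPU in the paper);
        # the two rendered images together form the stereo pair.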

  16. Surface Organization Influences Bistable Vision

    Science.gov (United States)

    Graf, Erich W.; Adams, Wendy J.

    2008-01-01

    A priority for the visual system is to construct 3-dimensional surfaces from visual primitives. Information is combined across individual cues to form a robust representation of the external world. Here, it is shown that surface completion relying on multiple visual cues influences relative dominance during binocular rivalry. The shape of a…

  17. Machine Vision Handbook

    CERN Document Server

    2012-01-01

    The automation of visual inspection is becoming more and more important in modern industry as a consistent, reliable means of judging the quality of raw materials and manufactured goods. The Machine Vision Handbook equips the reader with the practical details required to engineer integrated mechanical-optical-electronic-software systems. Machine vision is first set in the context of basic information on light, natural vision, colour sensing and optics. The physical apparatus required for mechanized image capture – lenses, cameras, scanners and light sources – are discussed followed by detailed treatment of various image-processing methods including an introduction to the QT image processing system. QT is unique to this book, and provides an example of a practical machine vision system along with extensive libraries of useful commands, functions and images which can be implemented by the reader. The main text of the book is completed by studies of a wide variety of applications of machine vision in insp...

  18. Light Vision Color

    Science.gov (United States)

    Valberg, Arne

    2005-04-01

    Light Vision Color takes a well-balanced, interdisciplinary approach to our most important sensory system. The book successfully combines basics in vision sciences with recent developments from different areas such as neuroscience, biophysics, sensory psychology and philosophy. Originally published in 1998, this edition has been extensively revised and updated to include new chapters on clinical problems and eye diseases, low vision rehabilitation and the basic molecular biology and genetics of colour vision. Takes a broad interdisciplinary approach combining basics in vision sciences with the most recent developments in the area. Includes an extensive list of technical terms and explanations to encourage student understanding. Successfully brings together the most important areas of the subject into one volume.

  19. Design and control of active vision based mechanisms for intelligent robots

    Science.gov (United States)

    Wu, Liwei; Marefat, Michael M.

    1994-01-01

    In this paper, we propose a design of an active vision system for intelligent robot applications. The system has the degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets, using an evaluation function that represents human visual behavior in response to outside stimuli, is suggested. We also characterize the different visual tasks of the two cameras for vergence control purposes, and a phase-based method based on binarized images to extract vergence disparity for vergence control is presented. A control algorithm for vergence control is discussed.
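
    The vergence-disparity extraction mentioned above is phase-based; as a generic stand-in (not the authors' exact method), the Python/NumPy sketch below estimates the dominant horizontal shift between two binarized foveal patches by phase correlation.

        import numpy as np

        def vergence_disparity(left_patch, right_patch):
            # Phase correlation between the two patches; the peak location gives the
            # dominant translation, whose horizontal component approximates the
            # vergence disparity to be nulled by the vergence controller.
            F1 = np.fft.fft2(left_patch.astype(float))
            F2 = np.fft.fft2(right_patch.astype(float))
            cross = F1 * np.conj(F2)
            cross /= np.abs(cross) + 1e-9          # keep phase information only
            corr = np.fft.ifft2(cross).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            dx = peak[1]
            if dx > left_patch.shape[1] // 2:       # wrap-around for negative shifts
                dx -= left_patch.shape[1]
            return dx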

  20. Binocular reflexes in the first 6 months of life: preliminary results of a study of normal infants.

    Science.gov (United States)

    Coakes, R L; Clothier, C; Wilson, A

    1979-01-01

    The development of the binocular reflexes during the first 6 months of life was studied in 38 normal infants. Preliminary results indicate that the following reflex is well established by 2 months, convergence by 3 months and the corrective fusion reflex by the age of 5 months.

  1. Vessels 6-DOF poses measurement based on key points tracking via binocular camera

    Science.gov (United States)

    Ji, Zhengnan; Tao, Limin; Cui, Wei; Lv, Wei

    2017-07-01

    Accurate offshore replenishment technology is a foundation of ocean research for every country. However, it is difficult to keep hoisting safe and accurate because waves cause vessels to undergo 6-DOF motions. Against the background of accurate offshore supply, and treating the vessel as a rigid body, this paper investigates algorithms for the AHC (Active Heaving Compensation) detection system. A binocular camera installed on the hoisting equipment calculates the 6-DOF pose of the vessel by detecting landmarks on the deck. The system supports all-weather operation, adopting the Shi-Tomasi algorithm to detect feature points and the L-K optical flow algorithm to track them with sub-pixel accuracy. Lastly, the scheme has been verified on a 6-DOF motion platform, which indicates that its accuracy meets the requirements of the subsequent control experiment.
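
    The detection-and-tracking step described above maps directly onto standard OpenCV calls. The Python sketch below is a minimal stand-in under assumed file names and parameters; triangulating the tracked landmarks across the two cameras and solving for the 6-DOF pose are not shown.

        import cv2

        # Hypothetical consecutive frames from one camera of the binocular rig.
        prev_gray = cv2.imread("deck_t0.png", cv2.IMREAD_GRAYSCALE)
        next_gray = cv2.imread("deck_t1.png", cv2.IMREAD_GRAYSCALE)

        # Shi-Tomasi corners on the deck landmarks.
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                      qualityLevel=0.01, minDistance=10)

        # Pyramidal Lucas-Kanade tracking with sub-pixel termination criteria.
        next_pts, status, err = cv2.calcOpticalFlowPyrLK(
            prev_gray, next_gray, pts, None,
            winSize=(21, 21), maxLevel=3,
            criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

        good_old = pts[status.ravel() == 1]
        good_new = next_pts[status.ravel() == 1]
        # Matching the same landmarks in the second camera and triangulating them
        # yields 3D points from which the vessel's 6-DOF pose can be solved.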

  2. Visions of the City

    DEFF Research Database (Denmark)

    Pinder, David

    Visions of the City is a dramatic account of utopian urbanism in the twentieth century. It explores radical demands for new spaces and ways of living, and considers their effects on planning, architecture and struggles to shape urban landscapes. Such visions, it shows, have played a crucial role...... to transform urban space and everyday life. He addresses in particular Constant's vision of New Babylon, finding within his proposals for future spaces produced through nomadic life, creativity and play a still powerful challenge to imagine cities otherwise. The book not only recovers vital moments from past...

  3. Anchoring visions in organizations

    DEFF Research Database (Denmark)

    Simonsen, Jesper

    1999-01-01

    This paper introduces the term 'anchoring' within systems development: Visions, developed through early systems design within an organization, need to be deeply rooted in the organization. A vision's rationale needs to be understood by those who decide if the vision should be implemented as well...... as by those involved in the actual implementation. A model depicting a recent trend within systems development is presented: Organizations rely on purchasing generic software products and/or software development outsourced to external contractors. A contemporary method for participatory design, where...

  4. A Robust Vision Module for Humanoid Robotic Ping-Pong Game

    Directory of Open Access Journals (Sweden)

    Xiaopeng Chen

    2015-04-01

    Full Text Available Developing a vision module for a humanoid ping-pong game is challenging due to the spin and the non-linear rebound of the ping-pong ball. In this paper, we present a robust predictive vision module to overcome these problems. The hardware of the vision module is composed of two stereo camera pairs with each pair detecting the 3D positions of the ball on one half of the ping-pong table. The software of the vision module divides the trajectory of the ball into four parts and uses the perceived trajectory in the first part to predict the other parts. In particular, the software of the vision module uses an aerodynamic model to predict the trajectories of the ball in the air and uses a novel non-linear rebound model to predict the change of the ball's motion during rebound. The average prediction error of our vision module at the ball returning point is less than 50 mm - a value small enough for standard-sized ping-pong rackets. Its average processing speed is 120 fps. The precision and efficiency of our vision module enable two humanoid robots to play ping-pong continuously for more than 200 rounds.
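
    The aerodynamic prediction described above amounts to integrating a flight model forward from the perceived part of the trajectory. The Python sketch below integrates only gravity and quadratic drag with an explicit Euler step; the drag coefficient, time step and the omission of spin (Magnus) forces and the rebound model are assumptions for illustration.

        import numpy as np

        RHO = 1.2           # air density [kg/m^3]
        CD = 0.4            # assumed drag coefficient of a ping-pong ball
        R = 0.02            # ball radius [m]
        M = 0.0027          # ball mass [kg]
        A = np.pi * R ** 2  # cross-sectional area [m^2]
        G = np.array([0.0, 0.0, -9.81])

        def step(pos, vel, dt=0.002):
            # One explicit-Euler step of the gravity-plus-drag flight model.
            speed = np.linalg.norm(vel)
            acc = G - 0.5 * RHO * CD * A * speed * vel / M
            vel = vel + acc * dt
            return pos + vel * dt, vel

        pos = np.array([0.0, 0.0, 0.30])   # position estimated from the first trajectory part [m]
        vel = np.array([3.0, 0.0, 1.0])    # velocity estimated from the first trajectory part [m/s]
        while pos[2] > 0.0:                # predict until the ball reaches table height
            pos, vel = step(pos, vel)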

  5. Distribution of ametropia in 1 170 preschool children with low vision

    Directory of Open Access Journals (Sweden)

    Li-Li Sun

    2016-03-01

    Full Text Available AIM: To observe and study the distribution of ametropia in 1 170 preschool children with low vision. METHODS: Ten kindergartens in the urban area of Jinzhou were randomly selected. For the preschool children aged from 3 to 6, the vision conditions including sight test, ocular inspection, refraction status, conventional ophthalmic testing and stereo tests were assessed. The children with low vision were examined further. RESULTS: (1) Rates of abnormal vision were 6.37% in children aged 3, 7.79% in those aged 4, 15.24% in those aged 5 and 8.93% in those aged 6; the abnormal rate in children aged 5 was significantly higher than those in the other age groups (P<0.05). CONCLUSION: For preschool children with low vision, the abnormal rate decreases with increasing age as vision gradually matures. Based on the results, hyperopia is the main cause of low vision in preschool children (aged from 3 to 6). Given the contributing factors of myopia, strabismus and amblyopia, it is important to conduct a general survey so that eye diseases can be treated as early as possible.

  6. Reinforcement of perceptual inference: reward and punishment alter conscious visual perception during binocular rivalry

    Directory of Open Access Journals (Sweden)

    Gregor eWilbertz

    2014-12-01

    Full Text Available Perception is an inferential process, which becomes immediately evident when sensory information is conflicting or ambiguous and thus allows for more than one perceptual interpretation. Thinking the idea of perception as inference through to the end results in a blurring of boundaries between perception and action selection, as perceptual inference implies the construction of a percept as an active process. Here we therefore wondered whether perception shares a key characteristic of action selection, namely that it is shaped by reinforcement learning. In two behavioral experiments, we used binocular rivalry to examine whether perceptual inference can be influenced by the association of perceptual outcomes with reward or punishment, respectively, in analogy to instrumental conditioning. Binocular rivalry was evoked by two orthogonal grating stimuli presented to the two eyes, resulting in perceptual alternations between the two gratings. Perception was tracked indirectly and objectively through a target detection task, which allowed us to preclude potential reporting biases. Monetary rewards or punishments were given repeatedly during perception of only one of the two rivalling stimuli. We found an increase in dominance durations for the percept associated with reward, relative to the non-rewarded percept. In contrast, punishment led to a relative increase of the non-punished percept and a decrease of the punished percept. Our results show that perception shares key characteristics with action selection, in that it is influenced by reward and punishment in opposite directions, thus narrowing the gap between the conceptually separated domains of perception and action selection. We conclude that perceptual inference is an adaptive process that is shaped by its consequences.

  7. Dense GPU-enhanced surface reconstruction from stereo endoscopic images for intraoperative registration.

    Science.gov (United States)

    Rohl, Sebastian; Bodenstedt, Sebastian; Suwelack, Stefan; Dillmann, Rudiger; Speidel, Stefanie; Kenngott, Hannes; Muller-Stich, Beat P

    2012-03-01

    In laparoscopic surgery, soft tissue deformations substantially change the surgical site, thus impeding the use of preoperative planning during intraoperative navigation. Extracting depth information from endoscopic images and building a surface model of the surgical field-of-view is one way to represent this constantly deforming environment. The information can then be used for intraoperative registration. Stereo reconstruction is a typical problem within computer vision. However, most of the available methods do not fulfill the specific requirements in a minimally invasive setting such as the need for real-time performance, the problem of view-dependent specular reflections and large curved areas with partly homogeneous or periodic textures and occlusions. In this paper, the authors present an approach toward intraoperative surface reconstruction based on stereo endoscopic images. The authors describe their answer to this problem through correspondence analysis, disparity correction and refinement, 3D reconstruction, point cloud smoothing and meshing. Real-time performance is achieved by implementing the algorithms on the GPU. The authors also present a new hybrid CPU-GPU algorithm that unifies the advantages of the CPU and the GPU version. In a comprehensive evaluation using in vivo data, in silico data from the literature and virtual data from a newly developed simulation environment, the CPU, the GPU, and the hybrid CPU-GPU versions of the surface reconstruction are compared to a CPU and a GPU algorithm from the literature. The recommended approach toward intraoperative surface reconstruction can be conducted in real time depending on the image resolution (20 fps for the GPU and 14 fps for the hybrid CPU-GPU version at a resolution of 640 × 480). It is robust to homogeneous regions without texture, large image changes, noise or errors from camera calibration, and it reconstructs the surface down to sub-millimeter accuracy. In all the experiments within the
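
    The pipeline above (correspondence analysis, disparity refinement, 3D reconstruction) can be approximated with off-the-shelf OpenCV building blocks. The Python sketch below uses semi-global matching and disparity reprojection as generic stand-ins under assumed file names; it is not the authors' GPU implementation, and the smoothing and meshing stages are omitted.

        import cv2
        import numpy as np

        # Rectified left/right endoscopic frames (hypothetical file names).
        left = cv2.imread("endo_left.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("endo_right.png", cv2.IMREAD_GRAYSCALE)

        # Semi-global block matching as a generic correspondence stand-in.
        sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
        disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

        # Q: 4x4 reprojection matrix from stereo rectification (assumed precomputed).
        Q = np.load("Q.npy")
        points_3d = cv2.reprojectImageTo3D(disparity, Q)
        # Point-cloud smoothing and meshing would follow before intraoperative registration.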

  8. Robotic vision system for random bin picking with dual-arm robots

    Directory of Open Access Journals (Sweden)

    Kang Sangseung

    2016-01-01

    Full Text Available Random bin picking is one of the most challenging industrial robotics applications available. It constitutes a complicated interaction between the vision system, robot, and control system. For a packaging operation requiring a pick-and-place task, the robot system utilized should be able to perform certain functions for recognizing the applicable target object from randomized objects in a bin. In this paper, we introduce a robotic vision system for bin picking using industrial dual-arm robots. The proposed system recognizes the best object from randomized target candidates based on stereo vision, and estimates the position and orientation of the object. It then sends the result to the robot control system. The system was developed for use in the packaging process of cell phone accessories using dual-arm robots.

  9. Eyeglasses for Vision Correction

    Science.gov (United States)

    ... light. Another option for vision correction with UV protection is prescription sunglasses. Also, for people who prefer one set of eyeglasses for both inside and outdoors, photochromatic lenses are ...

  10. Your Child's Vision

    Science.gov (United States)

    ... rubbing extreme light sensitivity poor focusing poor visual tracking (following an object) abnormal alignment or movement of ...

  11. Vision Loss, Sudden

    Science.gov (United States)

    ... the nerves that carry visual signals from the eye to the brain (the optic nerve and the visual pathways) Light ... of nerve impulses from the back of the eye to the brain will interfere with vision. Legal blindness is defined ...

  12. Performance of Correspondence Algorithms in Vision-Based Driver Assistance Using an Online Image Sequence Database

    DEFF Research Database (Denmark)

    Klette, Reinhard; Krüger, Norbert; Vaudrey, Tobi

    2011-01-01

    This paper discusses options for testing correspondence algorithms in stereo or motion analysis that are designed or considered for vision-based driver assistance. It introduces a globally available database, with a main focus on testing on video sequences of real-world data. We suggest...... the classification of recorded video data into situations defined by a cooccurrence of some events in recorded traffic scenes. About 100-400 stereo frames (or 4-16 s of recording) are considered a basic sequence, which will be identified with one particular situation. Future testing is expected to be on data...... that report on hours of driving, and multiple hours of long video data may be segmented into basic sequences and classified into situations. This paper prepares for this expected development. This paper uses three different evaluation approaches (prediction error, synthesized sequences, and labeled sequences...

  13. What Drives Bird Vision? Bill Control and Predator Detection Overshadow Flight

    Directory of Open Access Journals (Sweden)

    Graham R. Martin

    2017-11-01

    Full Text Available Although flight is regarded as a key behavior of birds this review argues that the perceptual demands for its control are met within constraints set by the perceptual demands of two other key tasks: the control of bill (or feet position, and the detection of food items/predators. Control of bill position, or of the feet when used in foraging, and timing of their arrival at a target, are based upon information derived from the optic flow-field in the binocular region that encompasses the bill. Flow-fields use information extracted from close to the bird using vision of relatively low spatial resolution. The detection of food items and predators is based upon information detected at a greater distance and depends upon regions in the retina with relatively high spatial resolution. The tasks of detecting predators and of placing the bill (or feet accurately, make contradictory demands upon vision and these have resulted in trade-offs in the form of visual fields and in the topography of retinal regions in which spatial resolution is enhanced, indicated by foveas, areas, and high ganglion cell densities. The informational function of binocular vision in birds does not lie in binocularity per se (i.e., two eyes receiving slightly different information simultaneously about the same objects but in the contralateral projection of the visual field of each eye. This ensures that each eye receives information from a symmetrically expanding optic flow-field centered close to the direction of the bill, and from this the crucial information of direction of travel and time-to-contact can be extracted, almost instantaneously. Interspecific comparisons of visual fields between closely related species have shown that small differences in foraging techniques can give rise to different perceptual challenges and these have resulted in differences in visual fields even within the same genus. This suggests that vision is subject to continuing and relatively rapid

  14. What Drives Bird Vision? Bill Control and Predator Detection Overshadow Flight.

    Science.gov (United States)

    Martin, Graham R

    2017-01-01

    Although flight is regarded as a key behavior of birds this review argues that the perceptual demands for its control are met within constraints set by the perceptual demands of two other key tasks: the control of bill (or feet) position, and the detection of food items/predators. Control of bill position, or of the feet when used in foraging, and timing of their arrival at a target, are based upon information derived from the optic flow-field in the binocular region that encompasses the bill. Flow-fields use information extracted from close to the bird using vision of relatively low spatial resolution. The detection of food items and predators is based upon information detected at a greater distance and depends upon regions in the retina with relatively high spatial resolution. The tasks of detecting predators and of placing the bill (or feet) accurately, make contradictory demands upon vision and these have resulted in trade-offs in the form of visual fields and in the topography of retinal regions in which spatial resolution is enhanced, indicated by foveas, areas, and high ganglion cell densities. The informational function of binocular vision in birds does not lie in binocularity per se (i.e., two eyes receiving slightly different information simultaneously about the same objects) but in the contralateral projection of the visual field of each eye. This ensures that each eye receives information from a symmetrically expanding optic flow-field centered close to the direction of the bill, and from this the crucial information of direction of travel and time-to-contact can be extracted, almost instantaneously. Interspecific comparisons of visual fields between closely related species have shown that small differences in foraging techniques can give rise to different perceptual challenges and these have resulted in differences in visual fields even within the same genus. This suggests that vision is subject to continuing and relatively rapid natural selection
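
    The time-to-contact information extracted from the expanding flow-field, as described above, is usually summarised by the looming relation tau ≈ theta / (dtheta/dt). A minimal Python illustration with assumed values:

        def time_to_contact(angular_size, expansion_rate):
            # Classic looming approximation: tau ~ theta / (d(theta)/dt),
            # valid for an object approaching at roughly constant speed.
            return angular_size / expansion_rate

        # A target subtending 0.10 rad whose retinal image expands at 0.25 rad/s
        tau = time_to_contact(0.10, 0.25)   # -> 0.4 s until contact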

  15. New Record Five-Wheel Drive, Spirit's Sol 1856 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11962 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11962 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,856th Martian day, or sol, of Spirit's surface mission (March 23, 2009). The center of the view is toward the west-southwest. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 25.82 meters (84.7 feet) west-northwestward earlier on Sol 1856. This is the longest drive on Mars so far by a rover using only five wheels. Spirit lost the use of its right-front wheel in March 2006. Before Sol 1856, the farthest Spirit had covered in a single sol's five-wheel drive was 24.83 meters (81.5 feet), on Sol 1363 (Nov. 3, 2007). The Sol 1856 drive made progress on a route planned for taking Spirit around the western side of the low plateau called 'Home Plate.' A portion of the northwestern edge of Home Plate is prominent in the left quarter of this image, toward the south. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  16. Spirit's View Beside 'Home Plate' on Sol 1823 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11971 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11971 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,823rd Martian day, or sol, of Spirit's surface mission (Feb. 17, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the view is toward the south-southwest. The rover had driven 7 meters (23 feet) eastward earlier on Sol 1823, part of maneuvering to get Spirit into a favorable position for climbing onto the low plateau called 'Home Plate.' However, after two driving attempts with negligible progress during the following three sols, the rover team changed its strategy for getting to destinations south of Home Plate. The team decided to drive Spirit at least partway around Home Plate, instead of ascending the northern edge and taking a shorter route across the top of the plateau. Layered rocks forming part of the northern edge of Home Plate can be seen near the center of the image. Rover wheel tracks are visible at the lower edge. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  17. Opportunity's View After Drive on Sol 1806 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11816 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11816 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 60.86 meters (200 feet) on the 1,806th Martian day, or sol, of Opportunity's surface mission (Feb. 21, 2009). North is at the center; south at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). Engineers designed the Sol 1806 drive to be driven backwards as a strategy to redistribute lubricant in the rover's wheels. The right-front wheel had been showing signs of increased friction. The rover's position after the Sol 1806 drive was about 2 kilometers (1.2 miles) south southwest of Victoria Crater. Cumulative odometry was 14.74 kilometers (9.16 miles) since landing in January 2004, including 2.96 kilometers (1.84 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008). This view is presented as a cylindrical-perspective projection with geometric seam correction.

  18. Opportunity's View After Long Drive on Sol 1770 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11791 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11791 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 104 meters (341 feet) on the 1,770th Martian day, or sol, of Opportunity's surface mission (January 15, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). Prior to the Sol 1770 drive, Opportunity had driven less than a meter since Sol 1713 (November 17, 2008), while it used the tools on its robotic arm first to examine a meteorite called 'Santorini' during weeks of restricted communication while the sun was nearly in line between Mars and Earth, then to examine bedrock and soil targets near Santorini. The rover's position after the Sol 1770 drive was about 1.1 kilometer (two-thirds of a mile) south southwest of Victoria Crater. Cumulative odometry was 13.72 kilometers (8.53 miles) since landing in January 2004, including 1.94 kilometers (1.21 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008). This view is presented as a cylindrical-perspective projection with geometric seam correction.

  19. Stereo optical guidance system for control of industrial robots

    Science.gov (United States)

    Powell, Bradley W. (Inventor); Rodgers, Mike H. (Inventor)

    1992-01-01

    A device for the generation of basic electrical signals which are supplied to a computerized processing complex for the operation of industrial robots. The system includes a stereo mirror arrangement for the projection of views from opposite sides of a visible indicia formed on a workpiece. The views are projected onto independent halves of the retina of a single camera. The camera retina is of the CCD (charge-coupled-device) type and is therefore capable of providing signals in response to the image projected thereupon. These signals are then processed for control of industrial robots or similar devices.

  20. Review of literature on hearing damage by personal stereo

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez, Rodrigo Pizarro

    2006-01-01

    In the 1980s and 1990s there was a general concern for the high levels that personal stereo systems were capable of producing. At that time no standardized method for the determination of exposure levels existed, which could have contributed to overly conservative conclusions. With the publication...... of ISO 11904-1:2002 and 11904-2:2004, previous studies can be viewed in a different light, and the results point, in our opinion, at levels and listening habits that are of hazard to the hearing. The present paper will review previous studies that may shed light over the levels and habits of contemporary......