WorldWideScience

Sample records for monocular 3d vision

  1. 3D display system using monocular multiview displays

    Science.gov (United States)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2002-05-01

    A 3D head-mounted display (HMD) system is useful for constructing a virtual space. The authors have researched virtual-reality systems connected to computer networks for real-time remote control and have developed a low-priced real-time 3D display for building such systems. We developed a 3D HMD system using monocular multi-view displays. The 3D display technique of this monocular multi-view display is based on the super multi-view concept proposed by Kajiki at TAO (Telecommunications Advancement Organization of Japan) in 1996. Our 3D HMD has two monocular multi-view displays (used as a visual display unit) to present separate pictures to the left and right eyes. The left and right images form a stereoscopic pair, so stereoscopic 3D images are observed.

  2. Relating binocular and monocular vision in strabismic and anisometropic amblyopia.

    Science.gov (United States)

    Agrawal, Ritwick; Conner, Ian P; Odom, J V; Schwartz, Terry L; Mendola, Janine D

    2006-06-01

    To examine deficits in monocular and binocular vision in adults with amblyopia and to test the following 2 hypotheses: (1) Regardless of clinical subtype, the degree of impairment in binocular integration predicts the pattern of monocular acuity deficits. (2) Subjects who lack binocular integration exhibit the most severe interocular suppression. Seven subjects with anisometropia, 6 subjects with strabismus, and 7 control subjects were tested. Monocular tests included Snellen acuity, grating acuity, Vernier acuity, and contrast sensitivity. Binocular tests included Titmus stereo test, binocular motion integration, and dichoptic contrast masking. As expected, both groups showed deficits in monocular acuity, with subjects with strabismus showing greater deficits in Vernier acuity. Both amblyopic groups were then characterized according to the degree of residual stereoacuity and binocular motion integration ability, and 67% of subjects with strabismus compared with 29% of subjects with anisometropia were classified as having "nonbinocular" vision according to our criterion. For this nonbinocular group, Vernier acuity is most impaired. In addition, the nonbinocular group showed the most dichoptic contrast masking of the amblyopic eye and the least dichoptic contrast masking of the fellow eye. The degree of residual binocularity and interocular suppression predicts monocular acuity and may be a significant etiological mechanism of vision loss.

  3. Real-Time 3D Motion capture by monocular vision and virtual rendering

    OpenAIRE

    Gomez Jauregui , David Antonio; Horain , Patrick

    2012-01-01

    Avatars in networked 3D virtual environments allow users to interact over the Internet and to get some feeling of virtual telepresence. However, avatar control may be tedious. Motion capture systems based on 3D sensors have recently reached the consumer market, but webcams and camera-phones are more widespread and cheaper. The proposed demonstration aims at animating a user's avatar from real time 3D motion capture by monoscopic computer vision, thus allowing virtual t...

  4. Disseminated neurocysticercosis presenting as isolated acute monocular painless vision loss

    Directory of Open Access Journals (Sweden)

    Gaurav M Kasundra

    2014-01-01

    Neurocysticercosis, the most common parasitic infection of the nervous system, is known to affect the brain, eyes, muscular tissues and subcutaneous tissues. However, it is very rare for patients with ocular cysts to have concomitant cerebral cysts. Also, the dominant clinical manifestation of patients with cerebral cysts is either seizures or headache. We report a patient who presented with acute monocular painless vision loss due to intraocular submacular cysticercosis, who on investigation had multiple cerebral parenchymal cysticercal cysts, but never had any seizures. Although such a vision loss after initiation of antiparasitic treatment has been mentioned previously, acute monocular vision loss as the presenting feature of ocular cysticercosis is rare. We present a brief review of literature along with this case report.

  5. Monocular Vision SLAM for Indoor Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Koray Çelik

    2013-01-01

    This paper presents a novel indoor navigation and ranging strategy via a monocular camera. By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM. The navigation strategy assumes an indoor or indoor-like manmade environment whose layout is previously unknown and GPS-denied, representable via energy-based feature points and straight architectural lines. We experimentally validate the proposed algorithms on a fully self-contained micro aerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities. Building and enabling such a small aerial vehicle to fly in tight corridors is a significant technological challenge, especially in the absence of GPS signals and with limited sensing options. Experimental results show that the system is only limited by the capabilities of the camera and environmental entropy.
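
    The corridor-ranging idea can be illustrated with a much simpler monocular construction: if the camera's height above a flat floor is known, the image row of a floor feature fixes its range. The sketch below is a generic flat-ground model with invented camera parameters, not the paper's line-based algorithm.

```python
import math

def range_to_floor_point(v_pix, f_pix, cy, cam_height, tilt_rad=0.0):
    """Range along a flat floor to a feature imaged at row v_pix.

    Assumes a calibrated camera at a known height looking down the
    corridor -- a stand-in for the architectural constraints the paper
    exploits, not its actual method.
    """
    # Angle of the pixel ray below the optical axis.
    angle = math.atan2(v_pix - cy, f_pix) + tilt_rad
    if angle <= 0:
        raise ValueError("ray does not intersect the floor")
    return cam_height / math.tan(angle)

# Hypothetical camera: 500 px focal length, principal point row 240,
# mounted 0.5 m above the floor.
d = range_to_floor_point(v_pix=340, f_pix=500.0, cy=240.0, cam_height=0.5)
```

    Features lower in the image (larger `v_pix`) resolve to shorter ranges, which is why such schemes degrade for features near the horizon row.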

  6. Monocular display unit for 3D display with correct depth perception

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    A study of virtual-reality systems has been popular and their technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging displays come in two types: systems that use special glasses and monitor systems that require none. The liquid crystal display (LCD) has recently come into common use, and such a display unit can provide a displaying area the same size as the image screen on the panel. A display system requiring no special glasses is useful as a 3D TV monitor, but it has the demerit that the size of the monitor restricts the visual field for displaying images. The conventional display can thus show only one screen and cannot enlarge it, for example to twice its size. To enlarge the display area, the authors have developed an enlarging method using a mirror. Our extension method enables observers to view a virtual image plane and enlarges the screen area twofold. In the developed display unit, we made use of an image-separating technique using polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and enlarges the screen area twofold. Meanwhile, the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, a useful function for perceiving depth.

  7. How the Venetian Blind Percept Emerges from the Laminar Cortical Dynamics of 3D Vision

    Directory of Open Access Journals (Sweden)

    Stephen Grossberg

    2014-08-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and to show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model shows how identified neurons that interact in hierarchically organized laminar circuits of the visual cortex can simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in surface representations are integrated into surface-shroud resonances between visual and parietal cortex. The model describes how monocular and binocular oriented filtering interacts with later stages of 3D boundary formation and surface filling-in in the lateral geniculate nucleus (LGN) and cortical areas V1, V2, and V4. It proposes how interactions between layers 4, 3B, and 2/3 in V1 and V2 contribute to stereopsis, and how binocular and monocular information combine to form 3D boundary and surface representations. The model suggests how surface-to-boundary feedback from V2 thin stripes to pale stripes enables computationally complementary boundary and surface formation properties to generate a single consistent percept, eliminate redundant 3D boundaries, and trigger figure-ground perception. The model also shows how false binocular boundary matches may be eliminated by Gestalt grouping properties. In particular, a disparity filter, which helps to solve the Correspondence Problem by eliminating false matches, is predicted to be realized as part of the boundary grouping process in layer 2/3 of cortical area V2. The model has been used to simulate the consciously seen 3D surface percepts in 18 psychophysical experiments. These percepts include the Venetian blind effect, Panum's limiting case, contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, and stereopsis with polarity-reversed stereograms.

  8. An Approach for Environment Mapping and Control of Wall Follower Cellbot Through Monocular Vision and Fuzzy System

    OpenAIRE

    Farias, Karoline de M.; Rodrigues Junior, WIlson Leal; Bezerra Neto, Ranulfo P.; Rabelo, Ricardo A. L.; Santana, Andre M.

    2017-01-01

    This paper presents an approach that uses range measurement through homography calculation to build a 2D visual occupancy grid and control the robot through monocular vision. The approach is designed for a Cellbot architecture. The robot is equipped with wall-following behavior to explore the environment, which enables it to trail object contours, with the fuzzy controller responsible for providing commands for the correct execution of the robot's movements while facing the advers...

  9. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Yanchao Dong

    2016-07-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate facial feature points on the 2D image, then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework tracks the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable enough for expression analysis or mental-state inference.
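
    The fusion step, in which 2D landmark measurements correct a predicted state via an Extended Kalman Filter, reduces in its simplest linear form to the standard Kalman update. The toy state and noise values below are illustrative, not the paper's face model:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One EKF-style measurement update: fuse the predicted state (x, P)
    with a 2D landmark observation z. Illustrative linear case, not the
    paper's full face-model Jacobians."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy state: [u, v, du, dv] -- image position and velocity of one landmark.
x = np.array([100.0, 50.0, 0.0, 0.0])
P = np.eye(4)
H = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])  # we observe position only
R = 0.1 * np.eye(2)
x, P = kalman_update(x, P, np.array([102.0, 49.0]), H, R)
```

    The update moves the predicted landmark position partway toward the measurement, weighted by the relative confidence encoded in P and R.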

  10. A flexible approach to light pen calibration for a monocular-vision-based coordinate measuring system

    International Nuclear Information System (INIS)

    Fu, Shuai; Zhang, Liyan; Ye, Nan; Liu, Shenglan; Zhang, WeiZhong

    2014-01-01

    A monocular-vision-based coordinate measuring system (MVB-CMS) obtains the 3D coordinates of the probe tip center of a light pen by analyzing the monocular image of the target points on the light pen. The light pen calibration, including the target point calibration and the probe tip center calibration, is critical to guarantee the accuracy of the MVB-CMS. The currently used method resorts to special equipment to calibrate the feature points on the light pen in a separate offsite procedure and uses the system camera to calibrate the probe tip center onsite. Instead, a complete onsite light pen calibration method is proposed in this paper. It needs only several auxiliary target points with the same visual features as the light pen targets and two or more cone holes with known distance(s). The target point calibration and the probe tip center calibration are jointly implemented by simply taking two groups of images of the light pen with the camera of the system. The proposed method requires no equipment other than the system camera for the calibration, so it is easier to implement and more flexible to use. It has been incorporated in a large field-of-view MVB-CMS, which uses active luminous infrared LEDs as the target points. Experimental results demonstrate the accuracy and effectiveness of the proposed method. (paper)
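
    The probe-tip calibration can be sketched as a classic pivot calibration: with the tip seated in a cone hole, the pen is rocked through several poses, and the tip offset is the least-squares solution of the resulting linear system. This is a generic construction consistent with the cone-hole setup described, not necessarily the paper's exact joint procedure; the synthetic poses below are invented for the check.

```python
import numpy as np

def pivot_calibrate(rotations, translations):
    """Pivot calibration: with the probe tip resting in a cone hole, the
    pen is rocked through poses (R_i, t_i). The tip position in the pen
    frame (p_pen) and in the camera frame (p_cam) satisfy
    R_i @ p_pen + t_i = p_cam for every pose; solve in least squares."""
    A, b = [], []
    for R, t in zip(rotations, translations):
        A.append(np.hstack([R, -np.eye(3)]))
        b.append(-t)
    sol, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    return sol[:3], sol[3:]   # tip in pen frame, tip in camera frame

# Synthetic check: tip at [0, 0, 0.10] m in the pen frame, cone hole at
# [0.20, 0, 0.50] m in the camera frame; poses must vary in rotation axis.
def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

p_pen_true = np.array([0.0, 0.0, 0.10])
p_cam_true = np.array([0.20, 0.0, 0.50])
Rs = [rot_z(a) for a in (0.0, 0.4, 0.8)]
Rs.append(rot_z(0.4) @ np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0.0]]))
ts = [p_cam_true - R @ p_pen_true for R in Rs]
p_pen, p_cam = pivot_calibrate(Rs, ts)
```

    Rotating about more than one axis is essential: poses sharing a fixed axis leave one component of the tip offset unobservable.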

  11. Effects of brief daily periods of unrestricted vision during early monocular form deprivation on development of visual area 2.

    Science.gov (United States)

    Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Harwerth, Ronald S; Smith, Earl L; Chino, Yuzo M

    2011-09-14

    Providing brief daily periods of unrestricted vision during early monocular form deprivation reduces the depth of amblyopia. To gain insights into the neural basis of the beneficial effects of this treatment, the binocular and monocular response properties of neurons were quantitatively analyzed in visual area 2 (V2) of form-deprived macaque monkeys. Beginning at 3 weeks of age, infant monkeys were deprived of clear vision in one eye for 12 hours every day until 21 weeks of age. They received daily periods of unrestricted vision for 0, 1, 2, or 4 hours during the form-deprivation period. After behavioral testing to measure the depth of the resulting amblyopia, microelectrode-recording experiments were conducted in V2. The ocular dominance imbalance away from the affected eye was reduced in the experimental monkeys and was generally proportional to the reduction in the depth of amblyopia in individual monkeys. There were no interocular differences in the spatial properties of V2 neurons in any subject group. However, the binocular disparity sensitivity of V2 neurons was significantly higher and binocular suppression was lower in monkeys that had unrestricted vision. The decrease in ocular dominance imbalance in V2 was the neuronal change most closely associated with the observed reduction in the depth of amblyopia. The results suggest that the degree to which extrastriate neurons can maintain functional connections with the deprived eye (i.e., reducing undersampling for the affected eye) is the most significant factor associated with the beneficial effects of brief periods of unrestricted vision.

  12. [Acute monocular loss of vision: Differential diagnostic considerations apart from the internal medicine etiological workup].

    Science.gov (United States)

    Rickmann, A; Macek, M A; Szurman, P; Boden, K

    2017-08-03

    We report the case of acute painless monocular loss of vision in a 53-year-old man. An interdisciplinary etiological evaluation remained without pathological findings with respect to the arterial branch occlusion. A reevaluation of the patient history suggested a possible association with the administration of a phosphodiesterase type 5 inhibitor (PDE5 inhibitor). A critical review of the literature on PDE5 inhibitor administration with ocular involvement was performed.

  13. How the venetian blind percept emerges from the laminar cortical dynamics of 3D vision.

    Science.gov (United States)

    Cao, Yongqiang; Grossberg, Stephen

    2014-01-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and to show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model proposes how the lateral geniculate nucleus (LGN) and hierarchically organized laminar circuits in cortical areas V1, V2, and V4 interact to control processes of 3D boundary formation and surface filling-in that simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in surface representations are integrated into surface-shroud resonances between visual and parietal cortex. Interactions between layers 4, 3B, and 2/3 in V1 and V2 carry out stereopsis and 3D boundary formation. Both binocular and monocular information combine to form 3D boundary and surface representations. Surface-contour surface-to-boundary feedback from V2 thin stripes to V2 pale stripes combines computationally complementary boundary and surface formation properties, leading to a single consistent percept, while also eliminating redundant 3D boundaries and triggering figure-ground perception. False binocular boundary matches are eliminated by Gestalt grouping properties during boundary formation. In particular, a disparity filter, which helps to solve the Correspondence Problem by eliminating false matches, is predicted to be realized as part of the boundary grouping process in layer 2/3 of cortical area V2. The model has been used to simulate the consciously seen 3D surface percepts in 18 psychophysical experiments. These percepts include the Venetian blind effect, Panum's limiting case, contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, stereopsis with polarity-reversed stereograms, da Vinci stereopsis, and perceptual closure. These model mechanisms have also simulated properties of 3D neon color spreading.

  14. Cross-orientation masking in human color vision: application of a two-stage model to assess dichoptic and monocular sources of suppression.

    Science.gov (United States)

    Kim, Yeon Jin; Gheiratmand, Mina; Mullen, Kathy T

    2013-05-28

    Cross-orientation masking (XOM) occurs when the detection of a test grating is masked by a superimposed grating at an orthogonal orientation, and is thought to reveal the suppressive effects mediating contrast normalization. Medina and Mullen (2009) reported that XOM was greater for chromatic than achromatic stimuli at equivalent spatial and temporal frequencies. Here we address whether the greater suppression found in binocular color vision originates from a monocular or interocular site, or both. We measure monocular and dichoptic masking functions for red-green color contrast and achromatic contrast at three different spatial frequencies (0.375, 0.75, and 1.5 cpd, 2 Hz). We fit these functions with a modified two-stage masking model (Meese & Baker, 2009) to extract the monocular and interocular weights of suppression. We find that the weight of monocular suppression is significantly higher for color than achromatic contrast, whereas dichoptic suppression is similar for both. These effects are invariant across spatial frequency. We then apply the model to the binocular masking data using the measured values of the monocular and interocular sources of suppression and show that these are sufficient to account for color binocular masking. We conclude that the greater strength of chromatic XOM has a monocular origin that transfers through to the binocular site.
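
    The two-stage masking model can be sketched as nested contrast gain-control equations: each eye's test signal is divisively suppressed by the same-eye and other-eye masks (weights w_mono, w_dich), and the binocular sum then passes through a second nonlinearity. The exponents and constants below are illustrative placeholders, not the fitted values of Meese and Baker (2009):

```python
def two_stage_response(c_left, c_right, mask_left, mask_right,
                       w_mono=1.0, w_dich=1.0, m=1.3, p=2.4, q=2.0,
                       s1=1.0, s2=1.0):
    """Schematic two-stage contrast gain control (after Meese & Baker).
    Stage 1: each eye's test contrast is suppressed by the orthogonal
    mask in the same eye (w_mono) and in the other eye (w_dich).
    Stage 2: the binocular sum passes through a second nonlinearity.
    All parameter values here are illustrative, not the paper's fits."""
    def stage1(c, same_mask, other_mask):
        return c**m / (s1 + c + w_mono * same_mask + w_dich * other_mask)

    b = (stage1(c_left, mask_left, mask_right)
         + stage1(c_right, mask_right, mask_left))
    return b**p / (s2 + b**q)

# A higher monocular weight (as fitted for color) deepens masking:
r_weak = two_stage_response(5, 5, 10, 10, w_mono=0.5)
r_strong = two_stage_response(5, 5, 10, 10, w_mono=2.0)
```

    Raising w_mono lowers the response at a fixed dichoptic weight, which is the signature the authors report for chromatic stimuli.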

  15. Monocular zones in stereoscopic scenes: A useful source of information for human binocular vision?

    Science.gov (United States)

    Harris, Julie M.

    2010-02-01

    When an object is closer to an observer than the background, the small differences between right and left eye views are interpreted by the human brain as depth. This basic ability of the human visual system, called stereopsis, lies at the core of all binocular three-dimensional (3-D) perception and related technological display development. To achieve stereopsis, it is traditionally assumed that corresponding locations in the right and left eye's views must first be matched, and the relative differences between right and left eye locations are then used to calculate depth. But this is not the whole story. At every object-background boundary, there are regions of the background that only one eye can see because, in the other eye's view, the foreground object occludes that region of background. Such monocular zones do not have a corresponding match in the other eye's view and can thus cause problems for depth extraction algorithms. In this paper I discuss evidence, from our knowledge of human visual perception, that monocular zones do not pose problems for human vision; rather, our visual systems can extract depth from such zones. I review the relevant human perception literature in this area and show some recent data aimed at quantifying the perception of depth from monocular zones. The paper finishes with a discussion of the potential importance of considering monocular zones for stereo display technology and depth compression algorithms.
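
    The geometry that makes monocular zones informative is simple: a thin occluding edge at depth z in front of a background at distance D hides, from one eye only, a strip whose angular width w grows with the depth gap. Inverting that relation (small-angle, thin-occluder textbook geometry, not a result from this paper) gives the occluder depth implied by a zone width:

```python
def occluder_depth_from_zone(w_rad, interocular_m, background_m):
    """Depth of an occluding edge implied by the angular width of the
    monocular zone it casts on the background.

    Derivation: the zone's linear width on the background is
    I*(D - z)/z, so its angle from the viewer is about I*(D - z)/(z*D);
    solving for z gives z = I*D / (I + w*D). Small-angle, thin-occluder
    approximation.
    """
    I, D = interocular_m, background_m
    return I * D / (I + w_rad * D)

# 6.5 cm interocular distance, background 1 m away, zone 0.065 rad wide:
z = occluder_depth_from_zone(0.065, 0.065, 1.0)
```

    Sanity checks: a zero-width zone puts the edge at the background itself, and wider zones pull the implied edge closer to the viewer.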

  16. Generalization of Figure-Ground Segmentation from Binocular to Monocular Vision in an Embodied Biological Brain Model

    Science.gov (United States)

    2011-08-01

    ...figure and ground the luminance cue breaks down and gestalt contours can fail to pop out. In this case we rely on color, which, having weak stereopsis... (Report fragment; sponsor: U.S. Army Research Office, Research Triangle Park, NC. Subject terms: figure-ground, neural network, object)

  17. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  18. 3D vision in a virtual reality robotics environment

    Science.gov (United States)

    Schutz, Christian L.; Natonek, Emerico; Baur, Charles; Hugli, Heinz

    1996-12-01

    Virtual reality robotics (VRR) needs sensing feedback from the real environment. To show how advanced 3D vision provides new perspectives to fulfill these needs, this paper presents an architecture and system that integrates hybrid 3D vision and VRR and reports on experiments and results. The first section discusses the advantages of virtual reality in robotics, the potential of a 3D vision system in VRR and the contribution of a knowledge database, robust control and the combination of intensity and range imaging to build such a system. Section two presents the different modules of a hybrid 3D vision architecture based on hypothesis generation and verification. Section three addresses the problem of the recognition of complex, free-form 3D objects and shows how and why the newer approaches based on geometric matching solve the problem. This free-form matching can be efficiently integrated in a VRR system as a hypothesis-generation knowledge-based 3D vision system. In the fourth part, we introduce the hypothesis verification based on intensity images, which checks object pose and texture. Finally, we show how this system has been implemented and operates in a practical VRR environment used for an assembly task.

  19. A Monocular Vision Measurement System of Three-Degree-of-Freedom Air-Bearing Test-Bed Based on FCCSP

    Science.gov (United States)

    Gao, Zhanyu; Gu, Yingying; Lv, Yaoyu; Xu, Zhenbang; Wu, Qingwen

    2018-06-01

    A monocular vision-based pose measurement system is provided for real-time measurement of a three-degree-of-freedom (3-DOF) air-bearing test-bed. First, a circular-plane cooperative target is designed, and an image of the target fixed on the test-bed is acquired. Blob-analysis-based image processing is used to detect the object circles on the target, and a fast algorithm (FCCSP) based on pixel statistics is proposed to extract the centers of the object circles. Finally, the pose is obtained by combining the extracted centers with the coordinate transformation relation. Experiments show that the proposed method is fast, accurate, and robust enough to satisfy the requirements of the pose measurement.
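
    The center-extraction step can be illustrated with the simplest pixel-statistics estimator, the centroid of thresholded pixels; the abstract does not give FCCSP's details, so this is only a stand-in for the idea:

```python
import numpy as np

def blob_center(img, thresh=128):
    """Estimate a circle's center as the centroid of bright pixels --
    a pixel-statistics shortcut in the spirit of (but not identical to)
    the FCCSP step, whose details the abstract does not give."""
    ys, xs = np.nonzero(img > thresh)
    if xs.size == 0:
        raise ValueError("no blob above threshold")
    return xs.mean(), ys.mean()

# Draw a synthetic filled circle centered at (12, 7), radius 4, and
# recover its center from pixel statistics alone.
yy, xx = np.mgrid[0:20, 0:25]
img = np.where((xx - 12) ** 2 + (yy - 7) ** 2 <= 16, 255, 0)
cx, cy = blob_center(img)
```

    For a symmetric, fully visible blob the centroid is exact; real systems refine this against noise, partial occlusion and perspective distortion of the circles.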

  20. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Kuo-Lung Huang

    2015-07-01

    The fastest and most economical method of acquiring terrain images is aerial photography, and the use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the underside of the aircraft's nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images and to measure the relative altitude from the ground to the UAV, so the flight system can be set to fly at a fixed and relatively low altitude to obtain ground images of constant resolution. A forward-looking camera is mounted on the top of the aircraft's nose; in combination with the skyline detection algorithm, this helps the aircraft maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution and to detect the relative altitude along the flight path.
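
    The altitude-from-velocity idea amounts to stereo from motion: two frames taken dt apart form a stereo pair with baseline v*dt, so the disparity of ground features yields height. A minimal sketch under level-flight, downward-camera assumptions (all numbers invented):

```python
def altitude_from_motion(v_mps, dt_s, f_pix, disparity_pix):
    """Stereo-from-motion altitude estimate.

    Two frames dt seconds apart act as a stereo pair with baseline
    v*dt; a ground feature shifts by `disparity_pix` between them.
    Assumes level flight and a downward-looking camera -- an
    illustration of the principle, not the paper's full pipeline.
    """
    baseline_m = v_mps * dt_s
    return f_pix * baseline_m / disparity_pix

# 10 m/s ground speed, frames 0.1 s apart, 800 px focal length,
# 16 px feature shift between the two frames:
h = altitude_from_motion(10.0, 0.1, 800.0, 16.0)
```

    The estimate degrades as disparity shrinks, which is why a known, sufficiently fast ground speed matters for this scheme.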

  1. 3D gaze tracking system for NVidia 3D Vision®.

    Science.gov (United States)

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking this health issue into account, understanding how users gaze in 3D directions in virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for NVidia 3D Vision® for use with desktop stereoscopic displays. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
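
    One generic geometric route to a 3D gaze point, not necessarily the paper's optimized method, is to intersect the two eyes' gaze rays, taking the midpoint of the shortest segment between them when they do not meet exactly:

```python
import numpy as np

def gaze_point_3d(o_l, d_l, o_r, d_r):
    """Midpoint of the shortest segment between the two eyes' gaze rays.

    A standard closest-point-between-lines construction (assumes the
    rays are not parallel); offered as a generic illustration of
    triangulating a 3D gaze point on a stereoscopic display.
    """
    d_l = d_l / np.linalg.norm(d_l)
    d_r = d_r / np.linalg.norm(d_r)
    w0 = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b            # zero only for parallel rays
    s = (b * e - c * d) / denom      # parameter along the left ray
    t = (a * e - b * d) / denom      # parameter along the right ray
    return 0.5 * ((o_l + s * d_l) + (o_r + t * d_r))

# Eyes 6 cm apart, both fixating a virtual point 40 cm ahead:
target = np.array([0.0, 0.0, 0.40])
o_l, o_r = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
p = gaze_point_3d(o_l, target - o_l, o_r, target - o_r)
```

    With noisy eye-tracker directions the rays rarely intersect, so the midpoint (or a calibrated refinement of it) is the usual estimate.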

  2. Three-dimensional vision enhances task performance independently of the surgical method.

    Science.gov (United States)

    Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A

    2012-10-01

    Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach. Tasks were performed faster with the robot than with laparoscopy (P = .005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.

  3. Optimization of dynamic envelope measurement system for high speed train based on monocular vision

    Science.gov (United States)

    Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong

    2018-01-01

    The dynamic envelope curve is defined as the maximum limit outline caused by various adverse effects during the running of the train, and it is an important basis for setting railway boundaries. At present, the dynamic envelope curve of high-speed vehicles is mainly measured by binocular vision, but present measuring systems suffer from poor portability, a complicated process and high cost. A new measurement system based on monocular vision measurement theory and an analysis of the test environment is designed in this paper, and the measurement system parameters, the calibration of the camera with a wide field of view, and the calibration of the laser plane are designed and optimized. The accuracy has been verified to be within 2 mm by repeated tests and analysis of the experimental data, validating the feasibility and adaptability of the measurement system. The system offers lower cost, a simpler measurement and data-processing process, and more reliable data, and it needs no matching algorithm.
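
    Monocular laser-plane measurement avoids stereo matching because each pixel on the laser stripe back-projects to a camera ray whose intersection with the calibrated laser plane is a unique 3D point. A minimal sketch with an invented pinhole camera and plane:

```python
import numpy as np

def intersect_ray_plane(pixel, f_pix, c, n, d):
    """Back-project a pixel to a camera ray and intersect it with the
    calibrated laser plane n . X = d.

    Generic pinhole-plus-plane geometry (parameter names and values are
    illustrative, not the paper's calibration).
    """
    u, v = pixel
    ray = np.array([(u - c[0]) / f_pix, (v - c[1]) / f_pix, 1.0])
    t = d / (n @ ray)        # assumes the ray is not parallel to the plane
    return t * ray           # 3D point in the camera frame

# Plane z = 2 m (n = [0, 0, 1], d = 2): a stripe pixel 100 px right of
# the principal point with f = 500 px maps to x = 0.4 m at that depth.
P = intersect_ray_plane((420, 240), 500.0, (320, 240),
                        np.array([0.0, 0.0, 1.0]), 2.0)
```

    This one-camera-plus-plane constraint is what lets such a system claim it "needs no matching algorithm": the plane replaces the second camera.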

  4. Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion.

    Science.gov (United States)

    Leimkuhler, Thomas; Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter

    2018-06-01

    We propose a system to infer binocular disparity from a monocular video stream in real time. Unlike classic reconstruction of physical depth in computer vision, we compute perceptually plausible disparity that is numerically inaccurate but results in a very similar overall depth impression, with plausible overall layout, sharp edges, fine details, and agreement between luminance and disparity. We use several simple monocular cues to estimate disparity maps and confidence maps of low spatial and temporal resolution in real time. These are complemented by spatially varying, appearance-dependent and class-specific disparity prior maps, learned from example stereo images. Scene classification selects this prior at runtime. Fusion of prior and cues is done by means of robust MAP inference on a dense spatio-temporal conditional random field with high spatial and temporal resolution. Using normal distributions allows this in constant-time, parallel per-pixel work. We compare our approach to previous 2D-to-3D conversion systems in terms of different metrics, as well as a user study, and validate our notion of perceptually plausible disparity.
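
    The constant-time fusion the abstract attributes to normal distributions is, at its core, the closed-form product of Gaussians: each cue (or prior) contributes a per-pixel mean and variance, and fusion is a precision-weighted average. A minimal per-pixel sketch (numbers invented, without the CRF's spatio-temporal coupling):

```python
import numpy as np

def fuse_disparity(means, variances):
    """Fuse several Gaussian disparity estimates per pixel.

    The product of Gaussians gives a closed-form precision-weighted
    average -- the property that makes this fusion constant-time,
    parallel per-pixel work. Simplified: omits the paper's CRF coupling
    across space and time.
    """
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    precision = 1.0 / variances
    fused_var = 1.0 / precision.sum(axis=0)
    fused_mean = fused_var * (precision * means).sum(axis=0)
    return fused_mean, fused_var

# Two cues for one pixel: a confident prior (disparity 2, variance 0.5)
# and a noisy motion cue (disparity 6, variance 2.0). The fused estimate
# stays near the confident prior.
m, v = fuse_disparity([2.0, 6.0], [0.5, 2.0])
```

    The same function works on whole maps: pass arrays of shape (n_cues, H, W) and the axis-0 reduction fuses every pixel independently.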

  5. 3D Vision Provides Shorter Operative Time and More Accurate Intraoperative Surgical Performance in Laparoscopic Hiatal Hernia Repair Compared With 2D Vision.

    Science.gov (United States)

    Leon, Piera; Rivellini, Roberta; Giudici, Fabiola; Sciuto, Antonio; Pirozzi, Felice; Corcione, Francesco

    2017-04-01

    The aim of this study is to evaluate whether 3-dimensional high-definition (3D) vision in laparoscopy offers advantages over conventional 2D high-definition vision in hiatal hernia (HH) repair. Between September 2012 and September 2015, we randomized 36 patients affected by symptomatic HH to undergo surgery; 17 patients underwent 2D laparoscopic HH repair, whereas 19 patients underwent the same operation in 3D vision. No conversion to open surgery occurred. Overall operative time was significantly reduced in the 3D laparoscopic group compared with the 2D group (69.9 vs 90.1 minutes, P = .006). Operative time to perform laparoscopic crura closure did not differ significantly between the 2 groups. We observed a tendency toward faster crura closure in the 3D group in the subgroup of patients with mesh positioning (7.5 vs 8.9 minutes, P = .09). Nissen fundoplication was faster in the 3D group without mesh positioning (P = .07). 3D vision in laparoscopic HH repair aids the surgeon's visualization and seems to lead to a reduction in operative time. The advantages can result from the enhanced spatial perception of narrow spaces. Less operative time and more accurate surgery translate to benefits for patients and cost savings, compensating for the high cost of the 3D technology. However, more data from larger series are needed to firmly establish the advantages of 3D over 2D vision in laparoscopic HH repair.

  6. Monocular and binocular development in children with albinism, infantile nystagmus syndrome, and normal vision.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke

    2013-12-01

    To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Interocular acuity differences and binocular summation ratios were compared between groups. Crowding ratios were calculated by dividing the single Landolt C decimal acuity by the crowded Landolt C decimal acuity, both monocularly and binocularly. A linear regression analysis was conducted to investigate the contribution of 5 predictors to the monocular and binocular crowding ratios: nystagmus amplitude, nystagmus frequency, strabismus, astigmatism, and anisometropia. Crowding ratios were higher under both monocular and binocular viewing conditions for children with infantile nystagmus syndrome than for children with normal vision. Children with albinism showed higher crowding ratios in their poorer eye and under binocular viewing conditions than children with normal vision. Children with albinism and children with infantile nystagmus syndrome showed larger interocular acuity differences than children with normal vision (0.1 logMAR in our clinical groups vs 0.0 logMAR in children with normal vision). Binocular summation ratios did not differ between groups. Strabismus and nystagmus amplitude predicted the crowding ratio in the poorer eye (p = 0.015 and p = 0.005, respectively). The crowding ratio in the better eye showed a marginally significant relation with nystagmus frequency and depth of anisometropia (p = 0.082 and p = 0.070, respectively). The binocular crowding ratio was not predicted by any of the variables. Children with albinism and children with infantile nystagmus syndrome show larger interocular acuity differences than children with normal vision. Strabismus and nystagmus amplitude are significant predictors of the crowding ratio in the poorer eye.
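
    The two ratios used in this study are simple quotients. The crowding ratio follows the definition given in the abstract (single divided by crowded decimal acuity); the binocular summation ratio here assumes the common convention of binocular acuity over better-eye acuity, which the abstract does not spell out. The acuity values are hypothetical.

```python
def crowding_ratio(single_decimal_acuity, crowded_decimal_acuity):
    # higher ratio = stronger crowding effect (crowded acuity is worse)
    return single_decimal_acuity / crowded_decimal_acuity

def binocular_summation_ratio(binocular_acuity, better_eye_acuity):
    # assumed convention: > 1 means the two eyes together beat the better eye
    return binocular_acuity / better_eye_acuity

cr = crowding_ratio(1.0, 0.8)            # hypothetical decimal acuities
bsr = binocular_summation_ratio(1.2, 1.0)
```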

  7. An Analytical Measuring Rectification Algorithm of Monocular Systems in Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Deshi Li

    2016-01-01

    Range estimation is crucial for maintaining a safe distance, in particular for vision-based navigation and localization. Monocular autonomous vehicles are appropriate for outdoor environments because of their mobility and operability. However, accurate range estimation with a vision system is challenging because of the nonholonomic dynamics and vibration susceptibility of vehicles. In this paper, a measuring rectification algorithm for range estimation under shaking conditions is designed. The proposed method focuses on how to estimate range using monocular vision when a shake occurs, and the algorithm only requires the pose variations of the camera to be acquired. It simultaneously solves the problem of how to assimilate results from different kinds of sensors. To eliminate measurement errors caused by shakes, we establish a pose-range variation model. The algebraic relation between the distance increment and the camera's pose variation is then formulated. The pose variations are expressed as roll, pitch, and yaw angle changes to evaluate the pixel coordinate increment. To demonstrate the superiority of the proposed algorithm, the approach is validated in a laboratory environment using Pioneer 3-DX robots. The experimental results demonstrate that the proposed approach significantly improves range accuracy.
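
    The abstract does not give the pose-range variation model itself, but a common building block for this kind of rectification is compensating a pure camera rotation (roll, pitch, yaw) through the infinite homography H = K R K^-1, which maps pixels seen by the shaken camera back to the nominal pose. The intrinsics below are hypothetical, and this is a sketch of that building block rather than the paper's algorithm.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """R = Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def rectify_pixel(u, v, K, R):
    """Map a pixel under a pure camera rotation via H = K @ R @ inv(K)."""
    H = K @ R @ np.linalg.inv(K)
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
# identity rotation leaves the pixel unchanged
u, v = rectify_pixel(100.0, 50.0, K, rotation_matrix(0.0, 0.0, 0.0))
# a small 0.01 rad rotation about the y-axis shifts the principal point by f*tan(0.01)
u2, v2 = rectify_pixel(320.0, 240.0, K, rotation_matrix(0.0, 0.01, 0.0))
```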

  8. 3-D Vision Techniques for Autonomous Vehicles

    Science.gov (United States)

    1988-08-01

    3-D Vision Techniques for Autonomous Vehicles. Martial Hebert, Takeo Kanade, In So Kweon. CMU-RI-TR-88-12, The Robotics Institute, Carnegie Mellon University, Pittsburgh.

  9. Evaluation of vision training using 3D play game

    Science.gov (United States)

    Kim, Jung-Ho; Kwon, Soon-Chul; Son, Kwang-Chul; Lee, Seung-Hyun

    2015-03-01

    The present study aimed to examine the vision-training effect obtained as a benefit of watching 3D video images (a 3D video shooting game in this study), focusing on accommodative facility and vergence facility. Both facilities, which are scales used to measure human visual performance, are very important factors for a comfortable and easy life. The study was conducted on 30 participants in their 20s and 30s (19 males and 11 females, aged 24.53 ± 2.94 years) who were able to watch 3D video images and play the 3D game. Their accommodative and vergence facilities were measured before and after they played the 2D and 3D games. Accommodative facility improved after both the 2D and 3D games, and improved more immediately after the 3D game than after the 2D game. Likewise, vergence facility improved after both the 2D and 3D games, and improved more soon after the 3D game than after the 2D game. In addition, accommodative facility improved to a greater extent than vergence facility. While studies have so far been conducted on the adverse effects of 3D content, from the perspective of human factors, on the imbalance between visual accommodation and convergence, the present study is expected to broaden the applicable scope of 3D content by utilizing the visual benefit of 3D content for vision training.

  10. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Moreover, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g., development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information, from which the 3D position of each teat is computed. This information is then sent to the milking robot for teat cup positioning. The vision system runs in real time and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system, with the best performance obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.
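
    The paper's segmentation algorithm is not detailed in the abstract; the sketch below only illustrates the general idea of combining a 2D cue (intensity) with a 3D cue (a depth window) and then back-projecting the masked pixels to obtain a 3D target position. The scene, intrinsics, and thresholds are all synthetic assumptions.

```python
import numpy as np

# synthetic 64x64 frame: a bright target at a known working depth
H, W = 64, 64
intensity = np.zeros((H, W))
depth = np.full((H, W), 2.0)                 # background at 2 m
yy, xx = np.mgrid[0:H, 0:W]
obj = (yy - 32) ** 2 + (xx - 20) ** 2 < 36   # small round target
intensity[obj] = 0.9
depth[obj] = 0.6                             # target at 0.6 m

# combine the 2D cue (brightness) with the 3D cue (expected depth window)
mask = (intensity > 0.5) & (depth > 0.3) & (depth < 1.0)

# back-project masked pixels through an assumed pinhole TOF model
f, cx, cy = 100.0, W / 2, H / 2
zs = depth[mask]
xs = (xx[mask] - cx) * zs / f
ys = (yy[mask] - cy) * zs / f
centroid = np.array([xs.mean(), ys.mean(), zs.mean()])  # 3D target position (m)
```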

  11. Detection and Tracking Strategies for Autonomous Aerial Refuelling Tasks Based on Monocular Vision

    Directory of Open Access Journals (Sweden)

    Yingjie Yin

    2014-07-01

    Detection and tracking strategies based on monocular vision are proposed for autonomous aerial refuelling tasks. The drogue attached to the fuel tanker aircraft has two important features: the grey values of the drogue's inner part differ from those of the external umbrella ribs, as seen in the image, and the shape of the drogue's inner dark part is nearly circular. Based on this crucial prior knowledge, rough and fine positioning algorithms are designed to detect the drogue, and a particle filter based on the drogue's shape is proposed to track it. A strategy to switch between detection and tracking is proposed to improve the robustness of the algorithms. The inner dark part of the drogue is segmented precisely during detection and tracking, and the segmented circular part can be used to measure its spatial position. The experimental results show that the proposed method performs in real time with satisfactory robustness and positioning accuracy.
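
    Both prior-knowledge cues named above (darker grey values, nearly circular shape) can be checked cheaply. As a stand-in for the paper's rough/fine positioning, the sketch below thresholds dark pixels and judges circularity from the eigenvalue ratio of the blob's second-moment matrix (1.0 for a perfect disc). The threshold, roundness cutoff, and image are assumptions.

```python
import numpy as np

def detect_dark_disc(img, thresh=0.3, roundness_min=0.9):
    """Find the centroid of a dark, nearly circular blob, or None."""
    ys, xs = np.nonzero(img < thresh)
    if len(xs) == 0:
        return None
    pts = np.stack([xs, ys]).astype(float)
    centroid = pts.mean(axis=1)                # (x, y)
    evals = np.linalg.eigvalsh(np.cov(pts))    # ascending eigenvalues
    roundness = evals[0] / evals[1]            # 1.0 == isotropic == circular
    if roundness < roundness_min:
        return None
    return centroid, roundness

# synthetic frame: bright background, dark circular inner part at (55, 40)
img = np.ones((80, 80))
yy, xx = np.mgrid[0:80, 0:80]
img[(yy - 40) ** 2 + (xx - 55) ** 2 < 100] = 0.1
result = detect_dark_disc(img)
```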

  12. Interlopers 3D: experiences designing a stereoscopic game

    Science.gov (United States)

    Weaver, James; Holliman, Nicolas S.

    2014-03-01

    Background: In recent years 3D-enabled televisions, VR headsets and computer displays have become more readily available in the home. This presents an opportunity for game designers to explore new stereoscopic game mechanics and techniques that have previously been unavailable in monocular gaming. Aims: To investigate the visual cues that are present in binocular and monocular vision, identifying which are relevant when gaming using a stereoscopic display, and to implement a game whose mechanics are so reliant on binocular cues that the game becomes impossible, or at least very difficult, to play in non-stereoscopic mode. Method: A stereoscopic 3D game was developed whose objective was to shoot down advancing enemies (the Interlopers) before they reached their destination. Scoring highly required players to make accurate depth judgments and target the closest enemies first. A group of twenty participants played both a basic and an advanced version of the game in both monoscopic 2D and stereoscopic 3D. Results: In both the basic and the advanced game, participants achieved higher scores when playing in stereoscopic 3D. The advanced game showed that by disrupting the depth-from-motion cue the game became more difficult in monoscopic 2D. The results also show a certain amount of learning taking place, meaning that players scored higher and finished the game faster over the course of the experiment. Conclusions: Although the game was not impossible to play in monoscopic 2D, participants' results show that it put them at a significant disadvantage compared with playing in stereoscopic 3D.

  13. Stereo-vision and 3D reconstruction for nuclear mobile robots

    International Nuclear Information System (INIS)

    Lecoeur-Taibi, I.; Vacherand, F.; Rivallin, P.

    1991-01-01

    In order to perceive the geometric structure of the environment surrounding a mobile robot, a 3D reconstruction system has been developed. Its main purpose is to provide geometric information to an operator who has to telepilot the vehicle in a nuclear power plant. The perception system is split into two parts: the vision part and the map-building part. Vision is enhanced with a fusion process that rejects bad samples over space and time. The vision is based on trinocular stereo-vision, which provides a range image of the image contours. It performs line contour correlation on horizontal image pairs and vertical image pairs. The results are then spatially fused in order to obtain one distance image, with a quality independent of the orientation of the contour. The 3D reconstruction is based on grid-based sensor fusion. As the robot moves and perceives its environment, distance data are accumulated onto a regular square grid, taking into account the uncertainty of the sensor through a statistical sensor measurement model. This approach allows both spatial and temporal fusion. Uncertainty due to sensor position and robot position is also integrated into the absolute local map. This system is modular and generic, and can integrate a 2D laser range finder and active vision. (author)
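
    Grid-based sensor fusion with spatial and temporal accumulation, as described above, is commonly formulated as a log-odds occupancy grid. The sketch below uses that standard formulation with assumed sensor-model increments; it is not the paper's specific statistical model.

```python
import numpy as np

L_HIT, L_MISS = 0.85, -0.4   # assumed log-odds increments of the sensor model

def update_grid(logodds, hits, misses):
    """Accumulate evidence over time; clamping keeps cells responsive."""
    logodds[hits] += L_HIT
    logodds[misses] += L_MISS
    np.clip(logodds, -4.0, 4.0, out=logodds)
    return logodds

def probability(logodds):
    return 1.0 / (1.0 + np.exp(-logodds))

grid = np.zeros((10, 10))                      # log-odds 0 == p 0.5 (unknown)
wall = np.zeros((10, 10), bool); wall[5, 5] = True    # range return
free = np.zeros((10, 10), bool); free[5, 0:5] = True  # cells along the ray

for _ in range(6):                             # six consistent range readings
    update_grid(grid, wall, free)

p = probability(grid)
```

    Repeated agreeing measurements drive the hit cell toward occupied and the traversed cells toward free, while unobserved cells stay at 0.5.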

  14. The monocular visual imaging technology model applied in the airport surface surveillance

    Science.gov (United States)

    Qin, Zhe; Wang, Jian; Huang, Chao

    2013-08-01

    At present, civil aviation airports use surface surveillance radar systems to monitor and position aircraft, vehicles and other moving objects. Surface surveillance radars cover most of the airport scene, but because of the geometry of terminals, covered bridges and other buildings, they inevitably have some small blind spots. This paper presents a monocular vision imaging technology model for airport surface surveillance that perceives the location of moving objects in the scene, such as aircraft, vehicles and personnel. This new model provides an important complement to airport surface surveillance and differs from traditional surface surveillance radar techniques. The technique not only provides the ATC with a clear view of object activities, but also provides image recognition and positioning of moving targets in the area. It can thereby improve the efficiency of airport operations and help avoid conflicts between aircraft and vehicles. This paper first introduces the monocular visual imaging technology model applied to airport surface surveillance and then analyzes the measurement accuracy of the model. The monocular visual imaging technology model is simple, low cost, and highly efficient. It is an advanced monitoring technique which can cover the blind spots of surface surveillance radar monitoring and positioning systems.
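
    A monocular camera can only position a surface target by assuming it sits on the ground plane; range then follows from the camera height, tilt, and the image row of the target. The sketch below uses that flat-ground model and propagates a one-pixel error into a range error, the kind of measurement accuracy analysis the paper performs. Camera parameters are hypothetical.

```python
import numpy as np

def ground_range(v, f, cy, cam_height, tilt):
    """Distance along the ground to the point imaged at row v.

    Flat-ground assumption: the ray through row v meets the ground at a
    depression angle of tilt + atan((v - cy) / f) below horizontal.
    """
    angle = tilt + np.arctan((v - cy) / f)
    return cam_height / np.tan(angle)

f, cy = 1200.0, 512.0                 # assumed focal length (px), principal row
h, tilt = 20.0, np.deg2rad(10)        # 20 m mast, 10 deg downward tilt

d_center = ground_range(cy, f, cy, h, tilt)         # range on the principal ray
# one-pixel uncertainty near the principal point -> range error (m)
d_err = d_center - ground_range(cy + 1, f, cy, h, tilt)
```

    The error grows with range (roughly as h / sin^2 of the depression angle per radian of pixel angle), which is why monocular positioning degrades for distant targets.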

  15. Effect of Vision Therapy on Accommodation in Myopic Chinese Children

    Directory of Open Access Journals (Sweden)

    Martin Ming-Leung Ma

    2016-01-01

    Introduction. We evaluated the effectiveness of office-based accommodative/vergence therapy (OBAVT) with home reinforcement to improve accommodative function in myopic children with poor accommodative response. Methods. This was a prospective unmasked pilot study. 14 Chinese myopic children aged 8 to 12 years with at least 1 D of lag of accommodation were enrolled. All subjects received 12 weeks of 60-minute OBAVT with home reinforcement. The primary outcome measure was the change in monocular lag of accommodation from the baseline visit to the 12-week visit, measured by a Shin-Nippon open-field autorefractor. Secondary outcome measures were the changes in accommodative amplitude and monocular accommodative facility. Results. All participants completed the study. The lag of accommodation was 1.29 ± 0.21 D at the baseline visit and was reduced to 0.84 ± 0.19 D at the 12-week visit. This difference (−0.46 ± 0.22 D; 95% confidence interval: −0.33 to −0.58 D) is statistically significant (p < 0.0001). OBAVT also increased the amplitude and facility by 3.66 ± 3.36 D (p = 0.0013; 95% confidence interval: 1.72 to 5.60 D) and 10.9 ± 4.8 cpm (p < 0.0001; 95% confidence interval: 8.1 to 13.6 cpm), respectively. Conclusion. A standardized 12 weeks of OBAVT with home reinforcement is able to significantly reduce monocular lag of accommodation and increase monocular accommodative amplitude and facility. A randomized clinical trial designed to investigate the effect of vision therapy on myopia progression is warranted.

  16. 3D vision upgrade kit for TALON robot

    Science.gov (United States)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-04-01

    In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat-panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the existing mounting and electrical connections. The upgrade kit comprises a replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  17. Automatic Plant Annotation Using 3D Computer Vision

    DEFF Research Database (Denmark)

    Nielsen, Michael

    In this thesis 3D reconstruction was investigated for application in precision agriculture, where previous work focused on low-resolution index maps in which each pixel represents an area in the field and the index represents an overall crop status in that area. 3D reconstructions of plants would allow...... reconstruction in occluded areas. The trinocular setup was used for both window-correlation-based and energy-minimization-based algorithms. A novel adaptation of the symmetric multiple windows algorithm with trinocular vision was developed. The results were promising and allowed for better disparity estimations...... on steeply sloped surfaces. Also, a novel adaptation of a well-known graph-cut-based disparity estimation algorithm with trinocular vision was developed and tested. The results were successful and allowed for better disparity estimations on steeply sloped surfaces. After finding the disparity maps each...

  18. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the current availability in resource-rich regions of advanced scanning and 3-D imaging technologies in ophthalmology practice, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences can yield acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
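
    Once cup and disc boundaries are segmented, CAR and CDR reduce to area arithmetic on the two masks. In this sketch the CDR is approximated via equivalent-circle diameters derived from the areas; clinical practice often uses the vertical diameter instead, so treat that choice, and the synthetic masks, as assumptions.

```python
import numpy as np

def cup_disc_ratios(cup_mask, disc_mask):
    """CAR = cup area / disc area; CDR from equivalent-circle diameters.

    Equivalent diameter of a region with area A is d = 2*sqrt(A/pi),
    so the diameter ratio is sqrt of the area ratio.
    """
    a_cup = float(cup_mask.sum())
    a_disc = float(disc_mask.sum())
    car = a_cup / a_disc
    cdr = np.sqrt(a_cup / a_disc)
    return car, cdr

# synthetic segmentation: concentric cup (r=40 px) and disc (r=80 px)
yy, xx = np.mgrid[0:200, 0:200]
disc = (yy - 100) ** 2 + (xx - 100) ** 2 < 80 ** 2
cup = (yy - 100) ** 2 + (xx - 100) ** 2 < 40 ** 2
car, cdr = cup_disc_ratios(cup, disc)
```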

  19. Fiber optic coherent laser radar 3D vision system

    International Nuclear Information System (INIS)

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-01-01

    This coherent laser vision system (CLVS) will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic based scanner and operates at a 128 x 128 pixel frame at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second, such as decontamination and decommissioning operations in which robotic systems are altering the scene, as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  20. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras

    OpenAIRE

    Mur-Artal, Raul; Tardos, Juan D.

    2016-01-01

    We present ORB-SLAM2, a complete SLAM system for monocular, stereo and RGB-D cameras, including map reuse, loop closing and relocalization capabilities. The system works in real-time on standard CPUs in a wide variety of environments, from small hand-held indoor sequences, to drones flying in industrial environments and cars driving around a city. Our back-end based on bundle adjustment with monocular and stereo observations allows for accurate trajectory estimation with metric scale. Our syst...

  1. Fractal tomography and its application in 3D vision

    Science.gov (United States)

    Trubochkina, N.

    2018-01-01

    A three-dimensional artistic fractal tomography method that implements glasses-free 3D visualization of fractal worlds in layered media is proposed. It is designed for the glasses-free 3D viewing of digital art objects and films containing fractal content. Prospects for the development of this method in art galleries and the film industry are considered.

  2. Transformation of light double cones in the human retina: the origin of trichromatism, of 4D-spatiotemporal vision, and of patchwise 4D Fourier transformation in Talbot imaging

    Science.gov (United States)

    Lauinger, Norbert

    1997-09-01

    The interpretation of the 'inverted' retina of primates as an 'optoretina' (a light cones transforming diffractive cellular 3D-phase grating) integrates the functional, structural, and oscillatory aspects of a cortical layer. It is therefore relevant to consider prenatal developments as a basis of the macro- and micro-geometry of the inner eye. This geometry becomes relevant for the postnatal trichromatic synchrony organization (TSO) as well as the adaptive levels of human vision. It is shown that the functional performances, the trichromatism in photopic vision, the monocular spatiotemporal 3D- and 4D-motion detection, as well as the Fourier optical image transformation with extraction of invariances all become possible. To transform light cones into reciprocal gratings especially the spectral phase conditions in the eikonal of the geometrical optical imaging before the retinal 3D-grating become relevant first, then in the von Laue resp. reciprocal von Laue equation for 3D-grating optics inside the grating and finally in the periodicity of Talbot-2/Fresnel-planes in the near-field behind the grating. It is becoming possible to technically realize -- at least in some specific aspects -- such a cortical optoretina sensor element with its typical hexagonal-concentric structure which leads to these visual functions.

  3. GPU-accelerated 3-D model-based tracking

    International Nuclear Information System (INIS)

    Brown, J Anthony; Capson, David W

    2010-01-01

    Model-based approaches to tracking the pose of a 3-D object in video are effective but computationally demanding. While statistical estimation techniques, such as the particle filter, are often employed to minimize the search space, real-time performance remains unachievable on current generation CPUs. Recent advances in graphics processing units (GPUs) have brought massively parallel computational power to the desktop environment and powerful developer tools, such as NVIDIA Compute Unified Device Architecture (CUDA), have provided programmers with a mechanism to exploit it. NVIDIA GPUs' single-instruction multiple-thread (SIMT) programming model is well-suited to many computer vision tasks, particularly model-based tracking, which requires several hundred 3-D model poses to be dynamically configured, rendered, and evaluated against each frame in the video sequence. Using 6 degree-of-freedom (DOF) rigid hand tracking as an example application, this work harnesses consumer-grade GPUs to achieve real-time, 3-D model-based, markerless object tracking in monocular video.
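
    The render-and-compare loop described above (configure many candidate poses, render each, score it against the frame) maps naturally onto particle-filter weighting. The toy below runs it on the CPU with numpy, with a 2D translation-only "pose" and a square as the "model", whereas the paper renders 6-DOF hand poses on a CUDA GPU; the Gaussian SSD likelihood is also an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(pose, size=32):
    """Toy 'model renderer': a bright 6x6 square at integer pose (x, y)."""
    img = np.zeros((size, size))
    x, y = int(round(pose[0])), int(round(pose[1]))
    img[y:y + 6, x:x + 6] = 1.0
    return img

true_pose = np.array([18.0, 12.0])
frame = render(true_pose)                      # observed video frame

# each particle is a candidate pose; its weight comes from rendering the
# model at that pose and comparing against the frame (SSD likelihood)
particles = rng.uniform(0, 26, size=(4000, 2))
ssd = np.array([((render(p) - frame) ** 2).sum() for p in particles])
weights = np.exp(-0.5 * ssd)
weights /= weights.sum()

estimate = (weights[:, None] * particles).sum(axis=0)  # weighted mean pose
```

    On a GPU each particle's render-and-score step would run in its own thread block, which is exactly the SIMT fit the abstract describes.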

  4. Impact of 3D vision on mental workload and laparoscopic performance in inexperienced subjects.

    Science.gov (United States)

    Gómez-Gómez, E; Carrasco-Valiente, J; Valero-Rosa, J; Campos-Hernández, J P; Anglada-Curado, F J; Carazo-Carazo, J L; Font-Ugalde, P; Requena-Tapia, M J

    2015-05-01

    To assess the effect of vision in three dimensions (3D) versus two dimensions (2D) on mental workload and laparoscopic performance during simulation-based training. A prospective, randomized crossover study of students inexperienced in operative laparoscopy was conducted. Forty-six candidates executed five standardized exercises on a pelvitrainer with both vision systems (3D and 2D). Laparoscopic performance was assessed using the total time (in seconds) and the number of failed attempts. For workload assessment, the validated NASA-TLX questionnaire was administered. 3D vision improved performance by reducing the time required (3D = 1006.08 ± 315.94 vs. 2D = 1309.17 ± 300.28). According to the NASA-TLX results, less mental workload is experienced with the use of 3D (P < .001). However, 3D vision was associated with greater visual impairment (P < .01) and headaches (P < .05). The incorporation of 3D systems into laparoscopic training programs would facilitate the acquisition of laparoscopic skills, because they reduce mental workload and improve the performance of inexperienced surgeons. However, some undesirable effects such as visual discomfort or headache are identified initially. Copyright © 2014 AEU. Published by Elsevier España, S.L.U. All rights reserved.

  5. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss

    Science.gov (United States)

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high-quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful, as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity, comparing early postnatal stages up to adulthood. The structural and physiological consequences of this type of extensive sensory loss, as documented and studied in several animal species and human patients, will be discussed. We summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research. PMID:25972788

  6. 3D Machine Vision and Additive Manufacturing: Concurrent Product and Process Development

    International Nuclear Information System (INIS)

    Ilyas, Ismet P

    2013-01-01

    The manufacturing environment changes rapidly and turbulently. Digital manufacturing (DM) plays a significant role and is one of the key strategies in setting up vision and strategic planning toward knowledge-based manufacturing. An approach combining 3D machine vision (3D-MV) and additive manufacturing (AM) may finally be finding its niche in manufacturing. This paper briefly overviews the integration of 3D machine vision and AM in concurrent product and process development, the challenges and opportunities, and the implementation of 3D-MV and AM at POLMAN Bandung to accelerate product design and process development, and discusses a direct deployment of this approach on a real case from our industrial partners, who consider it one of the most important and strategic approaches in research as well as product/prototype development.

  7. Enhanced operator perception through 3D vision and haptic feedback

    Science.gov (United States)

    Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren

    2012-06-01

    Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.

  8. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

    Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when its parameters are known (i.e. principal distance, lens distortion, focal length, etc.). In this paper we deal with a single-camera calibration method and use it to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment Matlab.
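
    The paper's Matlab method is not specified in the abstract; a generic way to recover intrinsic and extrinsic parameters from known 3D-2D correspondences is the direct linear transform (DLT). For clarity this sketch assumes an axis-aligned camera (rotation = identity), so the left 3x3 block of the estimated projection matrix is directly the intrinsic matrix K up to scale; a general pose would additionally need an RQ decomposition. All numbers are synthetic.

```python
import numpy as np

def project(K, t, X):
    """Pinhole projection of 3D points X with R = I and translation t."""
    x = (K @ (X + t).T).T
    return x[:, :2] / x[:, 2:3]

def dlt(X, x):
    """Estimate the 3x4 projection matrix from >= 6 exact 3D-2D matches."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        rows += [[Xw, Yw, Zw, 1, 0, 0, 0, 0, -u*Xw, -u*Yw, -u*Zw, -u],
                 [0, 0, 0, 0, Xw, Yw, Zw, 1, -v*Xw, -v*Yw, -v*Zw, -v]]
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 4)          # null vector = flattened P

# synthetic non-coplanar calibration target (metres)
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(12, 3))
K_true = np.array([[900.0, 0, 320], [0, 880.0, 240], [0, 0, 1]])
t = np.array([0.0, 0.0, 5.0])            # camera 5 m back, axis-aligned
x = project(K_true, t, X)

P = dlt(X, x)
M = P[:, :3] / P[2, 2]                   # with R = I: intrinsics K up to scale
t_est = np.linalg.solve(M, P[:, 3] / P[2, 2])   # extrinsic translation
```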

  9. Improving automated 3D reconstruction methods via vision metrology

    Science.gov (United States)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
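
    The metrological evaluation described above checks the dense point cloud against known geometric shapes. As an illustration of that idea (not the VDI/VDE procedure itself), the sketch below fits a sphere to noisy points with the standard linear least-squares parameterization and reports the RMS residual as a quality parameter; the test object and noise level are synthetic.

```python
import numpy as np

def fit_sphere(pts):
    """Linear LSQ sphere fit using ||p||^2 = 2*c.p + (r^2 - ||c||^2)."""
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    residuals = np.linalg.norm(pts - center, axis=1) - radius
    return center, radius, residuals

# synthetic reconstruction of a 50 mm-radius reference sphere with noise
rng = np.random.default_rng(2)
n = 500
dirs = rng.normal(size=(n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true_c, true_r, sigma = np.array([0.1, -0.2, 1.5]), 0.05, 1e-4
pts = true_c + (true_r + rng.normal(0, sigma, n))[:, None] * dirs

center, radius, res = fit_sphere(pts)
rms = np.sqrt((res ** 2).mean())         # quality parameter (m)
```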

  10. Anisometropia and ptosis in patients with monocular elevation deficiency

    International Nuclear Information System (INIS)

    Zafar, S.N.; Islam, F.; Khan, A.M.

    2016-01-01

    Objective: To determine the effect of ptosis on the refractive error in eyes having monocular elevation deficiency. Place and Duration of Study: Al-Shifa Trust Eye Hospital, Rawalpindi, from January 2011 to January 2014. Methodology: Visual acuity, refraction, orthoptic assessment and ptosis evaluation of all patients having monocular elevation deficiency (MED) were recorded. The Shapiro-Wilk test was used to test for normality. Median and interquartile range (IQR) were calculated for the data. Non-parametric variables were compared using the Wilcoxon signed ranks test. P-values of <0.05 were considered significant. Results: A total of 41 MED patients were assessed during the study period. Best corrected visual acuity (BCVA) and refractive error were compared between the eyes having MED and the unaffected eyes of the same patient. The refractive status of patients having ptosis with MED was also compared with that of those having MED without ptosis. Astigmatic correction and vision differed significantly between the two eyes of the patients. Vision was significantly different between the two eyes of patients in both groups, whether ptosis was present or absent (p=0.04 and p < 0.001, respectively). Conclusion: A significant difference in vision and anisoastigmatism was noted between the two eyes of patients with MED in this study. The presence or absence of ptosis affected the vision but did not have a significant effect on the spherical equivalent (SE) and astigmatic correction between the two eyes. (author)

  11. Distance and velocity estimation using optical flow from a monocular camera

    NARCIS (Netherlands)

    Ho, H.W.; de Croon, G.C.H.E.; Chu, Q.

    2016-01-01

    Monocular vision is increasingly used in Micro Air Vehicles for navigation. In particular, optical flow, inspired by flying insects, is used to perceive vehicles’ movement with respect to the surroundings or sense changes in the environment. However, optical flow does not directly provide us the
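
    The abstract is cut off where it notes that optical flow does not directly provide metric quantities. One quantity an expanding flow field does provide directly is its divergence, whose inverse is the time-to-contact (the distance-to-velocity ratio). A toy sketch under a pure-expansion assumption, with synthetic flow values (not data from the paper):

    ```python
    # Sketch: time-to-contact from the divergence of an expanding
    # optical-flow field, in the spirit of insect-inspired flow-based
    # navigation. Synthetic values; not the authors' estimator.

    def flow_divergence(points, flows):
        # For pure expansion about the focus, flow magnitude grows
        # linearly with distance from the centre: f = D * p.
        # Estimate D by least squares over all samples.
        num = sum(fx * x + fy * y for (x, y), (fx, fy) in zip(points, flows))
        den = sum(x * x + y * y for (x, y) in points)
        return num / den

    # Camera approaching a wall: divergence D = v/Z = 0.5 per second
    pts = [(1.0, 0.0), (0.0, 2.0), (-1.0, -1.0)]
    flows = [(0.5 * x, 0.5 * y) for x, y in pts]
    D = flow_divergence(pts, flows)
    print(round(1.0 / D, 2))  # time to contact in seconds: 2.0
    ```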

  12. Estimated Prevalence of Monocular Blindness and Monocular ...

    African Journals Online (AJOL)

    with MB/MSVI; among the 109 (51%) children with MB/MSVI that had a known etiology, trauma. Table 1, major anatomical site of monocular blindness and monocular severe visual impairment in children, anatomical cause vs. total (%): corneal scar 89 (42); whole globe 43 (20); lens 42 (19); amblyopia 16 (8); retina 9 (4).

  13. Vector model for mapping of visual space to subjective 4-D sphere

    International Nuclear Information System (INIS)

    Matuzevicius, Dalius; Vaitkevicius, Henrikas

    2014-01-01

    Here we present a mathematical model of binocular vision that maps the visible physical world to a subjective perception of it. The subjective space is a set of 4-D vectors whose components are the outputs of four monocular neurons from each of the two eyes. Monocular neurons have one of four types of concentric receptive fields with Gabor-like weighting coefficients. Next, this vector representation of binocular vision is implemented as a pool of neurons, each of which is selective to an object's particular location in 3-D visual space. Formally, each point of the visual space is projected onto a 4-D sphere. The proposed model allows determination of subjective distances in depth and direction, provides computational means for determining Panum's area, and explains diplopia and allelotropia.

  14. Performance evaluation of 3D vision-based semi-autonomous control method for assistive robotic manipulator.

    Science.gov (United States)

    Ka, Hyun W; Chung, Cheng-Shiu; Ding, Dan; James, Khara; Cooper, Rory

    2018-02-01

    We developed a 3D vision-based semi-autonomous control interface for assistive robotic manipulators. It was implemented on one of the most popular commercially available assistive robotic manipulators, combined with a low-cost depth-sensing camera mounted on the robot base. To perform a manipulation task with the 3D vision-based semi-autonomous control interface, a user starts operating with a manual control method available to him/her. When detecting objects within a set range, the control interface automatically stops the robot and provides the user with possible manipulation options through audible text output, based on the detected object characteristics. Then, the system waits until the user states a voice command. Once the user command is given, the control interface drives the robot autonomously until the given command is completed. In the empirical evaluations conducted with human subjects from two different groups, it was shown that semi-autonomous control can be used as an alternative control method to enable individuals with impaired motor control to operate the robot arms more efficiently by facilitating their fine motion control. The advantage of semi-autonomous control was not obvious for simple tasks, but for relatively complex real-life tasks the 3D vision-based semi-autonomous control showed significantly faster performance. Implications for Rehabilitation: A 3D vision-based semi-autonomous control interface will improve clinical practice by providing an alternative control method that is less demanding physically as well as cognitively. A 3D vision-based semi-autonomous control provides the user with task-specific intelligent semi-autonomous manipulation assistance. A 3D vision-based semi-autonomous control gives the user the feeling that he or she is still in control at any moment. A 3D vision-based semi-autonomous control is compatible with different types of new and existing manual control methods for ARMs.

  15. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Directory of Open Access Journals (Sweden)

    Yi-Ting Chen

    2015-05-01

    Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servoing is a technique for vision-based robot control which operates in the 3D workspace, uses real-time image processing to perform feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden of the vision sensor feedback, we design an FPGA-based motion-vision integrated system that employs dedicated hardware circuits for vision processing and motion control functions. This research conducts a preliminary study exploring the integration of 3D vision and robot motion control system design on a single field programmable gate array (FPGA) chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multiple-axis position feedback control.
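
    One of the motion functions listed above, velocity profile generation, can be illustrated with a simple trapezoidal profile. This is a generic software sketch, not the circuit implemented on the FPGA; all parameters are invented:

    ```python
    # Sketch: trapezoidal velocity profile (accelerate, cruise, decelerate)
    # for a single axis move. Illustrative parameters, not from the paper.

    def trapezoid_velocity(t, v_max, a, t_total):
        """Velocity at time t for a ramp-up / cruise / ramp-down move."""
        t_acc = v_max / a               # time needed to reach cruise speed
        if t < t_acc:                   # ramp up
            return a * t
        if t > t_total - t_acc:         # ramp down
            return max(0.0, a * (t_total - t))
        return v_max                    # cruise

    # Sample the profile every 0.5 s over a 6 s move
    profile = [trapezoid_velocity(t * 0.5, v_max=2.0, a=1.0, t_total=6.0)
               for t in range(13)]
    print(profile[0], profile[4], profile[12])  # 0.0 2.0 0.0
    ```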

  16. Distance and velocity estimation using optical flow from a monocular camera

    NARCIS (Netherlands)

    Ho, H.W.; de Croon, G.C.H.E.; Chu, Q.

    2017-01-01

    Monocular vision is increasingly used in micro air vehicles for navigation. In particular, optical flow, inspired by flying insects, is used to perceive vehicle movement with respect to the surroundings or sense changes in the environment. However, optical flow does not directly provide us the

  17. Visual Suppression of Monocularly Presented Symbology Against a Fused Background in a Simulation and Training Environment

    National Research Council Canada - National Science Library

    Winterbottom, Marc D; Patterson, Robert; Pierce, Byron J; Taylor, Amanda

    2006-01-01

    .... This may create interocular differences in image characteristics that could disrupt binocular vision by provoking visual suppression, thus reducing visibility of the background scene, monocular symbology...

  18. How the Venetian Blind Percept Emerges from the Laminar Cortical Dynamics of 3D Vision

    OpenAIRE

    Stephen Grossberg

    2014-01-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model shows how identified neurons that interact in hierarchically organized laminar circuits of the visual cortex can simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in s...

  19. Monocular Vision-Based Robot Localization and Target Tracking

    Directory of Open Access Journals (Sweden)

    Bing-Fei Wu

    2011-01-01

    Full Text Available This paper presents a vision-based technology for localizing targets in a 3D environment. It is achieved by combining different types of sensors: optical wheel encoders, an electrical compass, and visual observations with a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate target locations and the robot pose simultaneously. The proposed localization system is practical because it does not require initialization from artificial landmarks of known size. The technique is especially suitable for navigation and target tracking for an indoor robot, and has high potential for extension to surveillance and monitoring for Unmanned Aerial Vehicles with aerial odometry sensors. The experimental results demonstrate centimetre-level accuracy in localizing targets in an indoor environment under high-speed robot movement.
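
    The sensor-fusion idea behind the extended Kalman filter described above can be illustrated with a scalar Kalman filter fusing odometry increments with noisy position fixes. A toy stand-in for the paper's full EKF; the noise values are assumed:

    ```python
    # Minimal 1D Kalman filter sketch: fuse wheel-odometry predictions
    # with noisy position observations. Scalar stand-in for an EKF;
    # process noise q and measurement noise r are illustrative.

    def kf_step(x, p, u, z, q=0.1, r=0.5):
        # Predict: apply odometry increment u, inflate uncertainty
        x_pred, p_pred = x + u, p + q
        # Update: blend in observation z via the Kalman gain k
        k = p_pred / (p_pred + r)
        x_new = x_pred + k * (z - x_pred)
        p_new = (1 - k) * p_pred
        return x_new, p_new

    x, p = 0.0, 1.0  # initial estimate and variance
    for u, z in [(1.0, 1.2), (1.0, 2.1), (1.0, 2.9)]:
        x, p = kf_step(x, p, u, z)
    print(abs(x - 3.0) < 0.2, p < 1.0)  # True True
    ```

    The real system does the same blend in higher dimensions, with the camera bearing measurements entering through a linearized observation model.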

  20. How the venetian blind percept emerges from the laminar cortical dynamics of 3D vision

    OpenAIRE

    Cao, Yongqiang; Grossberg, Stephen

    2014-01-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and to show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model proposes how lateral geniculate nucleus (LGN) and hierarchically organized laminar circuits in cortical areas V1, V2, and V4 interact to control processes of 3D boundary formation and surface filling-in that simulate many properties of 3D vision ...

  1. Prevalence of color vision deficiency among arc welders.

    Science.gov (United States)

    Heydarian, Samira; Mahjoob, Monireh; Gholami, Ahmad; Veysi, Sajjad; Mohammadi, Morteza

    This study was performed to investigate whether occupationally related color vision deficiency can occur from welding. A total of 50 male welders, who had been working as welders for at least 4 years, were randomly selected as the case group, and 50 age-matched non-welder men, who lived in the same area, served as the control group. Color vision was assessed using the Lanthony desaturated panel D-15 test. The test was performed under a daylight fluorescent lamp with a spectral distribution of energy with a color temperature of 6500 K and a color rendering index of 94, providing 1000 lx on the work plane. The test was carried out monocularly and no time limit was imposed. All data analyses were performed using SPSS, version 22. The prevalence of dyschromatopsia among welders was 15%, which was statistically higher than that of the non-welder group (2%) (p=0.001). Among welders with dyschromatopsia, color vision deficiency in 72.7% of cases was monocular. There was a positive relationship between the employment length and color vision loss (p=0.04). Similarly, a significant correlation was found between the prevalence of color vision deficiency and average working hours of welding a day (p=0.025). Chronic exposure to welding light may cause color vision deficiency. The damage depends on the exposure duration and the length of employment as welders. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  2. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    Science.gov (United States)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.
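
    A crude baseline for the sparse-to-dense problem the network solves is nearest-sample fill, shown here on a toy one-dimensional "image". This is for intuition only and is in no way the paper's CNN:

    ```python
    # Sketch: densify a sparse depth map by assigning each pixel the
    # depth of its nearest sparse (e.g. LiDAR) sample. Crudest possible
    # baseline for learned sparse-to-dense reconstruction; toy 1D data.

    def densify(width, samples):
        """samples: {pixel_index: depth}. Nearest-sample fill."""
        dense = []
        for x in range(width):
            nearest = min(samples, key=lambda s: abs(s - x))
            dense.append(samples[nearest])
        return dense

    print(densify(6, {0: 2.0, 5: 4.0}))  # [2.0, 2.0, 2.0, 4.0, 4.0, 4.0]
    ```

    The learned approach replaces this hard nearest-neighbour rule with local RGB evidence, so depth discontinuities can follow image edges rather than sample midpoints.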

  3. Implementation of Headtracking and 3D Stereo with Unity and VRPN for Computer Simulations

    Science.gov (United States)

    Noyes, Matthew A.

    2013-01-01

    This paper explores low-cost hardware and software methods to provide depth cues traditionally absent in monocular displays. The use of a VRPN server in conjunction with a Microsoft Kinect and/or Nintendo Wiimote to provide head tracking information to a Unity application, and NVIDIA 3D Vision for retinal disparity support, is discussed. Methods are suggested to implement this technology with NASA's EDGE simulation graphics package, along with potential caveats. Finally, future applications of this technology to astronaut crew training, particularly when combined with an omnidirectional treadmill for virtual locomotion and NASA's ARGOS system for reduced gravity simulation, are discussed.

  4. Robust object tracking techniques for vision-based 3D motion analysis applications

    Science.gov (United States)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capturing of an object is necessary for a wide variety of applications including industry and science, virtual reality and movie, medicine and sports. For the most part of applications a reliability and an accuracy of the data obtained as well as convenience for a user are the main characteristics defining the quality of the motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high speed of acquisition, potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture accurate and robust object features detecting and tracking through the video sequence are the key elements along with a level of automation of capturing process. So for providing high accuracy of obtained spatial data the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurements and supports high speed image acquisition in synchronized mode. It includes from 2 to 4 technical vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms as for detecting, identifying and tracking of similar targets, so for marker-less object motion capture is developed and tested. The results of algorithms' evaluation show high robustness and high reliability for various motion analysis tasks in technical and biomechanics applications.
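
    The photogrammetric core of such multi-camera systems is triangulating a marker from two calibrated views. A plan-view (2D) sketch with invented geometry, intersecting two bearing rays; not the "Mosca" system's actual algorithm:

    ```python
    # Sketch: locate a marker by intersecting bearing rays from two
    # calibrated cameras (2D plan view). Toy stand-in for multi-camera
    # photogrammetric triangulation; geometry is made up.

    import math

    def intersect_rays(c1, a1, c2, a2):
        """Cameras at c1/c2, each sighting the target at angle a1/a2 (rad)."""
        d1 = (math.cos(a1), math.sin(a1))
        d2 = (math.cos(a2), math.sin(a2))
        # Solve c1 + s*d1 = c2 + t*d2 for s via Cramer's rule
        # (det == 0 would mean parallel rays; unhandled in this sketch)
        det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
        bx, by = c2[0] - c1[0], c2[1] - c1[1]
        s = (bx * (-d2[1]) - (-d2[0]) * by) / det
        return c1[0] + s * d1[0], c1[1] + s * d1[1]

    # Two cameras 2 m apart, both sighting a target at (1, 1)
    x, y = intersect_rays((0.0, 0.0), math.atan2(1, 1),
                          (2.0, 0.0), math.atan2(1, -1))
    print(round(x, 6), round(y, 6))  # 1.0 1.0
    ```

    With more than two cameras the same idea becomes an over-determined least-squares problem, which is what gives these systems their accuracy margin.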

  5. Binocular vision in amblyopia: structure, suppression and plasticity.

    Science.gov (United States)

    Hess, Robert F; Thompson, Benjamin; Baker, Daniel H

    2014-03-01

    The amblyopic visual system was once considered to be structurally monocular. However, it is now evident that the capacity for binocular vision is present in many observers with amblyopia. This has led to new techniques for quantifying suppression that have provided insights into the relationship between suppression and the monocular and binocular visual deficits experienced by amblyopes. Furthermore, new treatments are emerging that directly target suppressive interactions within the visual cortex and, on the basis of initial data, appear to improve both binocular and monocular visual function, even in adults with amblyopia. The aim of this review is to provide an overview of recent studies that have investigated the structure, measurement and treatment of binocular vision in observers with strabismic, anisometropic and mixed amblyopia. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  6. Development of 3D online contact measurement system for intelligent manufacturing based on stereo vision

    Science.gov (United States)

    Li, Peng; Chong, Wenyan; Ma, Yongjun

    2017-10-01

    In order to avoid the shortcomings of low efficiency and restricted measuring range that exist in traditional 3D on-line contact measurement methods for workpiece size, the development of a novel 3D contact measurement system is introduced, designed for intelligent manufacturing based on stereo vision. The developed contact measurement system is characterized by the integrated use of a handy probe, a binocular stereo vision system, and advanced measurement software. The handy probe consists of six track markers, a touch probe and the associated electronics. In the process of contact measurement, the handy probe can be located by the use of the stereo vision system and track markers, and 3D coordinates of a space point on the workpiece can be measured by calculating the tip position of the touch probe. With the flexibility of the handy probe, the orientation, range and density of the 3D contact measurement can be adapted to different needs. Applications of the developed contact measurement system to high-precision measurement and rapid surface digitization are experimentally demonstrated.

  7. Monocular Perceptual Deprivation from Interocular Suppression Temporarily Imbalances Ocular Dominance.

    Science.gov (United States)

    Kim, Hyun-Woong; Kim, Chai-Youn; Blake, Randolph

    2017-03-20

    Early visual experience sculpts neural mechanisms that regulate the balance of influence exerted by the two eyes on cortical mechanisms underlying binocular vision [1, 2], and experience's impact on this neural balancing act continues into adulthood [3-5]. One recently described, compelling example of adult neural plasticity is the effect of patching one eye for a relatively short period of time: contrary to intuition, monocular visual deprivation actually improves the deprived eye's competitive advantage during a subsequent period of binocular rivalry [6-8], the robust form of visual competition prompted by dissimilar stimulation of the two eyes [9, 10]. Neural concomitants of this improvement in monocular dominance are reflected in measurements of brain responsiveness following eye patching [11, 12]. Here we report that patching an eye is unnecessary for producing this paradoxical deprivation effect: interocular suppression of an ordinarily visible stimulus being viewed by one eye is sufficient to produce shifts in subsequent predominance of that eye to an extent comparable to that produced by patching the eye. Moreover, this imbalance in eye dominance can also be induced by prior, extended viewing of two monocular images differing only in contrast. Regardless of how shifts in eye dominance are induced, the effect decays once the two eyes view stimuli equal in strength. These novel findings implicate the operation of interocular neural gain control that dynamically adjusts the relative balance of activity between the two eyes [13, 14]. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. 3D vision accelerates laparoscopic proficiency and skills are transferable to 2D conditions

    DEFF Research Database (Denmark)

    Sørensen, Stine Maya Dreier; Konge, Lars; Bjerrum, Flemming

    2017-01-01

    Mean training time was reduced in the intervention group: 231 min versus 323 min; P = 0.012. There was no significant difference in the mean times to completion of the retention test: 92 min versus 95 min; P = 0.85. CONCLUSION: 3D vision reduced time to proficiency on a virtual-reality laparoscopy

  9. A Vision-Aided 3D Path Teaching Method before Narrow Butt Joint Welding.

    Science.gov (United States)

    Zeng, Jinle; Chang, Baohua; Du, Dong; Peng, Guodong; Chang, Shuhe; Hong, Yuxiang; Wang, Li; Shan, Jiguo

    2017-05-11

    For better welding quality, accurate path teaching for actuators must be achieved before welding. Due to machining errors, assembly errors, deformations, etc., the actual groove position may be different from the predetermined path. Therefore, it is important to recognize the actual groove position using machine vision methods and to perform an accurate path teaching process. However, during the teaching process of a narrow butt joint, the existing machine vision methods may fail because of poor adaptability, low resolution, and lack of 3D information. This paper proposes a 3D path teaching method for narrow butt joint welding. This method obtains two kinds of visual information nearly at the same time, namely 2D pixel coordinates of the groove in uniform lighting condition and 3D point cloud data of the workpiece surface in cross-line laser lighting condition. The 3D position and pose between the welding torch and groove can be calculated after information fusion. The image resolution can reach 12.5 μm. Experiments are carried out at an actuator speed of 2300 mm/min and groove width of less than 0.1 mm. The results show that this method is suitable for groove recognition before narrow butt joint welding and can be applied in path teaching fields of 3D complex components.

  10. Monocular SLAM for autonomous robots with enhanced features initialization.

    Science.gov (United States)

    Guerra, Edmundo; Munguia, Rodrigo; Grau, Antoni

    2014-04-02

    This work presents a variant approach to the monocular SLAM problem focused on exploiting the advantages of a human-robot interaction (HRI) framework. Based upon delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several crucial modifications are introduced that take advantage of data from a secondary monocular sensor, assuming that this second camera is worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are treated as a pseudo-calibrated stereo rig to produce depth estimates through parallax. These depth estimates address a known limitation of DI-D monocular SLAM, namely the requirement of metric scale initialization through known artificial landmarks. The same process is used to improve the performance of the technique when introducing new landmarks into the map. The convenience of the approach taken to stereo estimation, based on SURF feature matching, is discussed. Experimental validation with real data shows improvements in terms of more features correctly initialized with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion is provided on how a real-time implementation could take advantage of this approach.
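
    For a rectified stereo pair, the parallax-based depth estimation described above reduces to the classic disparity relation Z = f * B / d. A minimal sketch with illustrative numbers (focal length in pixels, baseline in metres), not the paper's pseudo-calibrated pipeline:

    ```python
    # Sketch: depth from parallax for a rectified stereo pair, as when
    # two cameras momentarily share a field of view. Numbers invented.

    def depth_from_disparity(f_px, baseline_m, u_left, u_right):
        disparity = u_left - u_right          # horizontal shift in pixels
        if disparity <= 0:
            raise ValueError("feature must have positive parallax")
        return f_px * baseline_m / disparity  # depth in metres

    # 700 px focal length, 0.4 m between cameras, 20 px disparity
    print(depth_from_disparity(700, 0.4, 350, 330))  # 14.0
    ```

    In the HRI setting the baseline between the robot's and the human's cameras is only approximately known, which is why the resulting depths serve to fix metric scale rather than as precise measurements.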

  11. Monocular Elevation Deficiency - Double Elevator Palsy

    Science.gov (United States)

    What is monocular elevation deficiency (Double Elevator Palsy)? Monocular Elevation Deficiency, also known by the ...

  12. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.

    Science.gov (United States)

    Chen, Long; Tang, Wen; John, Nigel W; Wan, Tao Ruan; Zhang, Jian Jun

    2018-05-01

    While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes significant challenges on a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomy structures at the target surgical locations. However, previous attempts to use AR technology in monocular MIS surgical scenes have mainly focused on information overlay without addressing correct spatial calibration, which could lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping. A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and the Poisson surface reconstruction framework for real-time processing of the point cloud data set. Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations

  13. Estimation of 3D reconstruction errors in a stereo-vision system

    Science.gov (United States)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure of manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the shown experimental results.
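
    The segmentation-error idea, using the spread of edge points about a fitted geometric primitive as a quality measure, can be sketched with a least-squares line fit. Synthetic points, not the paper's data or its SGT machinery:

    ```python
    # Sketch: residual standard deviation of a least-squares line fit to
    # edge points, usable as a segmentation-quality measure from which
    # confidence intervals could be derived. Synthetic edge points.

    def line_fit_residual_sd(points):
        n = len(points)
        mx = sum(x for x, _ in points) / n
        my = sum(y for _, y in points) / n
        sxx = sum((x - mx) ** 2 for x, _ in points)
        sxy = sum((x - mx) * (y - my) for x, y in points)
        a = sxy / sxx                      # slope
        b = my - a * mx                    # intercept
        ss = sum((y - (a * x + b)) ** 2 for x, y in points)
        return (ss / (n - 2)) ** 0.5       # residual standard deviation

    # Nearly collinear edge points: small residual spread
    pts = [(0, 0.0), (1, 1.1), (2, 1.9), (3, 3.0), (4, 4.0)]
    print(line_fit_residual_sd(pts) < 0.1)  # True
    ```

    Propagating such per-step spreads through calibration, matching and reconstruction yields the position uncertainties on the final 3D result.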

  14. Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision.

    Science.gov (United States)

    Hagelsteen, Kristine; Langegård, Anders; Lantz, Adam; Ekelund, Mikael; Anderberg, Magnus; Bergenfelz, Anders

    2017-10-01

    The study investigated whether 3D vision and haptic feedback in combination in a virtual reality environment leads to more efficient learning of laparoscopic skills in novices. Twenty novices were allocated to two groups. All completed a training course in the LapSim® virtual reality trainer consisting of four tasks: 'instrument navigation', 'grasping', 'fine dissection' and 'suturing'. The study group performed with haptic feedback and 3D vision and the control group without. Before and after the LapSim® course, the participants' metrics were recorded when tying a laparoscopic knot in the 2D video box trainer Simball® Box. The study group completed the training course in 146 (100-291) minutes compared to 215 (175-489) minutes in the control group (p = .002). The number of attempts to reach proficiency was significantly lower. The study group had significantly faster learning of skills in three out of four individual tasks: instrument navigation, grasping and suturing. Using the Simball® Box, no difference in laparoscopic knot tying after the LapSim® course was noted when comparing the groups. Laparoscopic training in virtual reality with 3D vision and haptic feedback made training more time efficient and did not negatively affect later video box performance in 2D.

  15. Visual memory for objects following foveal vision loss.

    Science.gov (United States)

    Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B; Pollmann, Stefan

    2015-09-01

    Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual long-term memory for objects in patients suffering from foveal vision loss due to age-related macular degeneration. We measured patients' change detection sensitivity after a period of free scene exploration monocularly with their worse eye when possible, and under binocular vision, comparing sensitivity and eye movements to matched normal-sighted controls. A highly salient cue was used to capture attention to a nontarget location before a target change occurred in half of the trials, ensuring that change detection relied on memory. Patients' monocular and binocular sensitivity to object change was comparable to controls, even after more than 4 intervening fixations, and not significantly correlated with visual impairment. We conclude that extrafoveal vision suffices for efficient encoding into visual long-term memory. (c) 2015 APA, all rights reserved.

  16. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    Science.gov (United States)

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present the potential usage of the 3D vision system for registering features of the macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from that data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of the human calvaria, which was used for testing the system. The performed reconstruction visualized the imprints of the dural vascular system, cranial sutures, and the three-layer structure of the cranial bones observed in the cross-section. We conclude that the 3D vision system may deliver data that can enhance sex estimation from osteological material.

  17. Binocular vision in amblyopia : structure, suppression and plasticity

    OpenAIRE

    Hess, Robert F; Thompson, Benjamin; Baker, Daniel Hart

    2014-01-01

    The amblyopic visual system was once considered to be structurally monocular. However, it is now evident that the capacity for binocular vision is present in many observers with amblyopia. This has led to new techniques for quantifying suppression that have provided insights into the relationship between suppression and the monocular and binocular visual deficits experienced by amblyopes. Furthermore, new treatments are emerging that directly target suppressive interactions within the visual cor...

  18. Dopamine antagonists and brief vision distinguish lens-induced- and form-deprivation-induced myopia

    OpenAIRE

    Nickla, Debora L.; Totonelly, Kristen

    2011-01-01

    In eyes wearing negative lenses, the D2 dopamine antagonist spiperone was only partly effective in preventing the ameliorative effects of brief periods of vision (Nickla et al., 2010), in contrast to reports from studies using form deprivation. The present study was done to directly compare the effects of spiperone, and the D1 antagonist SCH-23390, on the two different myopiagenic paradigms. 12-day-old chickens wore monocular diffusers (form deprivation) or − 10 D lenses attached to the feath...

  19. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    OpenAIRE

    Edmundo Guerra; Rodrigo Munguia; Yolanda Bolea; Antoni Grau

    2013-01-01

    Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hyp...

  20. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    Science.gov (United States)

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  1. Fiber optic coherent laser radar 3d vision system

    International Nuclear Information System (INIS)

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-01-01

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
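For an FMCW coherent laser radar such as this, range follows from the beat frequency between the transmitted and received chirps, R = c·f_b·T / (2·B); a minimal sketch under assumed, purely illustrative sweep parameters (none are given in the record):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz: float, bandwidth_hz: float, sweep_time_s: float) -> float:
    """Target range from an FMCW beat frequency: R = c * f_b * T / (2 * B)."""
    return C * f_beat_hz * sweep_time_s / (2.0 * bandwidth_hz)

# Illustrative: a 100 GHz optical-frequency sweep over 1 ms with a 10 MHz
# beat frequency corresponds to a target at roughly 15 m.
r = fmcw_range(10e6, 100e9, 1e-3)  # ~14.99 m
```

Larger sweep bandwidth B yields finer range resolution, which is why fiber-optic FMCW sources are attractive for 3D vision.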

  2. 3-D vision and figure-ground separation by visual cortex.

    Science.gov (United States)

    Grossberg, S

    1994-01-01

    A neural network theory of three-dimensional (3-D) vision, called FACADE theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a boundary contour system (BCS) and a feature contour system (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded object parts are separated, completed, and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, Da Vinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analyzed. The BCS and FCS subsystems model aspects of how the two parvocellular cortical processing streams that join the lateral geniculate nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-DEpth, or FACADE, within area V4. Area V4 is suggested to support figure-ground separation and to interact with

  3. Multi-Camera and Structured-Light Vision System (MSVS) for Dynamic High-Accuracy 3D Measurements of Railway Tunnels

    Directory of Open Access Journals (Sweden)

    Dong Zhan

    2015-04-01

    Full Text Available Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  4. Comparison of the Lea Symbols and HOTV charts for preschool vision screening from age 3 to 4.5 years old

    Directory of Open Access Journals (Sweden)

    Ya-Hui Zhang

    2014-12-01

    Full Text Available AIM: To evaluate the applicability of the Lea Symbols and HOTV charts and the development of normal visual acuity from age 3 to 4.5 years. METHODS: It was a survey research study. In total, 133 preschoolers (266 eyes) between 3 and 4.5 years old, recruited from two kindergartens in Guangzhou, were tested with both the Lea Symbols chart and the HOTV chart. Outcome measures were monocular logarithm of the minimum angle of resolution (logMAR) visual acuity and inter-eye acuity difference in logMAR units for each test. RESULTS: The testability rates of the two charts were high (96.24% vs 92.48%, respectively), and the difference was not statistically significant (P>0.05). The difference between monocular visual acuities measured with the two charts was not statistically significant (right eye: t=0.517, P=0.606; left eye: t=-0.618, P=0.538). There was no significant difference between the eyes (Lea Symbols chart: t=0.638, P=0.525; HOTV chart: t=-0.897, P=0.372). The visual acuities of the boys were better than those of the girls, but the difference was not statistically significant (P>0.05). The visual acuities with the two charts for the corresponding age groups (3-year-old, 3.5-year-old, 4-year-old and 4.5-year-old groups) indicated that the visual acuities of the preschoolers improved with increasing age, but the differences among the visual acuities with the Lea Symbols chart were not statistically significant (right eye: F=2.662, P=0.052; left eye: F=1.850, P=0.143). However, the differences among the visual acuities with the HOTV chart were statistically significant (right eye: F=4.518, P=0.005; left eye: F=3.893, P=0.011). CONCLUSION: Both the Lea Symbols and HOTV charts are accepted and appropriate for preschool vision screening from 3 to 4.5 years old. The monocular visual acuity of preschoolers from age 3 to 4.5 years was similar when assessed with the two charts. There was no correlation between visual acuity and the eye tested.

  5. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    Science.gov (United States)

    Celik, Koray

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity sensing, object avoidance, mapping, and path-planning mechanism to fly and navigate small to medium scale unmanned rotary-wing aircraft in an autonomous manner. The range measurement strategy is scalable, self-calibrating, indoor-outdoor capable, and has been biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), designed to assume operations in previously unknown, GPS-denied environments. It proposes novel electronics, aircraft, aircraft systems, procedures and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Despite the emphasis on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  6. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Full Text Available Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hypothesis Compatibility Test, HOHCT. The Delayed Inverse-Depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of features with the use of a stochastic technique of triangulation. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the Delayed Inverse-Depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results, in both indoor and outdoor environments, while its costs are compared with other popular approaches.
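The inverse-depth parameterization underlying DI-D feature initialization encodes a landmark as an anchor position, a viewing direction, and an inverse depth ρ; the corresponding Euclidean point is p = c + (1/ρ)·m(θ, φ). A minimal sketch (the azimuth/elevation convention below is one common choice, not necessarily the paper's):

```python
import math

def inverse_depth_to_xyz(x0, y0, z0, theta, phi, rho):
    """Convert an inverse-depth feature (anchor camera position x0,y0,z0,
    azimuth theta, elevation phi, inverse depth rho) to a Euclidean point:
    p = c + (1/rho) * m(theta, phi)."""
    m = (math.cos(phi) * math.sin(theta),   # unit ray direction
         -math.sin(phi),
         math.cos(phi) * math.cos(theta))
    d = 1.0 / rho  # depth along the ray
    return (x0 + d * m[0], y0 + d * m[1], z0 + d * m[2])

# A feature seen straight ahead (theta = phi = 0) with rho = 0.5 lies 2 m
# in front of the anchor camera.
p = inverse_depth_to_xyz(0.0, 0.0, 0.0, 0.0, 0.0, 0.5)
```

The appeal of this parameterization is that distant features (ρ near 0) remain well-behaved in the filter, which is what makes bearing-only monocular SLAM tractable.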

  7. Computer vision and machine learning with RGB-D sensors

    CERN Document Server

    Shao, Ling; Kohli, Pushmeet

    2014-01-01

    This book presents an interdisciplinary selection of cutting-edge research on RGB-D based computer vision. Features: discusses the calibration of color and depth cameras, the reduction of noise on depth maps and methods for capturing human performance in 3D; reviews a selection of applications which use RGB-D information to reconstruct human figures, evaluate energy consumption and obtain accurate action classification; presents an approach for 3D object retrieval and for the reconstruction of gas flow from multiple Kinect cameras; describes an RGB-D computer vision system designed to assist t

  8. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    Directory of Open Access Journals (Sweden)

    Chien-Lun Hou

    2011-02-01

    Full Text Available In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
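After epipolar rectification, the triangulation step described above reduces to the standard disparity-to-depth relation Z = f·B/d; a minimal sketch with illustrative camera parameters (not the paper's actual rig):

```python
def stereo_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from a rectified stereo pair: Z = f * B / d, with focal length f
    in pixels, baseline B in meters, and horizontal disparity d in pixels."""
    return f_px * baseline_m / disparity_px

# Illustrative: 800 px focal length, 10 cm baseline, 40 px disparity -> 2 m.
z = stereo_depth(800.0, 0.10, 40.0)
```

This is why rectification matters: it turns correspondence search into a purely horizontal scan, after which a single disparity value per point yields depth.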

  9. Monocular tool control, eye dominance, and laterality in New Caledonian crows.

    Science.gov (United States)

    Martinho, Antone; Burns, Zackory T; von Bayern, Auguste M P; Kacelnik, Alex

    2014-12-15

    Tool use, though rare, is taxonomically widespread, but morphological adaptations for tool use are virtually unknown. We focus on the New Caledonian crow (NCC, Corvus moneduloides), which displays some of the most innovative tool-related behavior among nonhumans. One of their major food sources is larvae extracted from burrows with sticks held diagonally in the bill, oriented with individual, but not species-wide, laterality. Among possible behavioral and anatomical adaptations for tool use, NCCs possess unusually wide binocular visual fields (up to 60°), suggesting that extreme binocular vision may facilitate tool use. Here, we establish that during natural extractions, tool tips can only be viewed by the contralateral eye. Thus, maintaining binocular view of tool tips is unlikely to have selected for wide binocular fields; the selective factor is more likely to have been to allow each eye to see far enough across the midsagittal line to view the tool's tip monocularly. Consequently, we tested the hypothesis that tool side preference follows eye preference and found that eye dominance does predict tool laterality across individuals. This contrasts with humans' species-wide motor laterality and uncorrelated motor-visual laterality, possibly because bill-held tools are viewed monocularly and move in concert with eyes, whereas hand-held tools are visible to both eyes and allow independent combinations of eye preference and handedness. This difference may affect other models of coordination between vision and mechanical control, not necessarily involving tools. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. The minimum test battery to screen for binocular vision anomalies: report 3 of the BAND study.

    Science.gov (United States)

    Hussaindeen, Jameel Rizwana; Rakshit, Archayeeta; Singh, Neeraj Kumar; Swaminathan, Meenakshi; George, Ronnie; Kapur, Suman; Scheiman, Mitchell; Ramani, Krishna Kumar

    2018-03-01

    This study aims to report the minimum test battery needed to screen non-strabismic binocular vision anomalies (NSBVAs) in a community set-up. When large numbers are to be screened we aim to identify the most useful test battery when there is no opportunity for a more comprehensive and time-consuming clinical examination. The prevalence estimates and normative data for binocular vision parameters were estimated from the Binocular Vision Anomalies and Normative Data (BAND) study, following which cut-off estimates and receiver operating characteristic curves to identify the minimum test battery have been plotted. In the receiver operating characteristic phase of the study, children between nine and 17 years of age were screened in two schools in the rural arm using the minimum test battery, and the prevalence estimates with the minimum test battery were found. Receiver operating characteristic analyses revealed that near point of convergence with penlight and red filter (> 7.5 cm), monocular accommodative facility ( 1.25 prism dioptres) were significant factors with cut-off values for best sensitivity and specificity. This minimum test battery was applied to a cohort of 305 children. The mean (standard deviation) age of the subjects was 12.7 (two) years with 121 males and 184 females. Using the minimum battery of tests obtained through the receiver operating characteristic analyses, the prevalence of NSBVAs was found to be 26 per cent. Near point of convergence with penlight and red filter > 10 cm was found to have the highest sensitivity (80 per cent) and specificity (73 per cent) for the diagnosis of convergence insufficiency. For the diagnosis of accommodative infacility, monocular accommodative facility with a cut-off of less than seven cycles per minute was the best predictor for screening (92 per cent sensitivity and 90 per cent specificity). 
The minimum test battery of near point of convergence with penlight and red filter, difference between distance and near

  11. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  12. Functional vision loss: a diagnosis of exclusion.

    Science.gov (United States)

    Villegas, Rex B; Ilsen, Pauline F

    2007-10-01

    Most cases of visual acuity or visual field loss can be attributed to ocular pathology or ocular manifestations of systemic pathology. They can also occasionally be attributed to nonpathologic processes or malingering. Functional vision loss is any decrease in vision the origin of which cannot be attributed to a pathologic or structural abnormality. Two cases of functional vision loss are described. In the first, a 58-year-old man presented for a baseline eye examination for enrollment in a vision rehabilitation program. He reported bilateral blindness since a motor vehicle accident with head trauma 4 years prior. Entering visual acuity was "no light perception" in each eye. Ocular health examination was normal and the patient made frequent eye contact with the examiners. He was referred for neuroimaging and electrophysiologic testing. The second case was a 49-year-old man who presented with a long history of intermittent monocular diplopia. His medical history was significant for psycho-medical evaluations and a diagnosis of factitious disorder. Entering uncorrected visual acuities were 20/20 in each eye, but visual field testing found constriction. No abnormalities were found that could account for the monocular diplopia or visual field deficit. A diagnosis of functional vision loss secondary to factitious disorder was made. Functional vision loss is a diagnosis of exclusion. In the event of reduced vision in the context of a normal ocular health examination, all other pathology must be ruled out before making the diagnosis of functional vision loss. Evaluation must include auxiliary ophthalmologic testing, neuroimaging of the visual pathway, review of the medical history and lifestyle, and psychiatric evaluation. Comanagement with a psychiatrist is essential for patients with functional vision loss.

  13. Panoramic 3d Vision on the ExoMars Rover

    Science.gov (United States)

    Paar, G.; Griffiths, A. D.; Barnes, D. P.; Coates, A. J.; Jaumann, R.; Oberst, J.; Gao, Y.; Ellery, A.; Li, R.

    The Pasteur payload on the ESA ExoMars Rover 2011/2013 is designed to search for evidence of extant or extinct life either on or up to ˜2 m below the surface of Mars. The rover will be equipped with a panoramic imaging system to be developed by a UK, German, Austrian, Swiss, Italian and French team for visual characterization of the rover's surroundings and (in conjunction with an infrared imaging spectrometer) remote detection of potential sample sites. The Panoramic Camera system consists of a wide-angle multispectral stereo pair with 65° field-of-view (WAC; 1.1 mrad/pixel) and a high-resolution monoscopic camera (HRC; current design having 59.7 µrad/pixel with 3.5° field-of-view). Its scientific goals and operational requirements can be summarized as follows: • Determination of objects to be investigated in situ by other instruments for operations planning • Backup and support for the rover visual navigation system (path planning, determination of subsequent rover positions and orientation/tilt within the 3D environment), and localization of the landing site (by stellar navigation or by combination of orbiter and ground panoramic images) • Geological characterization (using narrow-band geology filters) and cartography of the local environments (local Digital Terrain Model, or DTM) • Study of atmospheric properties and variable phenomena near the Martian surface (e.g. aerosol opacity, water vapour column density, clouds, dust devils, meteors, surface frosts) • Geodetic studies (observations of Sun, bright stars, Phobos/Deimos). The performance of 3D data processing is a key element of mission planning and scientific data analysis. The 3D Vision Team within the Panoramic Camera development Consortium reports on the current status of development, consisting of the following items: • Hardware Layout & Engineering: The geometric setup of the system (location on the mast & viewing angles, mutual mounting between WAC and HRC) needs to be optimized w
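The per-pixel resolutions quoted above are consistent with the stated fields of view under an assumed 1024-pixel detector width (the pixel count is an assumption for illustration; it is not given in the record):

```python
import math

def ifov(fov_deg: float, pixels: int) -> float:
    """Per-pixel angular resolution (instantaneous FOV), in radians/pixel."""
    return math.radians(fov_deg) / pixels

# Assuming a 1024-pixel-wide detector:
wac_mrad = ifov(65.0, 1024) * 1e3   # ~1.11 mrad/pixel (record states 1.1)
hrc_urad = ifov(3.5, 1024) * 1e6    # ~59.7 urad/pixel (record states 59.7)
```

The same IFOV, multiplied by target range, gives the ground footprint of one pixel, which drives the remote sample-site detection requirements.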

  14. INTEGRATION OF VIDEO IMAGES AND CAD WIREFRAMES FOR 3D OBJECT LOCALIZATION

    Directory of Open Access Journals (Sweden)

    R. A. Persad

    2012-07-01

    Full Text Available The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of 2D to 3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching schema uses a hypothesis-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. Reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). Dependability of the retrieved parameters for 3D localization has also been assessed by comparing the difference between 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters and those computed using GT parameters.
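LR-RANSAC is described as a hypothesis-verify framework that improves on standard RANSAC; the generic RANSAC loop it builds on can be sketched as follows (here fitting a 2D line from point correspondences; the paper's actual model, features and scoring differ):

```python
import random

def ransac_line(points, iters=200, tol=1.0, seed=0):
    """Generic hypothesize-and-verify (RANSAC): sample a minimal set (2 points),
    fit a line ax + by + c = 0, count inliers within tol, and keep the
    best-supported hypothesis."""
    rng = random.Random(seed)
    best_line, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2  # line through the pair
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue  # degenerate sample (coincident points)
        inliers = sum(abs(a * x + b * y + c) / norm <= tol for x, y in points)
        if inliers > best_inliers:
            best_line, best_inliers = (a / norm, b / norm, c / norm), inliers
    return best_line, best_inliers

# Five points on y = x plus two gross outliers: the consensus line keeps
# the five collinear points as inliers and rejects the outliers.
pts = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (0, 10), (10, 0)]
line, support = ransac_line(pts)
```

The verification cost per hypothesis is what frameworks like LR-RANSAC target: ordering and pruning hypotheses so fewer full inlier counts are needed.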

  15. Neuroimaging of amblyopia and binocular vision: a review.

    Science.gov (United States)

    Joly, Olivier; Frankó, Edit

    2014-01-01

    Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Amblyopia, first considered as a monocular disorder, is now often seen as a primarily binocular disorder resulting in more and more studies examining the binocular deficits in the patients. The neural mechanisms of amblyopia are not completely understood even though they have been investigated with electrophysiological recordings in animal models and more recently with neuroimaging techniques in humans. In this review, we summarize the current knowledge about the brain regions that underlie the visual deficits associated with amblyopia with a focus on binocular vision using functional magnetic resonance imaging. The first studies focused on abnormal responses in the primary and secondary visual areas whereas recent evidence shows that there are also deficits at higher levels of the visual pathways within the parieto-occipital and temporal cortices. These higher level areas are part of the cortical network involved in 3D vision from binocular cues. Therefore, reduced responses in these areas could be related to the impaired binocular vision in amblyopic patients. Promising new binocular treatments might at least partially correct the activation in these areas. Future neuroimaging experiments could help to characterize the brain response changes associated with these treatments and help devise them.

  16. An active robot vision system for real-time 3-D structure recovery

    Energy Technology Data Exchange (ETDEWEB)

    Juvin, D. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d`Electronique et d`Instrumentation Nucleaire; Boukir, S.; Chaumette, F.; Bouthemy, P. [Rennes-1 Univ., 35 (France)

    1993-10-01

    This paper presents an active approach for the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is considered by computing adequate camera motions using image-based control laws. This approach requires a real-time tracking of the limbs of the cylinders. Therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its easiness to implement. This method has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  17. An active robot vision system for real-time 3-D structure recovery

    International Nuclear Information System (INIS)

    Juvin, D.

    1993-01-01

    This paper presents an active approach for the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is considered by computing adequate camera motions using image-based control laws. This approach requires a real-time tracking of the limbs of the cylinders. Therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. This method has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  18. Monocular depth effects on perceptual fading.

    Science.gov (United States)

    Hsu, Li-Chuan; Kramer, Peter; Yeh, Su-Ling

    2010-08-06

    After prolonged viewing, a static target among moving non-targets is perceived to repeatedly disappear and reappear. An uncrossed stereoscopic disparity of the target facilitates this Motion-Induced Blindness (MIB). Here we test whether monocular depth cues can affect MIB too, and whether they can also affect perceptual fading in static displays. Experiment 1 reveals an effect of interposition: more MIB when the target appears partially covered by, than when it appears to cover, its surroundings. Experiment 2 shows that the effect is indeed due to interposition and not to the target's contours. Experiment 3 induces depth with the watercolor illusion and replicates Experiment 1. Experiments 4 and 5 replicate Experiments 1 and 3 without the use of motion. Since almost any stimulus contains a monocular depth cue, we conclude that perceived depth affects perceptual fading in almost any stimulus, whether dynamic or static. Copyright 2010 Elsevier Ltd. All rights reserved.

  19. Grey and white matter changes in children with monocular amblyopia: voxel-based morphometry and diffusion tensor imaging study.

    Science.gov (United States)

    Li, Qian; Jiang, Qinying; Guo, Mingxia; Li, Qingji; Cai, Chunquan; Yin, Xiaohui

    2013-04-01

    To investigate the potential morphological alterations of grey and white matter in monocular amblyopic children using voxel-based morphometry (VBM) and diffusion tensor imaging (DTI). A total of 20 monocular amblyopic children and 20 age-matched controls were recruited. Whole-brain MRI scans were performed after a series of ophthalmologic exams. The imaging data were processed and two-sample t-tests were employed to identify group differences in grey matter volume (GMV), white matter volume (WMV) and fractional anisotropy (FA). After image screening, there were 12 amblyopic participants and 15 normal controls qualified for the VBM analyses. For DTI analysis, 14 amblyopes and 14 controls were included. Compared to the normal controls, reduced GMVs were observed in the left inferior occipital gyrus, the bilateral parahippocampal gyrus and the left supramarginal/postcentral gyrus in the monocular amblyopic group, with the lingual gyrus presenting augmented GMV. Meanwhile, WMVs reduced in the left calcarine, the bilateral inferior frontal and the right precuneus areas, and growth in the WMVs was seen in the right cuneus, right middle occipital and left orbital frontal areas. Diminished FA values in optic radiation and increased FA in the left middle occipital area and right precuneus were detected in amblyopic patients. In monocular amblyopia, cortices related to spatial vision underwent volume loss, which provided neuroanatomical evidence of stereoscopic defects. Additionally, white matter development was also hindered due to visual defects in amblyopes. Growth in the GMVs, WMVs and FA in the occipital lobe and precuneus may reflect a compensation effect by the unaffected eye in monocular amblyopia.
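The fractional anisotropy (FA) values compared between the amblyopic and control groups are computed from the diffusion tensor's eigenvalues with the standard formula; a minimal sketch (the eigenvalues below are illustrative, not from the study):

```python
import math

def fractional_anisotropy(l1: float, l2: float, l3: float) -> float:
    """FA from diffusion-tensor eigenvalues:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    mean = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 * l1 + l2 * l2 + l3 * l3)
    return math.sqrt(1.5) * num / den

# Isotropic diffusion gives FA = 0; a fiber-like profile approaches 1.
iso = fractional_anisotropy(1.0, 1.0, 1.0)    # 0.0
fiber = fractional_anisotropy(1.7, 0.2, 0.2)  # ~0.87
```

FA near 0 indicates unrestricted, direction-independent diffusion; values approaching 1 indicate strongly oriented white-matter tracts, which is why diminished FA in the optic radiation is read as impaired fiber integrity.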

  20. Separating monocular and binocular neural mechanisms mediating chromatic contextual interactions.

    Science.gov (United States)

    D'Antona, Anthony D; Christiansen, Jens H; Shevell, Steven K

    2014-04-17

    When seen in isolation, a light that varies in chromaticity over time is perceived to oscillate in color. Perception of that same time-varying light may be altered by a surrounding light that is also temporally varying in chromaticity. The neural mechanisms that mediate these contextual interactions are the focus of this article. Observers viewed a central test stimulus that varied in chromaticity over time within a larger surround that also varied in chromaticity at the same temporal frequency. Center and surround were presented either to the same eye (monocular condition) or to opposite eyes (dichoptic condition) at the same frequency (3.125, 6.25, or 9.375 Hz). Relative phase between center and surround modulation was varied. In both the monocular and dichoptic conditions, the perceived modulation depth of the central light depended on the relative phase of the surround. A simple model implementing a linear combination of center and surround modulation fit the measurements well. At the lowest temporal frequency (3.125 Hz), the surround's influence was virtually identical for monocular and dichoptic conditions, suggesting that at this frequency, the surround's influence is mediated primarily by a binocular neural mechanism. At higher frequencies, the surround's influence was greater for the monocular condition than for the dichoptic condition, and this difference increased with temporal frequency. Our findings show that two separate neural mechanisms mediate chromatic contextual interactions: one binocular and dominant at lower temporal frequencies and the other monocular and dominant at higher frequencies (6-10 Hz).
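The "simple model implementing a linear combination of center and surround modulation" can be illustrated with a phasor sketch; the additive form and the single surround weight below are assumptions for illustration, not the authors' exact equations.

```python
import numpy as np

def perceived_modulation(center_amp, surround_amp, phase_deg, weight):
    """Linear center + surround combination in phasor form.

    At a single temporal frequency, sinusoidal chromatic modulation can be
    represented as a complex phasor. The model assumed here adds a weighted
    copy of the surround phasor to the center phasor; the perceived
    modulation depth is the magnitude of the sum, so it depends on the
    relative phase between center and surround, as measured in the study.
    """
    phase = np.deg2rad(phase_deg)
    total = center_amp + weight * surround_amp * np.exp(1j * phase)
    return np.abs(total)
```

With a negative surround weight, an in-phase surround reduces the perceived depth of the central modulation and an antiphase surround enhances it, reproducing the qualitative phase dependence described above.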

  1. 77 FR 75494 - Qualification of Drivers; Exemption Applications; Vision

    Science.gov (United States)

    2012-12-20

    ... Multiple Regression Analysis of a Poisson Process,'' Journal of American Statistical Association, June 1971... 14 applicants' case histories. The 14 individuals applied for exemptions from the vision requirement... apply the principle to monocular drivers, because data from the Federal Highway Administration's (FHWA...

  2. Effect of Monocular Deprivation on Rabbit Neural Retinal Cell Densities

    OpenAIRE

    Mwachaka, Philip Maseghe; Saidi, Hassan; Odula, Paul Ochieng; Mandela, Pamela Idenya

    2015-01-01

Purpose: To describe the effect of monocular deprivation on densities of neural retinal cells in rabbits. Methods: Thirty rabbits, comprising 18 subject and 12 control animals, were included, and monocular deprivation was achieved through unilateral lid suturing in all subject animals. The rabbits were observed for three weeks. At the end of each week, 6 experimental and 3 control animals were euthanized, their retinas were harvested and processed for light microscopy. Photomicrographs of ...

  3. Self-supervised learning as an enabling technology for future space exploration robots: ISS experiments on monocular distance learning

    Science.gov (United States)

    van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario

    2017-11-01

Although machine learning holds enormous promise for autonomous space robots, it is currently not employed because of the inherently uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment, even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo vision equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First, the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
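The SSL scheme described above, in which past stereo depths serve as trusted ground truth for a monocular estimator, can be sketched as follows; the linear model and the scalar monocular feature are illustrative stand-ins for the appearance cues a real system would learn from.

```python
import numpy as np

def train_ssl_depth(features, stereo_depths):
    """Self-supervised learning in the spirit of the SSL setup above:
    stereo-vision depth estimates gathered while both cameras work serve
    as trusted labels for a monocular predictor. Here the predictor is a
    simple least-squares linear model on a hand-crafted monocular feature
    (an illustrative stand-in, not the experiment's actual learner)."""
    X = np.column_stack([features, np.ones(len(features))])  # add bias term
    w, *_ = np.linalg.lstsq(X, stereo_depths, rcond=None)
    return w

def predict_depth(w, feature):
    """Monocular average-depth estimate, usable after a stereo failure."""
    return w[0] * feature + w[1]
```

The key property of SSL shown here is that no external supervision is needed: the robot generates its own labels while its trusted sensor still works.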

  4. A Behaviour-Based Architecture for Mapless Navigation Using Vision

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Guzel

    2012-04-01

Full Text Available Autonomous robots operating in an unknown and uncertain environment must be able to cope with dynamic changes to that environment. For a mobile robot in a cluttered environment, navigating successfully to a goal while avoiding obstacles is a challenging problem. This paper presents a new behaviour-based architecture design for mapless navigation. The architecture is composed of several modules, and each module generates behaviours. A novel method, inspired by a visual homing strategy, is adapted to a monocular vision-based system to overcome goal-based navigation problems. A neural network-based obstacle avoidance strategy is designed using a 2-D scanning laser. To evaluate the performance of the proposed architecture, the system has been tested using Microsoft Robotics Studio (MRS), a very powerful 3D simulation environment. In addition, real experiments guiding a Pioneer 3-DX mobile robot, equipped with a pan-tilt-zoom camera, in a cluttered environment are presented. The analysis of the results allows us to validate the proposed behaviour-based navigation strategy.

  5. 3D Vision Based Landing Control of a Small Scale Autonomous Helicopter

    Directory of Open Access Journals (Sweden)

    Zhenyu Yu

    2007-03-01

Full Text Available Autonomous landing is a challenging but important task for Unmanned Aerial Vehicles (UAVs) to achieve a high level of autonomy. The fundamental requirements for landing are knowledge of the height above the ground and a properly designed controller to govern the process. This paper presents our research results in the study of landing an autonomous helicopter. The above-the-ground height sensing is based on a 3D vision system. We have designed a simple plane-fitting method for estimating the height over the ground. The method enables vibration-free measurement with the camera rigidly attached to the helicopter, without using a complicated gimbal or active vision mechanism. The estimated height is used by the landing control loop. Considering the ground effect during landing, we have proposed a two-stage landing procedure, with two controllers designed for the two landing stages respectively. The sensing approach and control strategy have been verified in field flight tests and have demonstrated satisfactory performance.
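The plane-fitting height estimate can be sketched as follows; the explicit least-squares plane formulation is an assumption, since the abstract does not give the exact method.

```python
import numpy as np

def height_above_ground(points):
    """Estimate camera height from 3D ground points (one plausible
    plane-fitting scheme; the paper's exact formulation is not given).

    points: (N, 3) array of ground points in the camera frame.
    Fits the plane z = a*x + b*y + c by least squares, then returns the
    perpendicular distance from the camera origin (0, 0, 0) to the plane.
    """
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    # Implicit plane form: a*x + b*y - z + c = 0; distance from the origin.
    return abs(c) / np.sqrt(a * a + b * b + 1.0)
```

Fitting a plane to many 3D points averages out per-point noise, which is what makes the measurement robust to vibration without a gimbal.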

  6. Neuroimaging of amblyopia and binocular vision: a review

    Directory of Open Access Journals (Sweden)

    Olivier eJoly

    2014-08-01

Full Text Available Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Amblyopia, first considered a monocular disorder, is now often seen as a primarily binocular disorder, prompting more and more studies of the binocular deficits in these patients. The neural mechanisms of amblyopia are not completely understood, even though they have been investigated with electrophysiological recordings in animal models and, more recently, with neuroimaging techniques in humans. In this review, we summarise the current knowledge about the brain regions that underlie the visual deficits associated with amblyopia, with a focus on binocular vision using functional magnetic resonance imaging (fMRI). The first studies focused on abnormal responses in the primary and secondary visual areas, whereas recent evidence shows that there are also deficits at higher levels of the visual pathways, within the parieto-occipital and temporal cortices. These higher-level areas are part of the cortical network involved in 3D vision from binocular cues. Therefore, reduced responses in these areas could be related to the impaired binocular vision in amblyopic patients. Promising new binocular treatments might at least partially correct the activation in these areas. Future neuroimaging experiments could help to characterise the brain response changes associated with these treatments and help refine them.

  7. Long-Term Visual Training Increases Visual Acuity and Long-Term Monocular Deprivation Promotes Ocular Dominance Plasticity in Adult Standard Cage-Raised Mice.

    Science.gov (United States)

    Hosang, Leon; Yusifov, Rashad; Löwel, Siegrid

    2018-01-01

For routine behavioral tasks, mice predominantly rely on olfactory cues and tactile information. In contrast, their visual capabilities appear rather restricted, raising the question of whether they can improve if vision becomes more behaviorally relevant. We therefore performed long-term training using the visual water task (VWT): adult standard cage (SC)-raised mice were trained to swim toward a rewarded grating stimulus, so that using visual information allowed them to avoid excessive swimming toward nonrewarded stimuli. Indeed, and in contrast to old mice raised in a generally enriched environment (Greifzu et al., 2016), long-term VWT training increased visual acuity (VA) on average by more than 30% to 0.82 cycles per degree (cyc/deg). In an individual animal, VA even increased to 1.49 cyc/deg, i.e., beyond the range of rat VAs. Since visual experience enhances the spatial frequency threshold of the optomotor (OPT) reflex of the open eye after monocular deprivation (MD), we also quantified monocular vision after VWT training. Monocular VA did not increase reliably, and eye reopening did not initiate a decline to pre-MD values as observed by optomotry; VA values rather increased with continued VWT training. Thus, optomotry and the VWT measure different parameters of mouse spatial vision. Finally, we tested whether long-term MD induced ocular dominance (OD) plasticity in the visual cortex of adult [postnatal day (P)162-P182] SC-raised mice. This was indeed the case: 40-50 days of MD induced OD shifts toward the open eye in both VWT-trained and, surprisingly, also in age-matched mice without VWT training. These data indicate that (1) long-term VWT training increases adult mouse VA, and (2) long-term MD induces OD shifts also in adult SC-raised mice.

  8. Restoration of binocular vision in amblyopia.

    Science.gov (United States)

    Hess, R F; Mansouri, B; Thompson, B

    2011-09-01

To develop a treatment for amblyopia based on re-establishing binocular vision. A novel procedure is outlined for measuring and reducing the extent to which the fixing eye suppresses the fellow amblyopic eye in adults with amblyopia. We hypothesize that suppression renders a structurally binocular system functionally monocular. We demonstrate that strabismic amblyopes can combine information normally between their eyes under viewing conditions where suppression is reduced by presenting stimuli of different contrast to each eye. Furthermore, we show that prolonged periods of binocular combination lead to a strengthening of binocular vision in strabismic amblyopes and eventual combination of binocular information under natural viewing conditions (stimuli of the same contrast in each eye). Concomitant improvement in monocular acuity of the amblyopic eye occurs with this reduction in suppression and strengthening of binocular fusion. Additionally, stereoscopic function was established in the majority of patients tested. We have implemented this approach on a head-mounted device as well as on a handheld iPod. This provides the basis for a new treatment of amblyopia, one that is purely binocular and aimed at reducing suppression as a first step.

  9. Improvements in clinical and functional vision and perceived visual disability after first and second eye cataract surgery

    OpenAIRE

    Elliott, D.; Patla, A.; Bullimore, M.

    1997-01-01

AIMS: To determine the improvements in clinical and functional vision and perceived visual disability after first and second eye cataract surgery.
METHODS: Clinical vision (monocular and binocular high and low contrast visual acuity, contrast sensitivity, and disability glare), functional vision (face identity and expression recognition, reading speed, word acuity, and mobility orientation), and perceived visual disability (Activities of Daily Vision Scale) were measured in 25 subjects before a...

  10. 3DSEM: A 3D microscopy dataset

    Directory of Open Access Journals (Sweden)

    Ahmad P. Tafti

    2016-03-01

Full Text Available The Scanning Electron Microscope (SEM), as a 2D imaging instrument, has been widely used in many scientific disciplines including the biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However, SEM micrographs remain 2D images. To effectively measure and visualize surface properties, the 3D shape model must be restored from the 2D SEM images. Having 3D surfaces provides the anatomic shape of micro-samples, allowing quantitative measurements and informative visualization of the specimens being investigated. 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. Keywords: 3D microscopy dataset, 3D microscopy vision, 3D SEM surface reconstruction, Scanning Electron Microscope (SEM)

  11. A method of real-time detection for distant moving obstacles by monocular vision

    Science.gov (United States)

    Jia, Bao-zhi; Zhu, Ming

    2013-12-01

In this paper, we propose an approach for the detection of distant moving obstacles, such as cars and bicycles, by a monocular camera cooperating with ultrasonic sensors under low-cost conditions. We aim to detect distant obstacles that move toward our autonomous navigation car in order to raise an alarm and keep away from them. A frame-differencing method is applied to find obstacles after compensating for the camera's ego-motion. Meanwhile, each obstacle is separated from the others in an independent area and given a confidence level indicating whether it is coming closer. Results on an open dataset and on our own autonomous navigation car have shown that the method is effective for real-time detection of distant moving obstacles.
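The frame-differencing step after ego-motion compensation can be sketched as follows; reducing the compensation to a known integer pixel shift is a simplification of the image-based compensation the paper implies.

```python
import numpy as np

def moving_obstacle_mask(prev_frame, cur_frame, ego_shift, thresh=25):
    """Frame differencing after ego-motion compensation (sketch; the
    paper's compensation is image-based, here reduced to a known shift).

    prev_frame, cur_frame: 2-D grayscale arrays.
    ego_shift: (dy, dx) integer pixel motion of the camera between frames.
    Shifting the previous frame by the ego-motion cancels background
    motion, so large absolute differences flag independently moving
    obstacles; connected regions of the mask would then be scored."""
    compensated = np.roll(prev_frame, ego_shift, axis=(0, 1))
    diff = np.abs(cur_frame.astype(np.int16) - compensated.astype(np.int16))
    return diff > thresh
```

In a real system the shift would come from homography-based ego-motion estimation, and the binary mask would be grouped into per-obstacle regions with confidence levels, as described above.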

  12. Correlations of memory and learning with vision in aged patients before and after a cataract operation.

    Science.gov (United States)

    Fagerström, R

    1992-12-01

The connection of memory and learning with vision was investigated in 100 cataract operation patients aged 71 to 76 years, 25 men and 75 women. The cataract operation restored sufficient visual acuity for reading (minimum E-test value 0.40) to 79% of the subjects. Short-term memory was studied with series of numbers, homogenic and heterogenic inhibition, and long sentences. Learning was tested with paired-associate learning and word learning. Psychological symptoms were measured on the Brief Psychiatric Rating Scale and personality on the Mini-Mult MMPI. Memory and learning improved significantly when vision was normalized after the cataract operation. Poor memory and learning scores correlated with monocular vision before the operation and, after surgery, with visual field defects due to glaucoma exceeding 20%. Monocular vision and visual field defects caused a continuous sense of abnormality, which impaired the older subjects' ability to concentrate on memory and learning tasks. Cerebrovascular disturbances, incipient dementia, and moderate psychological symptoms obstructed memory and learning on both test rounds. Depression was the psychological symptom contributing most to poor memory and learning scores after the cataract operation. The memory and learning defects mainly reflected disturbances in memorizing.

  13. Ergonomic evaluation of ubiquitous computing with monocular head-mounted display

    Science.gov (United States)

    Kawai, Takashi; Häkkinen, Jukka; Yamazoe, Takashi; Saito, Hiroko; Kishi, Shinsuke; Morikawa, Hiroyuki; Mustonen, Terhi; Kaistinen, Jyrki; Nyman, Göte

    2010-01-01

In this paper, the authors conducted an experiment to evaluate the UX in an actual outdoor environment, assuming casual use of a monocular HMD to view video content during short walks. Eight subjects were asked to view news videos on a monocular HMD while walking through a large shopping mall. Two types of monocular HMDs and a hand-held media player were used, and the psycho-physiological responses of the subjects were measured before, during, and after the experiment. The VSQ, SSQ and NASA-TLX were used to assess the subjective workloads and symptoms. The objective indexes were heart rate, stride, and a video recording of the environment in front of the subject's face. The results revealed differences between the two types of monocular HMDs as well as between the monocular HMDs and the other conditions. Differences between the types of monocular HMDs may have been due to screen vibration during walking, which was considered a major factor in the UX in terms of workload. Future experiments, to be conducted in other locations with higher cognitive loads, will study performance and situation awareness with respect to the actual and media environments.

  14. Recovery of neurofilament following early monocular deprivation

    Directory of Open Access Journals (Sweden)

    Timothy P O'Leary

    2012-04-01

Full Text Available A brief period of monocular deprivation in early postnatal life can alter the structure of neurons within deprived-eye-receiving layers of the dorsal lateral geniculate nucleus. The modification of structure is accompanied by a marked reduction in labeling for neurofilament, a protein that composes the stable cytoskeleton and that supports neuron structure. This study examined the extent of neurofilament recovery in monocularly deprived cats that either had their deprived eye opened (binocular recovery) or had the deprivation reversed to the fellow eye (reverse occlusion). The degree to which recovery was dependent on visually-driven activity was examined by placing monocularly deprived animals in complete darkness (dark rearing). The loss of neurofilament and the reduction of soma size caused by monocular deprivation were both ameliorated equally following either binocular recovery or reverse occlusion for 8 days. Though monocularly deprived animals placed in complete darkness showed recovery of soma size, there was a generalized loss of neurofilament labeling that extended to originally non-deprived layers. Overall, these results indicate that recovery of soma size is achieved by removal of the competitive disadvantage of the deprived eye, and occurred even in the absence of visually-driven activity. Recovery of neurofilament occurred when the competitive disadvantage of the deprived eye was removed but, unlike the recovery of soma size, was dependent upon visually-driven activity. The role of neurofilament in providing stable neural structure raises the intriguing possibility that dark rearing, which reduced overall neurofilament levels, could be used to reset the deprived visual system so as to make it more amenable to treatment by experiential manipulations.

  15. An Innovative 3D Ultrasonic Actuator with Multidegree of Freedom for Machine Vision and Robot Guidance Industrial Applications Using a Single Vibration Ring Transducer

    Directory of Open Access Journals (Sweden)

    M. Shafik

    2013-07-01

Full Text Available This paper presents an innovative 3D piezoelectric ultrasonic actuator using a single flexural vibration ring transducer, for machine vision and robot guidance industrial applications. The proposed actuator principally aims to overcome the limited spotlight focus angle of digital visual data capture transducers, such as digital cameras, and to enhance the machine vision system's ability to perceive and move in 3D. The actuator design, structure, working principles and finite element analysis are discussed in this paper. A prototype of the actuator was fabricated. Experimental tests and measurements showed the ability of the developed prototype to provide 3D motion with multiple degrees of freedom, with a typical speed of 35 revolutions per minute, a resolution of less than 5 μm and a maximum load of 3.5 newtons. These initial characteristics illustrate the potential of the developed 3D micro actuator to address the spotlight focus angle issue of digital visual data capture transducers, and the possible improvement that such technology could bring to machine vision and robot guidance industrial applications.

  16. Gain-scheduling control of a monocular vision-based human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-08-01

Full Text Available , R. and Zisserman, A. (2004). Multiple View Geometry in Computer Vision. Cambridge University Press, 2nd edition. Hutchinson, S., Hager, G., and Corke, P. (1996). A tutorial on visual servo control. IEEE Trans. on Robotics and Automation, 12... environment, in a passive manner, at relatively high speeds and low cost. The control of mobile robots using vision in the feedback loop falls into the well-studied field of visual servo control. Two primary approaches are used: image-based visual...

  17. P3-4: Binocular Visual Acuity in Exotropia

    Directory of Open Access Journals (Sweden)

    Heekyung Yang

    2012-10-01

Full Text Available Purpose: To investigate binocular interaction of visual acuity in patients with intermittent exotropia and its relationship with accommodative responses during binocular vision. Methods: Sixty-seven patients with intermittent exotropia aged 8 years or older were included. Binocular visual acuity (BVA) and monocular visual acuity (MVA) were measured in sequence. Accommodative responses of both eyes were measured using the WAM-5500 autorefractor/keratometer (GrandSeiko, Fukuyama, Japan) during binocular and monocular viewing conditions at 6 m. Accommodative responses during binocular vision were calculated using the difference between the refractive errors of binocular and monocular vision. Main outcome measures: Binocular interactions of visual acuity were categorized as binocular summation, equivalency, or inhibition. The prevalence of the 3 patterns of binocular interaction was investigated. Accommodative responses were correlated with differences between BVA and better MVA. Results: Most patients (41 patients, 61.2%) showed binocular equivalency. Binocular inhibition and summation were noted in 6 (9.0%) and 20 (29.9%) patients, respectively. Linear regression analysis revealed a significant correlation between binocular interaction and accommodative responses during binocular vision (p < .001). Accommodative responses significantly correlated with the angle of exodeviation at distance (p = .002). Conclusions: In patients with intermittent exotropia, binocular inhibition is associated with increased accommodation and a larger angle of exodeviation, suggesting that accommodative convergence is a mechanism that maintains ocular alignment. Thus, BVA inhibition may be attributed to diminishing fusional control in patients with intermittent exotropia.
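The three-category classification of binocular interaction (summation, equivalency, inhibition) can be sketched as a simple decision rule on logMAR acuities; the 0.1 logMAR (one-line) criterion below is an assumed threshold, not taken from the abstract.

```python
def binocular_interaction(bva_logmar, better_mva_logmar, criterion=0.1):
    """Classify binocular interaction from logMAR acuities into the three
    categories used in the study. The one-line (0.1 logMAR) criterion is
    an illustrative assumption. Lower logMAR means better acuity."""
    diff = better_mva_logmar - bva_logmar  # positive: binocular is better
    if diff >= criterion:
        return "summation"
    if diff <= -criterion:
        return "inhibition"
    return "equivalency"
```

For example, a patient whose binocular acuity is a full line worse than the better monocular acuity would be classed as showing binocular inhibition.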

  18. Applications of 2D to 3D conversion for educational purposes

    Science.gov (United States)

    Koido, Yoshihisa; Morikawa, Hiroyuki; Shiraishi, Saki; Takeuchi, Soya; Maruyama, Wataru; Nakagori, Toshio; Hirakata, Masataka; Shinkai, Hirohisa; Kawai, Takashi

    2013-03-01

There are three main approaches to creating stereoscopic S3D content: stereo filming using two cameras, stereo rendering of 3D computer graphics, and 2D to S3D conversion by adding binocular information to 2D material images. Although manual "off-line" conversion can control the amount of parallax flexibly, 2D material images are converted according to monocular information in most cases, and the flexibility of 2D to S3D conversion has not been fully exploited. If depth is expressed flexibly, the comprehension of and interest in converted S3D content can be expected to differ from those evoked by 2D. Therefore, in this study we created new S3D content for education by applying 2D to S3D conversion. For surgical education, we created S3D surgical operation content, under a surgeon's supervision, using a partial 2D to S3D conversion technique expected to concentrate viewers' attention on significant areas. For art education, we converted Ukiyo-e prints, traditional Japanese artworks made from woodcuts. The conversion of this content, which has little depth information, into S3D is expected to produce different cognitive processes from those evoked by 2D content, e.g., the excitation of interest and the understanding of spatial information. In addition, the effects of the representation of these contents were investigated.

  19. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    Science.gov (United States)

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts of the developed 3D parallel mechanism robot. In the mechanical system, the 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first to realize 3D motion of the robot's end-effector in the X-Y-Z coordinate system. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators, realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the positions of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors exist between the actual and the calculated 3D position of the end-effector. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCD cameras, serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to
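The D-H-notation kinematics mentioned above rest on per-link homogeneous transforms, which can be sketched as follows (classic D-H convention; the robot's actual link parameters are not given in the abstract).

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Single-link homogeneous transform in the classic Denavit-Hartenberg
    convention: rotate theta about z, translate d along z, translate a
    along x, then rotate alpha about x. Chaining these 4x4 matrices link
    by link gives the forward kinematics of the end-effector."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.,       sa,       ca,      d],
        [0.,       0.,       0.,     1.],
    ])
```

For a serial chain, the end-effector pose is simply the matrix product of the per-link transforms, e.g. `T = dh_transform(...) @ dh_transform(...) @ dh_transform(...)`.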

  20. Generalized Hough transform based time invariant action recognition with 3D pose information

    Science.gov (United States)

    Muench, David; Huebner, Wolfgang; Arens, Michael

    2014-10-01

Human action recognition has emerged as an important field in the computer vision community due to its large number of applications, such as automatic video surveillance, content-based video search and human-robot interaction. In order to cope with the challenges that this large variety of applications presents, recent research has focused more on developing classifiers able to detect several actions in more natural and unconstrained video sequences. The invariance-discrimination tradeoff in action recognition has been addressed by utilizing a Generalized Hough Transform. As a basis for action representation we transform 3D poses into a robust feature space, referred to as pose descriptors. For each action class a one-dimensional temporal voting space is constructed. Votes are generated by associating pose descriptors with their position in time relative to the end of an action sequence. Training data consist of manually segmented action sequences. In the detection phase valid human 3D poses are assumed as input, e.g. originating from 3D sensors or monocular pose reconstruction methods. The human 3D poses are normalized to gain view-independence and transformed into (i) relative limb-angle space, to ensure independence of non-adjacent joints, or (ii) geometric features. In (i) an action descriptor consists of the relative angles between limbs and their temporal derivatives. In (ii) the action descriptor consists of different geometric features. In order to circumvent the problem of time-warping we propose to use a codebook of prototypical 3D poses generated from sample sequences of 3D motion capture data. This idea is in accordance with the concept of equivalence classes in action space. Results of the codebook method are presented using the Kinect sensor and the CMU Motion Capture Database.
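The one-dimensional temporal voting scheme can be sketched as follows; the nearest-prototype matching and the uniform single-vote weighting are illustrative simplifications of the method described above.

```python
import numpy as np

def temporal_voting(frame_descriptors, codebook, offsets, n_frames):
    """1-D temporal Hough voting for one action class (a sketch of the
    scheme above, with simplified matching details).

    frame_descriptors: sequence of pose-descriptor vectors, one per frame.
    codebook: (K, D) prototypical pose descriptors for this action class.
    offsets: length-K array; offsets[k] is the learned position of
             prototype k relative to the END of the action.
    Each frame matches its nearest prototype and casts a vote for the
    frame at which the action would end; peaks in the returned voting
    space are action detections."""
    votes = np.zeros(n_frames)
    for t, d in enumerate(frame_descriptors):
        k = np.argmin(np.linalg.norm(codebook - d, axis=1))  # nearest prototype
        end = t + int(offsets[k])  # predicted end-of-action frame
        if 0 <= end < n_frames:
            votes[end] += 1.0
    return votes
```

Because every frame votes for the action's end independently, moderate local time-warping only spreads the peak slightly rather than destroying it, which is the motivation for the codebook approach stated above.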

  1. Effects of lens distortion calibration patterns on the accuracy of monocular 3D measurements

    CSIR Research Space (South Africa)

    De Villiers, J

    2011-11-01

Full Text Available choice (e.g. the open computer vision (OpenCV) library [4], Caltech Camera Calibration Toolbox [5]) as the intersections can be found extremely accurately by finding the saddle point of the intensity profile about the intersection as described... to capture and process data in order to calibrate it. A. Equipment specification A 1600-by-1200 Prosilica GE1660 Gigabit Ethernet machine vision camera was mated with a Schneider Cinegon 4.8mm/f1.4 lens for use in this work. This lens has an 82...

  2. Recovering stereo vision by squashing virtual bugs in a virtual reality environment.

    Science.gov (United States)

    Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M

    2016-06-19

Stereopsis is the rich impression of three-dimensionality, based on binocular disparity-the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task-a 'bug squashing' game-in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Author(s).
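The relative weighting of monocular versus stereoscopic cues tracked during training can be estimated from cue-conflict trials under the standard linear cue-combination model; the least-squares fit below is a generic sketch, not the authors' exact analysis.

```python
import numpy as np

def estimate_stereo_weight(stereo_slants, texture_slants, perceived_slants):
    """Recover the relative stereo-cue weight w from cue-conflict trials,
    assuming the standard linear cue-combination model
        perceived = w * stereo + (1 - w) * texture
    (the general approach used in such studies, not this paper's fit).
    Rearranged: perceived - texture = w * (stereo - texture), so w is the
    least-squares slope through the origin."""
    x = np.asarray(stereo_slants) - np.asarray(texture_slants)
    y = np.asarray(perceived_slants) - np.asarray(texture_slants)
    return float(np.dot(x, y) / np.dot(x, x))
```

An increase of the fitted w across training sessions would correspond to the greater reliance on stereoscopic cues reported above.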

  3. Recovering stereo vision by squashing virtual bugs in a virtual reality environment

    Science.gov (United States)

    Vedamurthy, Indu; Knill, David C.; Huang, Samuel J.; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne

    2016-01-01

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity—the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task—a ‘bug squashing’ game—in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269607

  4. A 3D vision approach for correction of patient pose in radiotherapy

    International Nuclear Information System (INIS)

    Chyou, T.; Meyer, J.

    2011-01-01

    Full text: To develop an approach to quantitatively determine patient surface contours as part of an augmented reality system for patient position and posture correction in radiotherapy. The approach is based on a 3D vision method referred to as active stereo with structured light. When a 3D object is viewed with a standard digital camera, the depth information along one dimension, the axis parallel to the line of sight, is lost. With the aid of a projected structured light codification pattern, 3D coordinates of the scene can be recovered from a 2D image. Two codification strategies were examined. The spatial encoding method requires a single static pattern, thus enabling dynamic scenes to be captured. Temporal encoding methods require a set of patterns to be successively projected onto the object (see Fig. 1); the encoding for each pixel is only complete when the entire series of patterns has been projected. Both methods are investigated in terms of their tradeoffs with regard to convenience, accuracy and acquisition time. The temporal method has shown high sensitivity to surface features on a human phantom even under typical office light conditions. The preliminary accuracy was in the order of millimeters at a distance of 1 m. Evaluation of the spatial encoding approach is ongoing. The most suitable approach will be integrated into the existing augmented reality system to provide a virtual surface contour of the desired patient position for visual guidance, and quantitative information on offsets between the measured and desired position.
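A minimal sketch of decoding for the temporal-encoding method, assuming Gray-code stripe patterns (a common choice for this technique; the abstract does not specify the codification): each successive pattern contributes one bit per pixel, and the accumulated code identifies the projector stripe used for triangulation.

```python
def gray_to_index(bits):
    """Convert a per-pixel Gray-code bit sequence (most significant pattern
    first), captured over successive projected patterns, into the projector
    stripe index. Gray -> binary: b_i = g_i XOR b_{i-1}."""
    idx = 0
    prev = 0
    for g in bits:
        prev ^= g              # running XOR recovers the binary bit
        idx = (idx << 1) | prev
    return idx

# 4 patterns -> 16 stripes; Gray code 0110 is binary 0100, i.e. stripe 4
print(gray_to_index([0, 1, 1, 0]))  # → 4
```

Once each camera pixel carries a stripe index, the intersection of the camera ray with the known projector stripe plane yields the 3D point.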

  5. Dopamine antagonists and brief vision distinguish lens-induced- and form-deprivation-induced myopia.

    Science.gov (United States)

    Nickla, Debora L; Totonelly, Kristen

    2011-11-01

    In eyes wearing negative lenses, the D2 dopamine antagonist spiperone was only partly effective in preventing the ameliorative effects of brief periods of vision (Nickla et al., 2010), in contrast to reports from studies using form-deprivation. The present study was done to directly compare the effects of spiperone, and the D1 antagonist SCH-23390, on the two different myopiagenic paradigms. Twelve-day-old chickens wore monocular diffusers (form-deprivation) or -10 D lenses attached to the feathers with matching rings of Velcro. Each day for 4 days, 10 μl intravitreal injections of the dopamine D2/D4 antagonist spiperone (5 nmoles) or the D1 antagonist SCH-23390 were given under isoflurane anesthesia, and the diffusers (n = 16; n = 5, respectively) or lenses (n = 20; n = 6) were removed for 2 h immediately after. Saline injections prior to vision were given as controls (form-deprivation: n = 11; lenses: n = 10). Two other saline-injected groups wore the lenses (n = 12) or diffusers (n = 4) continuously. Axial dimensions were measured by high-frequency A-scan ultrasonography at the start, and on the last day immediately prior to, and 3 h after, the injection. Refractive errors were measured at the end of the experiment using a Hartinger refractometer. In form-deprived eyes, spiperone, but not SCH-23390, prevented the ocular growth inhibition normally effected by the brief periods of vision (change in vitreous chamber depth, spiperone vs saline: 322 vs 211 μm; p = 0.01). By contrast, neither had any effect on negative lens-wearing eyes given similar unrestricted vision (210 and 234 μm respectively, vs 264 μm). The increased elongation in the spiperone-injected form-deprived eyes did not, however, result in a myopic shift, probably due to the inhibitory effect of the drug on anterior chamber growth (drug vs saline: 96 vs 160 μm; p < …). The effects of brief periods of unrestricted vision thus differ for form-deprivation versus negative lens-wear, which may imply different growth …

  6. Short-Term Monocular Deprivation Enhances Physiological Pupillary Oscillations

    Directory of Open Access Journals (Sweden)

    Paola Binda

    2017-01-01

    Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark) and minimal task requirements (minimizing body and gaze movements), slow pupil oscillations, "hippus," spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry). This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure) provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.
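As an illustration of how a hippus amplitude could be quantified, here is a crude sketch: the RMS amplitude of the Fourier components of the pupil trace within a low-frequency band. Both the estimator and the band edges are assumptions for illustration, not taken from the paper.

```python
import math

def hippus_amplitude(pupil_trace, fs, lo=0.05, hi=0.5):
    """RMS amplitude of slow pupil oscillations: sum the DFT power of the
    mean-removed trace over a low-frequency band (band edges are a common
    choice for hippus, not the paper's). Naive O(n^2) DFT for clarity."""
    n = len(pupil_trace)
    mean = sum(pupil_trace) / n
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f <= hi:
            re = sum((pupil_trace[t] - mean) * math.cos(2*math.pi*k*t/n)
                     for t in range(n))
            im = sum((pupil_trace[t] - mean) * math.sin(2*math.pi*k*t/n)
                     for t in range(n))
            # one-sided power, normalised so a unit sinusoid gives 0.5
            power += (re*re + im*im) * 2 / (n*n)
    return math.sqrt(power)

# 20 s of a pure 0.2 Hz oscillation sampled at 10 Hz: RMS = 1/sqrt(2)
trace = [math.sin(2*math.pi*0.2*t/10) for t in range(200)]
print(round(hippus_amplitude(trace, fs=10), 3))  # → 0.707
```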

  7. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    Science.gov (United States)

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
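The adaptive execution module's dispatch logic might look like the following toy sketch. The class, thresholds, and quality measures are invented for illustration and are not from the paper; the idea is only that the cheap optical-flow front end runs while tracking is easy, with fallback to full visual-inertial odometry when quality degrades.

```python
class AdaptiveOdometrySelector:
    """Toy sketch of an adaptive execution policy: pick the fast
    optical-flow odometry path when tracking quality is good, otherwise
    fall back to full visual-inertial odometry. All thresholds invented."""

    def __init__(self, min_tracked=80, max_flow_px=40.0):
        self.min_tracked = min_tracked    # minimum tracked feature count
        self.max_flow_px = max_flow_px    # maximum tolerable median flow

    def choose(self, n_tracked_features, median_flow_px):
        if (n_tracked_features < self.min_tracked
                or median_flow_px > self.max_flow_px):
            return "visual_inertial_odometry"   # hard frame: full pipeline
        return "optical_flow_fast_odometry"     # easy frame: fast path

sel = AdaptiveOdometrySelector()
print(sel.choose(150, 5.0))   # easy scene  → optical_flow_fast_odometry
print(sel.choose(40, 5.0))    # few features → visual_inertial_odometry
```

The reported 7.8-18.8% tracking-time reductions correspond to how aggressively such a policy routes frames to the fast path.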

  8. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    Directory of Open Access Journals (Sweden)

    Jin-Chun Piao

    2017-11-01

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.

  9. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    Science.gov (United States)

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-01-01

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143

  10. Aerial vehicles collision avoidance using monocular vision

    Science.gov (United States)

    Balashov, Oleg; Muraviev, Vadim; Strotov, Valery

    2016-10-01

    In this paper, an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is a multi-step approach based on preliminary detection, region-of-interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle's position, the system of equations relating object coordinates in space to the observed image is solved. The solution gives the current position and speed of the detected object in space, from which distance and time to collision can be estimated. Experimental research on real video sequences and modeled data was performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.
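Given two range estimates of the detected object, time to collision follows from the closing speed under a constant-velocity assumption. A minimal sketch (the function and the numbers are invented, not the paper's equations):

```python
def time_to_collision(r1, r2, dt):
    """Estimate time to collision from two range measurements (metres)
    taken dt seconds apart, assuming constant closing speed.
    Returns None when the object is not closing."""
    closing_speed = (r1 - r2) / dt   # positive when the range is shrinking
    if closing_speed <= 0:
        return None
    return r2 / closing_speed

# target closes from 2000 m to 1900 m in 1 s → 100 m/s closure
print(time_to_collision(2000.0, 1900.0, 1.0))  # → 19.0 (seconds)
```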

  11. Stereo Vision and 3D Reconstruction on a Distributed Memory System

    NARCIS (Netherlands)

    Kuijpers, N.H.L.; Paar, G.; Lukkien, J.J.

    1996-01-01

    An important research topic in image processing is stereo vision. The objective is to compute a 3-dimensional representation of some scenery from two 2-dimensional digital images. Constructing a 3-dimensional representation involves finding pairs of pixels from the two images which correspond to the
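The correspondence search at the heart of stereo vision can be sketched with 1-D block matching along a scanline: for each left-image pixel, find the horizontal shift (disparity) whose right-image window best matches under a sum-of-squared-differences score. This is a toy illustration of the general idea, not the distributed algorithm of the paper.

```python
def best_disparity(left_row, right_row, x, window=2, max_disp=10):
    """Minimal 1-D block matching: for pixel x on the left scanline, return
    the disparity d minimising the SSD between the left window around x and
    the right window around x - d (rectified images assumed)."""
    def ssd(d):
        return sum((left_row[x+k] - right_row[x-d+k]) ** 2
                   for k in range(-window, window + 1))
    candidates = [d for d in range(max_disp + 1)
                  if x - d - window >= 0 and x + window < len(left_row)]
    return min(candidates, key=ssd)

left  = [0, 0, 0, 9, 9, 9, 0, 0, 0, 0, 0, 0]
right = [0, 9, 9, 9, 0, 0, 0, 0, 0, 0, 0, 0]   # same edge shifted by 2 px
print(best_disparity(left, right, x=4))  # → 2
```

Depth then follows from disparity via the calibrated baseline and focal length (depth = f * baseline / disparity).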

  12. Effect of Monocular Deprivation on Rabbit Neural Retinal Cell Densities.

    Science.gov (United States)

    Mwachaka, Philip Maseghe; Saidi, Hassan; Odula, Paul Ochieng; Mandela, Pamela Idenya

    2015-01-01

    To describe the effect of monocular deprivation on densities of neural retinal cells in rabbits. Thirty rabbits, comprising 18 subject and 12 control animals, were included, and monocular deprivation was achieved through unilateral lid suturing in all subject animals. The rabbits were observed for three weeks. At the end of each week, 6 experimental and 3 control animals were euthanized, and their retinas were harvested and processed for light microscopy. Photomicrographs of the retina were taken and imported into FIJI software for analysis. Neural retinal cell densities of deprived eyes were reduced with increasing duration of deprivation. The percentage reductions were 60.9% (P < 0.001), 41.6% (P = 0.003), and 18.9% (P = 0.326) for ganglion, inner nuclear, and outer nuclear cells, respectively. In the non-deprived eyes, by contrast, cell densities increased by 116% (P < 0.001), 52% (P < 0.001) and 59.6% (P < 0.001) in ganglion, inner nuclear, and outer nuclear cells, respectively. In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye, along with reduced cell densities in the deprived eye.

  13. A low cost PSD-based monocular motion capture system

    Science.gov (United States)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor for use with commercial video game systems such as Microsoft's XBOX and Sony's Playstation II. The system is compact, low-cost, and requires only a one-time calibration at the factory. The system includes a PSD (position sensitive detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. The micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments was performed to evaluate the performance of our prototype system. The experimental results show that the proposed system offers the compact size, low cost, easy installation, and high frame rates needed for high-speed motion tracking in games.
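A sketch of the described ranging idea: the marker's range is recovered from an inverse-square intensity falloff, and its direction from the 2D PSD position via a pinhole model. All constants, names, and the calibration convention here are invented for illustration; the paper's actual calibration is factory-specific.

```python
import math

def marker_position(u, v, intensity, f=1.0, i_ref=1.0):
    """Back out a marker's 3-D position from its PSD image coordinates
    (u, v) and received IR intensity. Assumes inverse-square falloff
    calibrated so that i_ref is the intensity at unit range, and a pinhole
    camera with focal length f (hypothetical constants)."""
    r = math.sqrt(i_ref / intensity)      # range from I ∝ 1/r²
    d = math.sqrt(u*u + v*v + f*f)        # length of the pixel's ray vector
    s = r / d                             # scale the ray to the range
    return (u * s, v * s, f * s)

# on-axis marker returning a quarter of the reference intensity → range 2
x, y, z = marker_position(0.0, 0.0, 0.25)
print(round(z, 3))  # → 2.0
```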

  14. Role of high-order aberrations in senescent changes in spatial vision

    Energy Technology Data Exchange (ETDEWEB)

    Elliot, S; Choi, S S; Doble, N; Hardy, J L; Evans, J W; Werner, J S

    2009-01-06

    The contributions of optical and neural factors to age-related losses in spatial vision are not fully understood. We used closed-loop adaptive optics to test the visual benefit of correcting monochromatic high-order aberrations (HOAs) on spatial vision for observers ranging in age from 18 to 81 years. Contrast sensitivity was measured monocularly using a two-alternative forced choice (2AFC) procedure for sinusoidal gratings over 6 mm and 3 mm pupil diameters. Visual acuity was measured using a spatial 4AFC procedure. Over a 6 mm pupil, young observers showed a large benefit of AO at high spatial frequencies, whereas older observers exhibited the greatest benefit at middle spatial frequencies, plus a significantly larger increase in visual acuity. When age-related miosis is controlled, young and old observers exhibited a similar benefit of AO for spatial vision. An increase in HOAs therefore cannot account for the complete senescent decline in spatial vision. These results may indicate a larger role of additional optical factors when the impact of HOAs is removed, but also lend support for the importance of neural factors in age-related changes in spatial vision.

  15. Unexpected course of treatment in a thirteen-year-old boy with unilateral vision disorder

    International Nuclear Information System (INIS)

    Dorobisz, A.T.; Rucinski, A.; Ujma-Czapska, B.; Grotowska, M.; Zaleska-Dorobisz, U.

    2005-01-01

    Monocular vision disorder is a characteristic symptom of brain ischemia of the second to fourth stage of Milikan's scale. Partial occipital epilepsy, in which optic symptoms are usually bilateral and convulsions or automatisms appear later, is seldom the cause. In this paper the authors present an unusual case of a vision disorder in a 13-year-old boy in whom kinking of the internal carotid artery (ICA) was identified, in the first stage of diagnostic examination and treatment, as the presumed underlying cause of the disorder; no other causes were confirmed. Surgical angioplasty of the carotid artery was performed and proper circulation was restored. Despite the treatment, the symptoms persisted. Later in the course of the disease, after repeated diagnostic imaging, childhood epilepsy with occipital spikes was recognized as the cause of the monocular anopsia. All symptoms disappeared after pharmacological treatment. The surgeon must prove that the neurological symptoms are caused by kinking of the internal carotid artery and that no other etiology exists. (author)

  16. Inverse problems in vision and 3D tomography

    CERN Document Server

    Mohamad-Djafari, Ali

    2013-01-01

    The concept of an inverse problem is a familiar one to most scientists and engineers, particularly in the field of signal and image processing, imaging systems (medical, geophysical, industrial non-destructive testing, etc.) and computer vision. In imaging systems, the aim is not just to estimate unobserved images, but also their geometric characteristics from observed quantities that are linked to these unobserved quantities through the forward problem. This book focuses on imagery and vision problems that can be clearly written in terms of an inverse problem where an estimate for the image a

  17. Acquisition And Processing Of Range Data Using A Laser Scanner-Based 3-D Vision System

    Science.gov (United States)

    Moring, I.; Ailisto, H.; Heikkinen, T.; Kilpela, A.; Myllyla, R.; Pietikainen, M.

    1988-02-01

    In our paper we describe a 3-D vision system designed and constructed at the Technical Research Centre of Finland in co-operation with the University of Oulu. The main application fields for which our 3-D vision system was developed are geometric measurement of large objects and manipulator and robot control tasks. It also appears promising for automatic vehicle guidance applications. The system has now been operative for about one year and its performance has been extensively tested. Recently we started a field test phase to evaluate its performance in real industrial tasks and environments. The system consists of three main units: the range finder, the scanner and the computer. The range finder is based on direct measurement of the time-of-flight of a laser pulse. The time interval between the transmitted and the received light pulses is converted into a continuous analog voltage, which is amplified, filtered and offset-corrected to produce the range information. The scanner consists of two mirrors driven by moving-iron galvanometers and controlled by servo amplifiers. The computer unit controls the scanner, transforms the measured coordinates into a Cartesian coordinate system, and serves as a user interface and postprocessing environment. Methods for segmenting the range image into a higher-level description have been developed. The description consists of planar and curved surfaces and their features and relations. Parametric surface representations based on the Ferguson surface patch are also studied.
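The range-finder and coordinate-transformation steps can be sketched as follows, under an idealised scanner geometry in which the two mirror angles act as azimuth and elevation about a common origin (a real two-mirror scanner needs per-mirror offsets from calibration):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_range(t_seconds):
    """One-way range from a round-trip pulse time-of-flight."""
    return C * t_seconds / 2.0

def to_cartesian(r, az, el):
    """Range plus the two galvanometer mirror angles (azimuth, elevation,
    in radians) to Cartesian coordinates; idealised geometry."""
    return (r * math.cos(el) * math.cos(az),
            r * math.cos(el) * math.sin(az),
            r * math.sin(el))

r = tof_range(66.7e-9)     # ~66.7 ns round trip
print(round(r, 2))         # → 10.0 (metres)
print(to_cartesian(10.0, 0.0, 0.0))  # on-axis point: (10.0, 0.0, 0.0)
```

The nanosecond-scale intervals involved are why the paper converts the time interval to an analog voltage rather than counting digitally.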

  18. Hierarchical online appearance-based tracking for 3D head pose, eyebrows, lips, eyelids, and irises

    NARCIS (Netherlands)

    Orozco, Javier; Rudovic, Ognjen; Gonzalez Garcia, Jordi; Pantic, Maja

    In this paper, we propose an On-line Appearance-Based Tracker (OABT) for simultaneous tracking of 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previously proposed tracking approaches, which deal with face and gaze tracking separately, our OABT can

  19. On so-called paradoxical monocular stereoscopy.

    Science.gov (United States)

    Koenderink, J J; van Doorn, A J; Kappers, A M

    1994-01-01

    Human observers are apparently well able to judge properties of 'three-dimensional objects' on the basis of flat pictures such as photographs of physical objects. They obtain this 'pictorial relief' without much conscious effort and with little interference from the (flat) picture surface. Methods for 'magnifying' pictorial relief from single pictures include viewing instructions as well as a variety of monocular and binocular 'viewboxes'. Such devices are reputed to yield highly increased pictorial depth, though no methodologies for the objective verification of such claims exist. A binocular viewbox has been reconstructed and pictorial relief under monocular, 'synoptic', and natural binocular viewing is described. The results corroborate and go beyond early introspective reports and turn out to pose intriguing problems for modern research.

  20. The New Realm of 3-D Vision

    Science.gov (United States)

    2002-01-01

    Dimension Technologies Inc. (DTI) developed a line of 2-D/3-D liquid crystal display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  1. 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach

    Directory of Open Access Journals (Sweden)

    Michiel Vlaminck

    2016-11-01

    In this paper, we propose a novel approach to obtain accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner, as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind, and hence, it does not make any assumption about the scene or about the sensor set-up. The main novelty of this work is that the proposed LiDAR mapping approach deals explicitly with the inhomogeneous density of point clouds produced by LiDAR scanners. To this end, we keep track of a global 3D map of the environment, which is continuously improved and refined by means of a surface reconstruction technique. Moreover, we perform surface analysis on consecutively generated point clouds in order to assure a perfect alignment with the global 3D map. In order to cope with drift, the system incorporates loop closure by determining the pose error and propagating it back in the pose graph. Our algorithm was exhaustively tested on data captured at a conference building, a university campus and an industrial site of a chemical company. Experiments demonstrate that it is capable of generating highly accurate 3D maps in very challenging environments. We can state that the average distance of corresponding point pairs between the ground truth and estimated point cloud approximates one centimeter for an area covering approximately 4000 m². To prove the genericity of the system, it was tested on the well-known KITTI vision benchmark. The results show that our approach competes with state-of-the-art methods without making any additional assumptions.
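The reported accuracy figure (average distance of corresponding point pairs against ground truth) can be computed as a mean nearest-neighbour distance between two clouds. A brute-force sketch; for clouds of realistic size a k-d tree is the practical choice.

```python
import math

def mean_nn_distance(cloud_a, cloud_b):
    """Average distance from each point of cloud_a to its nearest neighbour
    in cloud_b -- the kind of figure reported above (~1 cm vs ground truth).
    O(len(a) * len(b)) brute force, for illustration only."""
    total = 0.0
    for p in cloud_a:
        total += min(math.dist(p, q) for q in cloud_b)
    return total / len(cloud_a)

# estimated cloud sits 1 cm above the ground truth along z
truth     = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
estimated = [(0.0, 0.0, 0.01), (1.0, 0.0, 0.01)]
print(round(mean_nn_distance(truth, estimated), 3))  # → 0.01
```

Note the metric is asymmetric; evaluations often report the average of both directions (as in the Chamfer distance).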

  2. Dichoptic training in adults with amblyopia: Additional stereoacuity gains over monocular training.

    Science.gov (United States)

    Liu, Xiang-Yun; Zhang, Jun-Yun

    2017-08-04

    Dichoptic training is a recent focus of research on perceptual learning in adults with amblyopia, but whether and how dichoptic training is superior to traditional monocular training is unclear. Here we investigated whether dichoptic training could further boost visual acuity and stereoacuity in monocularly well-trained adult amblyopic participants. During dichoptic training the participants used the amblyopic eye to practice a contrast discrimination task, while a band-filtered noise masker was simultaneously presented in the non-amblyopic fellow eye. Dichoptic learning was indexed by the increase of maximal tolerable noise contrast for successful contrast discrimination in the amblyopic eye. The results showed that practice tripled maximal tolerable noise contrast in 13 monocularly well-trained amblyopic participants. Moreover, the training further improved stereoacuity by 27% beyond the 55% gain from previous monocular training, but did not change visual acuity of the amblyopic eyes. Therefore our dichoptic training method may produce extra gains of stereoacuity, but not visual acuity, in adults with amblyopia after monocular training. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Can the Farnsworth D15 Color Vision Test Be Defeated through Practice?

    Science.gov (United States)

    Ng, Jason S; Liem, Sophia C

    2018-05-01

    This study suggests that it is possible for some patients with severe red-green color vision deficiency to do perfectly on the Farnsworth D15 test after practicing it. The Farnsworth D15 is a commonly used test to qualify people for certain occupations. For patients with color vision deficiency, there may be high motivation to try to pass the test through practice to gain entry into a particular occupation. There is no evidence in the literature on whether it is possible for patients to learn to pass the D15 test through practice. Ten subjects with inherited red-green color vision deficiency and 15 color-normal subjects enrolled in the study. All subjects had anomaloscope testing, color vision book tests, and a Farnsworth D15 at an initial visit. For the D15, the number of major crossovers was determined for each subject. Failing the D15 was defined as greater than 1 major crossover. Subjects with color vision deficiency practiced the D15 as long as desired to achieve a perfect score and then returned for a second visit for D15 testing. A paired t test was used to analyze the number of major crossovers at visit 1 versus visit 2. Color-normal subjects did not have any major crossovers. Subjects with color vision deficiency had significantly (P < …) fewer major crossovers at visit 2 than at visit 1. Practice can thus enable some patients with severe red-green color vision deficiency to pass the D15, and this should be considered in certain cases where occupational entry is dependent on D15 testing.
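Crossover scoring on the D15 can be sketched as follows. The "major crossover" threshold used here is illustrative, as published scoring conventions vary; caps are represented by their positions in the correct hue sequence.

```python
def major_crossovers(arrangement, threshold=2):
    """Count 'major' crossovers in a D15 cap arrangement: pairs of adjacent
    caps whose true sequence positions differ by more than `threshold`
    steps. The threshold is an illustrative convention, not the study's."""
    return sum(1 for a, b in zip(arrangement, arrangement[1:])
               if abs(a - b) > threshold)

print(major_crossovers(list(range(1, 16))))  # perfect arrangement → 0

# a diametrical confusion pattern typical of severe red-green deficiency
scrambled = [1, 2, 3, 12, 4, 5, 13, 6, 7, 14, 8, 9, 15, 10, 11]
print(major_crossovers(scrambled))  # → 8, i.e. well over the fail criterion
```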

  4. Decrease in monocular sleep after sleep deprivation in the domestic chicken

    NARCIS (Netherlands)

    Boerema, AS; Riedstra, B; Strijkstra, AM

    2003-01-01

    We investigated the trade-off between sleep need and alertness, by challenging chickens to modify their monocular sleep. We sleep deprived domestic chickens (Gallus domesticus) to increase their sleep need. We found that in response to sleep deprivation the fraction of monocular sleep within sleep

  5. 3-D model-based vehicle tracking.

    Science.gov (United States)

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
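The point-to-line-segment distance that underlies the proposed similarity metric is a standard computation: project the point onto the segment's supporting line, clamp the projection parameter to [0, 1], and measure to the clamped point. A minimal 2-D sketch (not the paper's full metric, which aggregates such distances between image edges and projected model segments):

```python
def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the line segment ab
    (2-D points given as (x, y) tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    vx, vy = bx - ax, by - ay
    seg_len2 = vx*vx + vy*vy
    if seg_len2 == 0.0:                     # degenerate segment: a point
        return ((px-ax)**2 + (py-ay)**2) ** 0.5
    # projection parameter along ab, clamped to the segment
    t = max(0.0, min(1.0, ((px-ax)*vx + (py-ay)*vy) / seg_len2))
    cx, cy = ax + t*vx, ay + t*vy           # closest point on the segment
    return ((px-cx)**2 + (py-cy)**2) ** 0.5

print(point_segment_distance((0, 1), (0, 0), (2, 0)))           # → 1.0
print(round(point_segment_distance((3, 4), (0, 0), (2, 0)), 2)) # → 4.12
```

The second call shows the clamping at work: the point lies beyond endpoint b, so the distance is measured to b rather than to the infinite line.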

  6. Aquilion ONE / ViSION Edition CT scanner realizing 3D dynamic observation with low-dose scanning

    International Nuclear Information System (INIS)

    Kazama, Masahiro; Saito, Yasuo

    2015-01-01

    Computed tomography (CT) scanners have been continuously advancing as essential diagnostic imaging equipment for the diagnosis and treatment of a variety of diseases, including the three major disease classes of cerebrovascular disease, cardiovascular disease, and cancer. Through the development of helical CT scanners and multislice CT scanners, Toshiba Medical Systems Corporation has developed the Aquilion ONE, a CT scanner with a scanning range of up to 160 mm per rotation that can obtain three-dimensional (3D) images of the brain, heart, and other organs in a single rotation. We have now developed the Aquilion ONE / ViSION Edition, a next-generation 320-row multislice CT scanner incorporating the latest technologies that achieves a shorter scanning time and significant reduction in dose compared with conventional products. This product with its low-dose scanning technology will contribute to the practical realization of new diagnosis and treatment modalities employing four-dimensional (4D) data based on 3D dynamic observations through continuous rotations. (author)

  7. Contrast masking in strabismic amblyopia: attenuation, noise, interocular suppression and binocular summation.

    Science.gov (United States)

    Baker, Daniel H; Meese, Tim S; Hess, Robert F

    2008-07-01

    To investigate amblyopic contrast vision at threshold and above we performed pedestal-masking (contrast discrimination) experiments with a group of eight strabismic amblyopes using horizontal sinusoidal gratings (mainly 3c/deg) in monocular, binocular and dichoptic configurations balanced across eye (i.e. five conditions). With some exceptions in some observers, the four main results were as follows. (1) For the monocular and dichoptic conditions, sensitivity was less in the amblyopic eye than in the good eye at all mask contrasts. (2) Binocular and monocular dipper functions superimposed in the good eye. (3) Monocular masking functions had a normal dipper shape in the good eye, but facilitation was diminished in the amblyopic eye. (4) A less consistent result was normal facilitation in dichoptic masking when testing the good eye, but a loss of this when testing the amblyopic eye. This pattern of amblyopic results was replicated in a normal observer by placing a neutral density filter in front of one eye. The two-stage model of binocular contrast gain control [Meese, T.S., Georgeson, M.A. & Baker, D.H. (2006). Binocular contrast vision at and above threshold. Journal of Vision 6, 1224-1243.] was 'lesioned' in several ways to assess the form of the amblyopic deficit. The most successful model involves attenuation of signal and an increase in noise in the amblyopic eye, and intact stages of interocular suppression and binocular summation. This implies a behavioural influence from monocular noise in the amblyopic visual system as well as in normal observers with an ND filter over one eye.
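The two-stage gain-control architecture referenced above (monocular gain control with interocular suppression, followed by binocular summation and a second nonlinearity) can be sketched as below. All parameter values, the bisection-based threshold search, and the way the 'lesions' are applied are illustrative assumptions, not the fitted model of the cited papers.

```python
def two_stage_response(c_left, c_right, m=1.3, s=1.0, w=1.0,
                       p=8.0, q=6.5, z=0.01, atten=1.0):
    """Sketch of a two-stage binocular contrast gain-control model.
    Stage 1: each eye's contrast is normalized by its own contrast plus
    interocular suppression (weight w) from the other eye.
    Stage 2: the monocular signals are summed, then passed through a second
    (binocular) gain-control nonlinearity.
    atten < 1 attenuates the left ('amblyopic') eye's input signal."""
    cl, cr = atten * c_left, c_right
    stage1_l = cl**m / (s + cl + w * cr)
    stage1_r = cr**m / (s + cr + w * cl)
    b = stage1_l + stage1_r          # binocular summation
    return b**p / (z + b**q)

def increment_threshold(pedestal, sigma=0.005, atten=1.0):
    """Monocular contrast increment needed for a criterion response change of
    sigma (a late additive-noise level); raising sigma mimics the 'increase
    in noise' lesion, lowering atten the 'attenuation' lesion."""
    base = two_stage_response(pedestal, 0.0, atten=atten)
    lo, hi = 0.0, 1.0
    for _ in range(60):              # bisection on the monotonic response
        mid = 0.5 * (lo + hi)
        if two_stage_response(pedestal + mid, 0.0, atten=atten) - base >= sigma:
            hi = mid
        else:
            lo = mid
    return hi
```

With these toy parameters, both lesions raise the predicted monocular threshold, which is the qualitative pattern the abstract reports for the amblyopic eye.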

  8. Analysis of macular ganglion cell complex thickness in monocular strabismic amblyopia patients by Fourier-domain OCT

    Directory of Open Access Journals (Sweden)

    Hong-Wei Deng

    2014-11-01

    Full Text Available AIM: To measure macular ganglion cell complex (mGCC) thickness in monocular strabismic amblyopia patients, in order to explore the relationship between the degree of amblyopia and mGCC thickness, and to determine whether the macular ganglion cell structure is abnormal in strabismic amblyopia. METHODS: Using a Fourier-domain optical coherence tomography (FD-OCT) instrument, the iVue® (Optovue Inc, Fremont, CA), mGCC thickness was measured in 26 patients (52 eyes), and its correlation with best-corrected visual acuity was analyzed. RESULTS: Mean mGCC thickness was evaluated in three regions: central, inner circle (3 mm) and outer circle (6 mm). The mean mGCC thicknesses in the central, inner and outer circles were 50.74±21.51 μm, 101.4±8.51 μm and 114.2±9.455 μm in the strabismic amblyopic eyes (SAE), and 43.79±11.92 μm, 92.47±25.01 μm and 113.3±12.88 μm in the contralateral sound eyes (CSE), respectively. The differences between the eyes were not statistically significant (P>0.05). However, best-corrected visual acuity correlated well with mGCC thickness, and the correlation was stronger for the lower part than for the upper part. CONCLUSION: There is a relationship between amblyopic visual acuity and mGCC thickness, although the difference in mGCC thickness between the SAE and CSE was not statistically significant. Measuring central mGCC thickness in the clinic may help in assessing the degree of amblyopia.

  9. Assessment of Laparoscopic Skills Performance: 2D Versus 3D Vision and Classic Instrument Versus New Hand-Held Robotic Device for Laparoscopy.

    Science.gov (United States)

    Leite, Mariana; Carvalho, Ana F; Costa, Patrício; Pereira, Ricardo; Moreira, Antonio; Rodrigues, Nuno; Laureano, Sara; Correia-Pinto, Jorge; Vilaça, João L; Leão, Pedro

    2016-02-01

    Laparoscopic surgery has undeniable advantages, such as reduced postoperative pain, smaller incisions, and faster recovery. However, to improve surgeons' performance, ergonomic adaptations of the laparoscopic instruments and introduction of robotic technology are needed. The aim of this study was to ascertain the influence of a new hand-held robotic device for laparoscopy (HHRDL) and 3D vision on the laparoscopic skills performance of 2 different groups, naïve and expert. Each participant performed 3 laparoscopic tasks (Peg transfer, Wire chaser, Knot) in 4 different ways. With random sequencing we assigned the execution order of the tasks based on the first type of visualization and laparoscopic instrument. Time to complete each laparoscopic task was recorded and analyzed with one-way analysis of variance. Eleven experts and 15 naïve participants were included. Three-dimensional video helped the naïve group perform better in Peg transfer, Wire chaser 2 hands, and Knot; the new device improved the execution of all laparoscopic tasks (P < .05). For the expert group, the 3D video system was beneficial in Peg transfer and Wire chaser 1 hand, and the robotic device in Peg transfer, Wire chaser 1 hand, and Wire chaser 2 hands (P < .05). The HHRDL helped the execution of difficult laparoscopic tasks, such as Knot, in the naïve group. Three-dimensional vision made laparoscopic performance easier for participants without laparoscopic experience, unlike those with experience in laparoscopic procedures. © The Author(s) 2015.

  10. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    Science.gov (United States)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  11. Bilateral symmetry in vision and influence of ocular surgical procedures on binocular vision: A topical review

    Directory of Open Access Journals (Sweden)

    Samuel Arba Mosquera

    2016-10-01

    Full Text Available We analyze the role of bilateral symmetry in enhancing binocular visual ability in human eyes, and further explore how efficiently bilateral symmetry is preserved in different ocular surgical procedures. The inclusion criterion for this review was strict relevance to the clinical questions under research. Enantiomorphism has been reported in lower-order aberrations, higher-order aberrations and cone directionality. When contrast differs in the two eyes, binocular acuity is better than the monocular acuity of the eye that receives the higher contrast. Anisometropia occurs uncommonly in large populations. Anisometropia seen in infancy and childhood is transitory and of little consequence for visual acuity. Binocular summation of contrast signals declines with age, independent of inter-ocular differences. The symmetric associations between the right and left eye could be explained by the symmetry in pupil offset and visual axis, which is always nasal in both eyes. Binocular summation mitigates poor visual performance under low-luminance conditions, and strong inter-ocular disparity detrimentally affects binocular summation. Considerable symmetry of response exists in fellow eyes of patients undergoing myopic PRK and LASIK; however, the methods used to determine whether or not symmetry is maintained consist of comparing individual terms in a variety of ad hoc ways both before and after the refractive surgery, ignoring the fact that retinal image quality for any individual is based on the sum of all terms. The analysis of bilateral symmetry should be related to the patient's binocular vision status. The role of aberrations in monocular and binocular vision needs further investigation.

  12. Monocular channels have a functional role in endogenous orienting.

    Science.gov (United States)

    Saban, William; Sekely, Liora; Klein, Raymond M; Gabay, Shai

    2018-03-01

    The literature has long emphasized the role of higher cortical structures in endogenous orienting. Based on an evolutionary explanation and previous data, we explored the possibility that lower monocular channels may also have a functional role in endogenous orienting of attention. A sensitive behavioral manipulation was used to probe the contribution of monocularly segregated regions in a simple cue-target detection task. A central spatially informative cue, and its ensuing target, were presented to the same or different eyes at varying cue-target intervals. Results indicated that the onset of endogenous orienting was apparent earlier when the cue and target were presented to the same eye. The data provide converging evidence for the notion that endogenous facilitation is modulated by monocular portions of the visual stream. This, in turn, suggests that higher cortical mechanisms are not exclusively responsible for endogenous orienting, and that a dynamic interaction between higher and lower neural levels might be involved. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Robot Navigation Control Based on Monocular Images: An Image Processing Algorithm for Obstacle Avoidance Decisions

    Directory of Open Access Journals (Sweden)

    William Benn

    2012-01-01

    Full Text Available This paper covers the use of monocular vision to control autonomous navigation for a robot in a dynamically changing environment. The solution focused on using colour segmentation against a selected floor plane to distinctly separate obstacles from traversable space; this is then supplemented with Canny edge detection to separate boundaries coloured similarly to the floor plane. The resulting binary map (where white identifies an obstacle-free area and black identifies an obstacle) could then be processed by fuzzy logic or neural networks to control the robot's next movements. Findings show that the algorithm performed strongly on solid-coloured carpets and on wooden and concrete floors, but had difficulty separating colours in multicoloured floor types such as patterned carpets.
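The colour-segmentation-plus-edge step described above can be sketched with plain numpy. This is a minimal stand-in, not the paper's implementation: the thresholds are invented, and a simple gradient magnitude substitutes for the Canny detector the paper uses.

```python
import numpy as np

def obstacle_map(rgb, floor_color, color_tol=40.0, edge_thresh=60.0):
    """Binary traversability map: True (white) = obstacle-free floor,
    False (black) = obstacle. Pixels whose colour is far from the reference
    floor colour, or that lie on a strong intensity edge, become obstacles."""
    img = rgb.astype(float)
    # Colour segmentation against the selected floor colour.
    dist = np.linalg.norm(img - np.asarray(floor_color, float), axis=-1)
    floor_mask = dist < color_tol
    # Central-difference gradient magnitude on the grey image as an edge cue
    # (the paper uses Canny edge detection here).
    grey = img.mean(axis=-1)
    gx = np.zeros_like(grey)
    gy = np.zeros_like(grey)
    gx[:, 1:-1] = grey[:, 2:] - grey[:, :-2]
    gy[1:-1, :] = grey[2:, :] - grey[:-2, :]
    edges = np.hypot(gx, gy) > edge_thresh
    return floor_mask & ~edges
```

The resulting boolean map is the kind of input a fuzzy-logic or neural-network controller could consume for the next-movement decision.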

  14. Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.

    Science.gov (United States)

    Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra

    2013-11-01

    This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task, while his avatar copies his gestures as captured by the Kinect 3D vision system. The information on the patient's movements, together with the signals obtained from the ergonometric measurement devices, is also used to supervise and evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment that uses the same base elements, four game routines (Touch the balls 1 and 2, Simon says, and Follow the point) are used for rehabilitation. These environments are designed to create a positive influence on the rehabilitation process, reduce costs, and engage the patient. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  15. Perception of 3-D location based on vision, touch, and extended touch.

    Science.gov (United States)

    Giudice, Nicholas A; Klatzky, Roberta L; Bennett, Christopher R; Loomis, Jack M

    2013-01-01

    Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.

  16. Fractographic classification in metallic materials by using 3D processing and computer vision techniques

    Directory of Open Access Journals (Sweden)

    Maria Ximena Bastidas-Rodríguez

    2016-09-01

    Full Text Available Failure analysis aims at collecting information about how and why a failure is produced. The first step in this process is a visual inspection of the flaw surface, which reveals the features, marks, and texture that characterize each type of fracture. This is generally carried out by personnel with little experience who often lack the knowledge to do it. This paper proposes a classification method for three kinds of fractures in crystalline materials: brittle, fatigue, and ductile. The method uses 3D vision and is expected to support failure analysis. The features used in this work were (i) Haralick's features and (ii) the fractal dimension. These features were extracted from 3D images obtained with a Zeiss LSM 700 confocal laser scanning microscope. For the classification, we evaluated two classifiers: Artificial Neural Networks and Support Vector Machines. The performance evaluation was made by extracting four marginal relations from the confusion matrix (accuracy, sensitivity, specificity, and precision) plus three evaluation methods: Receiver Operating Characteristic space, the Individual Classification Success Index, and Jaccard's coefficient. Although the classification percentage obtained by an expert is better than the one obtained with the algorithm, the algorithm achieves a classification percentage near or exceeding 60% accuracy for the analyzed failure modes. The results presented here provide a good approach to address future research on texture analysis using 3D data.
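Haralick's features are statistics of a grey-level co-occurrence matrix (GLCM). The sketch below computes a GLCM for one pixel offset and two representative Haralick features; it is a small illustrative implementation (the paper does not specify which offsets or which of the 14 features it used, and libraries such as scikit-image provide production versions).

```python
import numpy as np

def glcm(q, levels, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for a quantized 2-D image q
    (integer values in [0, levels)) and one pixel offset (dx, dy)."""
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_contrast(p):
    """Haralick contrast: expected squared grey-level difference of pairs."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def haralick_homogeneity(p):
    """Haralick homogeneity (inverse difference): 1 for a uniform image."""
    i, j = np.indices(p.shape)
    return float((p / (1.0 + np.abs(i - j))).sum())
```

Feature vectors built from such statistics (plus the fractal dimension) would then be fed to the ANN or SVM classifiers the abstract mentions.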

  17. The prevalence of vision loss due to ocular trauma in the Australian National Eye Health Survey.

    Science.gov (United States)

    Keel, Stuart; Xie, Jing; Foreman, Joshua; Taylor, Hugh R; Dirani, Mohamed

    2017-11-01

    To determine the prevalence of vision loss due to ocular trauma in Australia. The National Eye Health Survey (NEHS) is a population-based cross-sectional study that examined 3098 non-Indigenous Australians (aged 50-98 years) and 1738 Indigenous Australians (aged 40-92 years) living in 30 randomly selected sites, stratified by remoteness. An eye was considered to have vision loss due to trauma if the best-corrected visual acuity was worse than 6/12 and the main cause was attributed to ocular trauma. This determination was made by two independent ophthalmologists and any disagreements were adjudicated by a third senior ophthalmologist. The sampling weight adjusted prevalence of vision loss due to ocular trauma in non-Indigenous Australians aged 50 years and older and Indigenous Australians aged 40 years and over was 0.24% (95%CI: 0.10, 0.52) and 0.79% (95%CI: 0.56, 1.13), respectively. Trauma was attributed as an underlying cause of bilateral vision loss in one Indigenous participant, with all other cases being monocular. Males displayed a higher prevalence of vision loss from ocular trauma than females in both the non-Indigenous (0.47% vs. 1.25%, p=0.03) and Indigenous populations (0.12% vs. 0.38%, p=0.02). After multivariate adjustments, residing in Very Remote geographical areas was associated with higher odds of vision loss from ocular trauma. We estimate that 2.4 per 1000 non-Indigenous and 7.9 per 1000 Indigenous Australian adults have monocular vision loss due to a previous severe ocular trauma. Our findings indicate that males, Indigenous Australians and those residing in Very Remote communities may benefit from targeted health promotion to improve awareness of trauma prevention strategies. Copyright © 2017 Elsevier Ltd. All rights reserved.
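The prevalence figures above are reported with 95% confidence intervals. A plain (unweighted) Wilson score interval for a proportion can be sketched as follows; note that the survey itself used sampling-weight-adjusted estimates, so this simple version would not reproduce its exact intervals.

```python
import math

def wilson_interval(k, n, z=1.959964):
    """Wilson score 95% CI for a binomial proportion of k successes in n
    trials (unweighted; survey-weighted data needs a different estimator)."""
    p = k / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half
```

Unlike the naive normal-approximation interval, the Wilson interval stays within [0, 1] and behaves sensibly for the small case counts typical of rare outcomes like trauma-related vision loss.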

  18. A 3D vision system for the measurement of the rate of spread and the height of fire fronts

    International Nuclear Information System (INIS)

    Rossi, L; Molinier, T; Tison, Y; Pieri, A; Akhloufi, M

    2010-01-01

    This paper presents a three-dimensional (3D) vision-based instrumentation system for the measurement of the rate of spread and height of complex fire fronts. The proposed 3D imaging system is simple, does not require calibration, is easily deployable in indoor and outdoor environments and can handle complex fire fronts. New approaches for measuring the position, the rate of spread and the height of a fire front during its propagation are introduced. Experiments were conducted in indoor and outdoor conditions with fires of different scales. Linear and curvilinear fire front spreading were studied. The obtained results are promising and demonstrate the system's performance in operational and complex fire scenarios.

  19. Action Control: Independent Effects of Memory and Monocular Viewing on Reaching Accuracy

    Science.gov (United States)

    Westwood, D.A.; Robertson, C.; Heath, M.

    2005-01-01

    Evidence suggests that perceptual networks in the ventral visual pathway are necessary for action control when targets are viewed with only one eye, or when the target must be stored in memory. We tested whether memory-linked (i.e., open-loop versus memory-guided actions) and monocular-linked effects (i.e., binocular versus monocular actions) on…

  20. Brightness, hue, and saturation in photopic vision: a result of luminance and wavelength in the cellular phase-grating optical 3D chip of the inverted retina

    Science.gov (United States)

    Lauinger, Norbert

    1994-10-01

    In photopic vision, two physical variables (luminance and wavelength) are transformed into three psychological variables (brightness, hue, and saturation). Following on from 3D grating optical explanations of aperture effects (Stiles-Crawford effects SCE I and II), all three variables can be explained via a single 3D chip effect. The 3D grating optical calculations are carried out using the classical von Laue equation and demonstrated using the example of two experimentally confirmed observations in human vision: saturation effects for monochromatic test lights between 485 and 510 nm in the SCE II and the fact that many test lights reverse their hue shift in the SCE II when changing from moderate to high luminances compared with that on changing from low to medium luminances. At the same time, information is obtained on the transition from the trichromatic color system in the retina to the opponent color system.

  1. The effects of left and right monocular viewing on hemispheric activation.

    Science.gov (United States)

    Wang, Chao; Burtis, D Brandon; Ding, Mingzhou; Mo, Jue; Williamson, John B; Heilman, Kenneth M

    2018-03-01

    Prior research has revealed that whereas activation of the left hemisphere primarily increases the activity of the parasympathetic division of the autonomic nervous system, right-hemisphere activation increases the activity of the sympathetic division. In addition, each hemisphere primarily receives retinocollicular projections from the contralateral eye. A prior study reported that pupillary dilation was greater with left- than with right-eye monocular viewing. The goal of this study was to test the alternative hypotheses that this asymmetric pupil dilation with left-eye viewing was induced by activation of right-hemisphere-mediated sympathetic activity, versus a reduction of left-hemisphere-mediated parasympathetic activity. Thus, this study was designed to learn whether there are changes in hemispheric activation, as measured by alteration of spontaneous alpha activity, during right versus left monocular viewing. High-density electroencephalography (EEG) was recorded from healthy participants viewing a crosshair with their right, left, or both eyes. There was significantly less alpha power over the right hemisphere's parietal-occipital area with left-eye and binocular viewing than with right-eye monocular viewing. The greater relative reduction of right-hemisphere alpha activity during left than during right monocular viewing provides further evidence that left-eye viewing induces a greater increase in right-hemisphere activation than does right-eye viewing.
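The alpha-power measure used above can be sketched as the relative power in the 8-12 Hz band of a periodogram. This is a minimal stand-in (the study's actual spectral pipeline for high-density EEG is not described in the abstract); the band limits and normalization are common conventions, not taken from the paper.

```python
import numpy as np

def alpha_power(signal, fs, band=(8.0, 12.0)):
    """Fraction of total (demeaned) signal power falling in the alpha band,
    computed from a simple FFT periodogram."""
    sig = np.asarray(signal, float) - np.mean(signal)
    psd = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, 1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[in_band].sum() / psd.sum())
```

Comparing this quantity over right-hemisphere parietal-occipital channels across the three viewing conditions corresponds to the contrast the study reports.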

  2. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    Science.gov (United States)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3DTV, e.g., live editing and real-time alignment of visual information into 3D footage.

  3. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, J.J.; Albertazzi, L.; Doorn, A.J. van; Ee, R. van; Grind, W.A. van de; Kappers, A.M.L.; Lappin, J.S.; Norman, J.F.; Oomes, A.H.J.; Pas, S.F. te; Phillips, F.; Pont, S.C.; Richards, W.A.; Todd, J.T.; Verstraten, F.A.J.; Vries, S.C. de

    2010-01-01

    The issue of the existence of planes—understood as the carriers of a nexus of straight lines—in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  4. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, Jan J.; Albertazzi, Liliana; van Doorn, Andrea J.; van Ee, Raymond; van de Grind, Wim A.; Kappers, Astrid M L; Lappin, Joe S.; Farley Norman, J.; (Stijn) Oomes, A. H J; te Pas, Susan P.; Phillips, Flip; Pont, Sylvia C.; Richards, Whitman A.; Todd, James T.; Verstraten, Frans A J; de Vries, Sjoerd

    The issue of the existence of planes-understood as the carriers of a nexus of straight lines-in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  5. A laminar cortical model of stereopsis and 3D surface perception: closure and da Vinci stereopsis.

    Science.gov (United States)

    Cao, Yongqiang; Grossberg, Stephen

    2005-01-01

    A laminar cortical model of stereopsis and 3D surface perception is developed and simulated. The model describes how monocular and binocular oriented filtering interact with later stages of 3D boundary formation and surface filling-in in the LGN and cortical areas V1, V2, and V4. It proposes how interactions between layers 4, 3B, and 2/3 in V1 and V2 contribute to stereopsis, and how binocular and monocular information combine to form 3D boundary and surface representations. The model includes two main new developments: (1) It clarifies how surface-to-boundary feedback from V2 thin stripes to pale stripes helps to explain data about stereopsis. This feedback has previously been used to explain data about 3D figure-ground perception. (2) It proposes that the binocular false match problem is subsumed under the Gestalt grouping problem. In particular, the disparity filter, which helps to solve the correspondence problem by eliminating false matches, is realized using inhibitory interneurons as part of the perceptual grouping process by horizontal connections in layer 2/3 of cortical area V2. The enhanced model explains all the psychophysical data previously simulated by Grossberg and Howe (2003), such as contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, Panum's limiting case, the Venetian blind illusion, stereopsis with polarity-reversed stereograms, and da Vinci stereopsis. It also explains psychophysical data about perceptual closure and variations of da Vinci stereopsis that previous models cannot yet explain.

  6. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot.

    Science.gov (United States)

    Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin

    2015-04-22

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called "Octopus", which can traverse rough terrains with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust.
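The ground-plane estimation step mentioned above is commonly done as a least-squares plane fit to the 3D points the vision system sees on the floor. The SVD-based sketch below is a standard technique offered as an illustration; the paper's actual estimator and its handling of outliers are not specified in the abstract.

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares plane through an (N, 3) array of 3-D points via SVD.
    Returns (unit normal, centroid); the plane passes through the centroid."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    # The right-singular vector with the smallest singular value is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    if normal[2] < 0:          # orient the normal upward for consistency
        normal = -normal
    return normal, centroid
```

Given this plane in the camera frame and the robot's known stance on the same floor, the camera-to-robot transform can then be recovered as an optimization over the remaining pose parameters, as the abstract outlines.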

  7. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module that is capable of producing 3D coordinate information for the whole global observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering-target tracking, automatic mapping of environments and attitude estimation are some of the applications that will benefit from PSSV.

  8. A multimodal 3D framework for fire characteristics estimation

    Science.gov (United States)

    Toulouse, T.; Rossi, L.; Akhloufi, M. A.; Pieri, A.; Maldague, X.

    2018-02-01

    In the last decade we have witnessed an increasing interest in using computer vision and image processing in forest fire research. Image processing techniques have been successfully used in different fire analysis areas such as early detection, monitoring, modeling and fire front characteristics estimation. While the majority of the work deals with the use of 2D visible-spectrum images, recent work has introduced the use of 3D vision in this field. This work proposes a new multimodal vision framework permitting the extraction of the three-dimensional geometrical characteristics of fires captured by multiple 3D vision systems. The 3D system is a multispectral stereo system operating in both the visible and near-infrared (NIR) spectral bands. The framework supports the use of multiple stereo pairs positioned so as to capture complementary views of the fire front during its propagation. Multimodal registration is conducted using the captured views in order to build a complete 3D model of the fire front. The registration process is achieved using multisensory fusion based on visual data (2D and NIR images), GPS positions and IMU inertial data. Experiments were conducted outdoors in order to show the performance of the proposed framework. The obtained results are promising and show the potential of using the proposed framework in operational scenarios for wildland fire research and as a decision-management system in firefighting.

  9. Preliminary Results for a Monocular Marker-Free Gait Measurement System

    Directory of Open Access Journals (Sweden)

    Jane Courtney

    2006-01-01

    Full Text Available This paper presents results from a novel monocular marker-free gait measurement system. The system was designed for physical and occupational therapists to monitor the progress of patients through therapy. It is based on a novel human motion capture method derived from model-based tracking. Testing is performed on two monocular, sagittal-view sample gait videos: one with both the environment and the subject's appearance and movement restricted, and one in a natural environment with unrestricted clothing and motion. Results of the modelling, tracking and analysis stages are presented along with standard gait graphs and parameters.

  10. Development of an auto-welding system for CRD nozzle repair welds using a 3D laser vision sensor

    International Nuclear Information System (INIS)

    Park, K.; Kim, Y.; Byeon, J.; Sung, K.; Yeom, C.; Rhee, S.

    2007-01-01

    A control rod device (CRD) nozzle attaches to the hemispherical surface of a reactor head with J-groove welding. Primary water stress corrosion cracking (PWSCC) causes degradation in these welds, which requires that these defect areas be repaired. To perform this repair welding automatically on a complicated weld groove shape, an auto-welding system was developed incorporating a laser vision sensor that measures the 3-dimensional (3D) shape of the groove and a weld-path creation program that calculates the weld-path parameters. Welding trials with a J-groove workpiece were performed to establish a basis for developing this auto-welding system. Because the reactor head is placed on a lay-down support, access to the outermost region of the CRD nozzle is restricted. Due to this tight space, several parameters of the design, such as the size, weight and movement of the auto-welding system, had to be carefully considered. The cross section of the J-groove weld is basically an oval shape where the included angle of the J-groove ranges from 0 to 57 degrees. To measure the complex shape, we used double lasers coupled to a single charge coupled device (CCD) camera. We then developed a program to generate the weld-path parameters using the measured 3D shape as a basis. The program has the ability to determine the first and final welding positions and to calculate all weld-path parameters. An optimized image-processing algorithm was applied to resolve noise interference and diffused reflection of the joint surfaces. The auto-welding system is composed of a 4-axis manipulator, a gas tungsten arc welding (GTAW) power supply, an optimally designed and manufactured GTAW torch and a 3D laser vision sensor. Through welding trials with 0 and 38-degree included-angle workpieces with both J-groove and U-groove welds, the performance of this auto-welding system was qualified for field application.

  11. Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision

    Science.gov (United States)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-01-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
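    The vision system above detects navigation targets from a planar homography. As a purely illustrative sketch (not the authors' implementation, which decomposes the homography into motion parameters), a homography can be estimated from four point correspondences with the Direct Linear Transform, here solved with stdlib Gaussian elimination:

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting (stdlib only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def estimate_homography(src, dst):
    # Direct Linear Transform: 4 correspondences give 8 equations for the
    # 8 unknowns of H (h33 fixed to 1).
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, p):
    # Apply the projective map and dehomogenize.
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

    In practice a robust estimator over many tracked features would be used; this sketch only shows the minimal four-point case.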

  12. 3D geometric phase analysis and its application in 3D microscopic morphology measurement

    Science.gov (United States)

    Zhu, Ronghua; Shi, Wenxiong; Cao, Quankun; Liu, Zhanwei; Guo, Baoqiao; Xie, Huimin

    2018-04-01

    Although three-dimensional (3D) morphology measurement has been widely applied on the macro-scale, there is still a lack of 3D measurement technology on the microscopic scale. In this paper, a microscopic 3D measurement technique based on the 3D-geometric phase analysis (GPA) method is proposed. In this method, with machine vision and phase matching, the traditional GPA method is extended to three dimensions. Using this method, 3D deformation measurement on the micro-scale can be realized using a light microscope. Simulation experiments were conducted in this study, and the results demonstrate that the proposed method has a good anti-noise ability. In addition, the 3D morphology of the necking zone in a tensile specimen was measured, and the results demonstrate that this method is feasible.
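    Geometric phase analysis recovers displacement from the local phase of a quasi-periodic pattern. A minimal 1-D sketch of the idea (illustrative only; the paper's 3D-GPA additionally uses machine vision and phase matching): extract the phase with a DFT-based analytic signal, remove the carrier, and convert phase to displacement.

```python
import cmath
import math

def analytic_signal(x):
    # Naive O(n^2) DFT -> zero negative frequencies -> inverse DFT
    # (discrete Hilbert transform).
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    for k in range(n):
        if k == 0 or (n % 2 == 0 and k == n // 2):
            continue
        X[k] = 2 * X[k] if k < n / 2 else 0
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def gpa_displacement(signal, carrier_freq):
    # Phase of the analytic signal minus the carrier phase is -2*pi*f*u(x),
    # so the local displacement is u(x) = -phase / (2*pi*f).
    z = analytic_signal(signal)
    disp, prev = [], 0.0
    for t, zt in enumerate(z):
        phi = cmath.phase(zt) - 2 * math.pi * carrier_freq * t
        while phi - prev > math.pi:   # simple 1-D phase unwrapping
            phi -= 2 * math.pi
        while phi - prev < -math.pi:
            phi += 2 * math.pi
        prev = phi
        disp.append(-phi / (2 * math.pi * carrier_freq))
    return disp
```

    For a fringe shifted by a constant amount, the recovered displacement equals the shift away from the signal edges.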

  13. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    Science.gov (United States)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  14. Multimodal Registration and Fusion for 3D Thermal Imaging

    Directory of Open Access Journals (Sweden)

    Moulay A. Akhloufi

    2015-01-01

    Full Text Available 3D vision is an area of computer vision that has attracted a lot of research interest and has been widely studied. In recent years we have witnessed an increasing interest from the industrial community. This interest is driven by the recent advances in 3D technologies, which enable high precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced inspections and metrology analysis of manufactured parts. However, we are not able to detect subsurface defects. This kind of detection is achieved by other techniques, like infrared thermography. In this work, we present a new registration framework for 3D and thermal infrared multimodal fusion. The resulting fused data can be used for advanced 3D inspection in Nondestructive Testing and Evaluation (NDT&E) applications. The fusion permits visible-surface and subsurface inspections to be conducted simultaneously in the same process. Experimental tests were conducted with different materials. The obtained results are promising and show how these new techniques can be used efficiently in a combined NDT&E-Metrology analysis of manufactured parts, in areas such as aerospace and automotive.
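    Registering two modalities ultimately reduces to fitting a transform between corresponding points. As an illustrative sketch under simplifying assumptions (2D correspondences, similarity transform; the paper's actual framework is not shown here), a least-squares alignment has a closed form when points are written as complex numbers, w ≈ a·z + b:

```python
def fit_similarity(src, dst):
    # Least-squares 2D similarity transform (rotation + scale + translation).
    # src, dst: lists of complex numbers; returns (a, b) with w ~= a*z + b.
    n = len(src)
    mz = sum(src) / n
    mw = sum(dst) / n
    num = sum((w - mw) * (z - mz).conjugate() for z, w in zip(src, dst))
    den = sum(abs(z - mz) ** 2 for z in src)
    a = num / den          # complex a encodes rotation angle and scale
    b = mw - a * mz        # translation
    return a, b
```

    With noise-free correspondences the transform is recovered exactly; with noisy multimodal matches it is the least-squares fit.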

  15. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... 3-D effect can confuse or overload the brain, causing some people discomfort even if they have normal vision. Taking a break from viewing usually relieves the discomfort. More on computer use and your eyes . Children and 3-D Technology Following the lead of Nintendo, several 3-D ...

  16. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... function in children, nor are there persuasive, conclusive theories on how 3-D digital products could cause damage in children with healthy eyes. The development of normal 3-D vision in children is ...

  17. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot

    Directory of Open Access Journals (Sweden)

    Xun Chai

    2015-04-01

    Full Text Available Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called “Octopus”, which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that the method is accurate and robust.
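    The ground-plane estimation step can be illustrated with a minimal least-squares plane fit (a sketch only; the paper formulates a fuller optimization problem). Fitting z = a·x + b·y + c to 3D points via the normal equations, solved here by Cramer's rule:

```python
def fit_ground_plane(points):
    # Least-squares plane z = a*x + b*y + c through 3D points.
    Sxx = Sxy = Syy = Sx = Sy = Sxz = Syz = Sz = 0.0
    n = len(points)
    for x, y, z in points:
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y
        Sxz += x * z; Syz += y * z; Sz += z
    A = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, float(n)]]
    r = [Sxz, Syz, Sz]

    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

    d = det3(A)
    coeffs = []
    for i in range(3):           # Cramer's rule: replace column i with r
        M = [row[:] for row in A]
        for j in range(3):
            M[j][i] = r[j]
        coeffs.append(det3(M) / d)
    return tuple(coeffs)         # (a, b, c)
```

    Sampling foot-contact (or terrain) points while the robot walks and fitting this plane gives the ground reference against which the camera pose can be identified.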

  18. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Directory of Open Access Journals (Sweden)

    Shanis Barnard

    Full Text Available Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is

  19. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    Science.gov (United States)

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  20. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Science.gov (United States)

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non
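    The unsupervised clustering of recurring movement patterns described in these records can be illustrated with a toy k-means over motion descriptors (a generic sketch with naive initialization; the software's actual structured machine learning framework is not shown):

```python
def kmeans(points, k, iters=20):
    # Plain k-means with deterministic (first-k) initialization: groups
    # motion-descriptor vectors into k recurring patterns.
    centers = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # Recompute each center as the mean of its group (keep old if empty).
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups
```

    Each descriptor here could stand in for, e.g., a short trajectory summary inside the kennel; frequently observed clusters would then be labelled at convenience.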

  1. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... techniques used to create the 3-D effect can confuse or overload the brain, causing some people ... images. That does not mean that vision disorders can be caused by 3-D digital products. However, ...

  2. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... function in children, nor are there persuasive, conclusive theories on how 3-D digital products could cause ... or other conditions that persistently inhibit focusing, depth perception or normal 3-D vision, would have difficulty ...

  3. Landing performance by low-time private pilots after the sudden loss of binocular vision - Cyclops II

    Science.gov (United States)

    Lewis, C. E., Jr.; Swaroop, R.; Mcmurty, T. C.; Blakeley, W. R.; Masters, R. L.

    1973-01-01

    Study of low-time general aviation pilots who, in a series of spot landings, were suddenly deprived of binocular vision by patching either eye on the downwind leg of a standard, closed traffic pattern. Data collected during these landings were compared with control data from landings flown with normal vision during the same flight. The sequence of patching and the mix of control and monocular landings were randomized to minimize the effect of learning. No decrease in performance was observed during landings with vision restricted to one eye; in fact, performance improved. This observation is reported at a high level of confidence (p less than 0.001). These findings confirm the previous work of Lewis and Krier and have important implications with regard to aeromedical certification standards.

  4. Estimating 3D Object Parameters from 2D Grey-Level Images

    NARCIS (Netherlands)

    Houkes, Z.

    2000-01-01

    This thesis describes a general framework for parameter estimation, which is suitable for computer vision applications. The approach described combines 3D modelling, animation and estimation tools to determine parameters of objects in a scene from 2D grey-level images. The animation tool predicts

  5. Assessment of Color Vision Among School Children: A Comparative Study Between The Ishihara Test and The Farnsworth D-15 Test

    Directory of Open Access Journals (Sweden)

    Rajesh Kishor Shrestha

    2016-10-01

    Full Text Available Introduction: Color vision is one of the important attributes of visual perception. The study was conducted at different schools of Kathmandu to compare the findings of the Ishihara Pseudoisochromatic test and the Farnsworth D-15 test.  Method: A cross-sectional study was conducted among 2120 students of four schools of Kathmandu. Assessment included visual acuity measurement, slit lamp examination of the anterior segment and fundus examination with direct ophthalmoscopy. Each student was assessed with the Ishihara pseudoisochromatic test and the Farnsworth D-15 test. The Chi-square test was performed to analyse color vision defect detected by the Ishihara test and the Farnsworth D-15 test. Results: A total of 2120 students comprising 1114 males (52.5%) and 1006 females (47.5%) were recruited in the study, with a mean age of 12.2 years (SD 2.3 years). The prevalence of color vision defect in males as indicated by the Ishihara test was 2.6% and as indicated by the D-15 test was 2.15%.  Conclusion: For school color vision screening, the Ishihara color test and the Farnsworth D-15 test have equal capacity to detect congenital color vision defect and they complement each other.  Keywords: color vision; children; defect; Farnsworth D-15; Ishihara.
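    The Chi-square comparison used in this study can be sketched for a 2×2 contingency table (the counts below are hypothetical, not the study's data):

```python
def chi_square_2x2(a, b, c, d):
    # Pearson chi-square statistic for a 2x2 contingency table [[a, b], [c, d]],
    # e.g. defect/no-defect counts under two screening tests.
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

    The statistic is compared against the chi-square distribution with 1 degree of freedom (critical value 3.84 at p = 0.05).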

  6. Color vision tests comparison: Farnsworth D-15 versus Lanthony D-15

    Science.gov (United States)

    Szmigiel, Marta; Geniusz, Malwina; Geniusz, Maciej K.

    2017-09-01

    Disorder of color vision in humans is the inability to perceive differences between some or all of the colors that are normally perceived by others. Color blindness is usually a birth defect and is genetically determined. For this reason it is much more common in men than in women. This paper presents the results of the Farnsworth D-15 and Lanthony D-15 tests on a group of volunteers, both adults and children. The study was conducted to compare the results of both tests.

  7. Visual impairment secondary to congenital glaucoma in children: visual responses, optical correction and use of low vision AIDS

    Directory of Open Access Journals (Sweden)

    Maria Aparecida Onuki Haddad

    2009-01-01

    Full Text Available INTRODUCTION: Congenital glaucoma is frequently associated with visual impairment due to optic nerve damage, corneal opacities, cataracts and amblyopia. Poor vision in childhood is related to global developmental problems, and referral to vision habilitation/rehabilitation services should be made without delay to promote efficient management of the impaired vision. OBJECTIVE: To analyze data concerning visual response, the use of optical correction and prescribed low vision aids in a population of children with congenital glaucoma. METHOD: The authors analyzed data from 100 children with congenital glaucoma to assess best corrected visual acuity, prescribed optical correction and low vision aids. RESULTS: Fifty-five percent of the sample were male, 43% female. The mean age was 6.3 years. Two percent presented normal visual acuity levels, 29% mild visual impairment, 28% moderate visual impairment, 15% severe visual impairment, 11% profound visual impairment, and 15% near blindness. Sixty-eight percent received optical correction for refractive errors. Optical low vision aids were adopted for distance vision in 34% of the patients and for near vision in 6%. A manual monocular telescopic system with 2.8 × magnification was the most frequently prescribed low vision aid for distance, and for near vision a +38 diopter illuminated stand magnifier was most frequently prescribed. DISCUSSION AND CONCLUSION: Careful low vision assessment and the appropriate prescription of optical corrections and low vision aids are mandatory in children with congenital glaucoma, since this will assist their global development, improving efficiency in daily life activities and promoting social and educational inclusion.

  8. A hand-held 3D laser scanning with global positioning system of subvoxel precision

    International Nuclear Information System (INIS)

    Arias, Nestor; Meneses, Nestor; Meneses, Jaime; Gharbi, Tijani

    2011-01-01

    In this paper we propose a hand-held 3D laser scanner composed of an optical head device to extract 3D local surface information and a stereo vision system with subvoxel precision to measure the position and orientation of the 3D optical head. The optical head is manually scanned over the surface of the object by the operator. The orientation and position of the 3D optical head is determined by a phase-sensitive method using a 2D regular intensity pattern. This phase reference pattern is rigidly fixed to the optical head and allows its 3D localization with subvoxel precision in the observation field of the stereo vision system. The 3D resolution achieved by the stereo vision system is about 33 microns at 1.8 m with an observation field of 60 cm × 60 cm.
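    The quoted stereo resolution follows from standard pinhole-stereo geometry. As a generic sketch (parameters below are arbitrary examples, not the paper's calibration), depth comes from disparity as Z = f·B/d, and a first-order error propagation gives the depth resolution:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    # Pinhole stereo: Z = f * B / d (focal length in pixels, baseline in meters).
    return f_px * baseline_m / disparity_px

def depth_resolution(f_px, baseline_m, z_m, ddisp_px):
    # First-order depth uncertainty for disparity uncertainty ddisp_px:
    # dZ = Z^2 * dd / (f * B)  -- degrades quadratically with range.
    return z_m ** 2 * ddisp_px / (f_px * baseline_m)
```

    The quadratic dependence on Z is why subvoxel (subpixel) disparity estimation is essential at the 1.8 m working distance described above.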

  9. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia.

    Science.gov (United States)

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-02-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. To investigate this question, anisometropic amblyopes (n = 13) were asked to complete two psychophysical supra-threshold binocular summation tasks before and after monocular training: (1) binocular phase combination and (2) dichoptic global motion coherence. We showed that these participants benefited from monocular training in terms of binocular combination. More importantly, the improvements observed with the area under log CSF (AULCSF) were found to be correlated with the improvements in binocular phase combination.
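    The AULCSF outcome measure mentioned above is, in essence, a numeric integral of the contrast sensitivity function on log axes. A minimal sketch (assuming trapezoidal integration over log10 frequency and log10 sensitivity; the study's exact fitting procedure is not specified here):

```python
import math

def aulcsf(freqs_cpd, sensitivities):
    # Area under the log CSF: trapezoidal rule over log10(spatial frequency)
    # vs log10(contrast sensitivity). Inputs must be sorted by frequency.
    xs = [math.log10(f) for f in freqs_cpd]
    ys = [math.log10(s) for s in sensitivities]
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2
               for i in range(len(xs) - 1))
```

    A larger area after training indicates improved sensitivity across the measured spatial-frequency range.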

  10. Flash 3D Rendezvous and Docking Sensor, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — 3D Flash Ladar is a breakthrough technology for many emerging and existing 3D vision areas, and sensor improvements will have an impact on nearly all these fields....

  11. 3D Reconstruction of NMR Images

    Directory of Open Access Journals (Sweden)

    Peter Izak

    2007-01-01

    Full Text Available This paper introduces an experiment in 3D reconstruction of NMR images scanned from a magnetic resonance device. Methods that can be used for 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, the Vision Assistant program, which is a part of LabVIEW, was chosen.
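    The marching cubes algorithm mentioned above starts by classifying each voxel cell: the eight corner samples are compared against the iso-value to form an 8-bit index into a triangle lookup table. A sketch of just that first step (illustrative only, independent of the LabVIEW/Vision Assistant implementation):

```python
def cube_index(corner_values, iso):
    # Marching cubes, step 1: encode which of the 8 cube corners lie below
    # the iso-value as an 8-bit index (0..255). The index selects one of the
    # 256 precomputed triangulation cases (lookup table not shown).
    idx = 0
    for bit, v in enumerate(corner_values):
        if v < iso:
            idx |= 1 << bit
    return idx
```

    Cells with index 0 or 255 (entirely outside or inside the surface) produce no triangles; all others emit a small, table-driven set of triangles.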

  12. Prevalence and causes of low vision and blindness in a rural Southwest Island of Japan: the Kumejima study.

    Science.gov (United States)

    Nakamura, Yuko; Tomidokoro, Atsuo; Sawaguchi, Shoichi; Sakai, Hiroshi; Iwase, Aiko; Araie, Makoto

    2010-12-01

    To determine the prevalence and causes of low vision and blindness in an adult population on a rural southwest island of Japan. Population-based, cross-sectional study. All residents of Kumejima Island, Japan, 40 years of age and older. Of the 4632 residents 40 years of age and older, 3762 (response rate, 81.2%) underwent a detailed ocular examination including measurement of the best-corrected visual acuity (BCVA) with a Landolt ring chart at 5 m. The age- and gender-specific prevalence rates of low vision and blindness were estimated and causes were identified. Low vision and blindness were defined, according to the definition of the World Health Organization, as a BCVA in the better eye below 20/60 to a lower limit of 20/400 and worse than 20/400, respectively. The prevalence of bilateral low vision was 0.58% (95% confidence interval [CI], 0.38-0.89). The primary causes of low vision were cataract (0.11%), corneal opacity (0.08%), retinitis pigmentosa (RP; 0.06%), and diabetic retinopathy (0.06%). The prevalence of bilateral blindness was 0.39% (95% CI, 0.23-0.65). The primary causes of blindness were RP (0.17%) and glaucoma (0.11%). The primary causes of monocular low vision were cataract (0.65%), corneal opacity (0.16%), age-related macular degeneration (0.16%), and diabetic retinopathy (0.11%), whereas those of monocular blindness were cataract (0.29%), trauma (0.25%), and glaucoma (0.22%). Logistic analysis showed that female gender (P = 0.001; odds ratio [OR], 7.37; 95% CI, 2.20-24.71) and lower body weight (P = 0.015; OR, 0.94; 95% CI, 0.90-0.99) were associated significantly with visual impairment. The prevalences of low vision and blindness in the adult residents of an island in southwest Japan were 1.5 to 3 times higher than the prevalences reported in an urban city on the Japanese mainland. The prevalence of visual impairment caused by RP on this island was much higher than on the mainland, suggesting a genetic characteristic of the population.
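    Confidence intervals for prevalence estimates like those quoted above can be computed from the case count and sample size. As an illustrative sketch (using the Wilson score interval; the study does not state which interval method it used), with hypothetical counts:

```python
import math

def wilson_ci(cases, n, z=1.96):
    # Wilson score confidence interval for a proportion cases/n
    # (z = 1.96 gives a 95% interval).
    p = cases / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half
```

    Unlike the simple normal approximation, the Wilson interval behaves sensibly for the small proportions typical of blindness prevalence.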

  13. Perspectives on Materials Science in 3D

    DEFF Research Database (Denmark)

    Juul Jensen, Dorte

    2012-01-01

    Materials characterization in 3D has opened a new era in materials science, which is discussed in this paper. The original motivations and visions behind the development of one of the new 3D techniques, namely the three dimensional x-ray diffraction (3DXRD) method, are presented and the route to its implementation is described. The present status of materials science in 3D is illustrated by examples related to recrystallization. Finally, challenges and suggestions for the future success of 3D Materials Science relating to hardware evolution, data analysis, data exchange and modeling...

  14. A Hybrid Architecture for Vision-Based Obstacle Avoidance

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Güzel

    2013-01-01

    Full Text Available This paper proposes a new obstacle avoidance method using a single monocular vision camera as the only sensor, called the Hybrid Architecture. This architecture integrates a high-performance appearance-based obstacle detection method into an optical flow-based navigation system. The hybrid architecture was designed and implemented to run both methods simultaneously and is able to combine the results of each method using a novel arbitration mechanism. The proposed strategy successfully fuses the two vision-based obstacle avoidance methods through this arbitration mechanism in order to provide a safer obstacle avoidance system. Accordingly, to establish the adequacy of the design of the obstacle avoidance system, a series of experiments was conducted. The results demonstrate the characteristics of the proposed architecture and show that its performance is somewhat better than that of the conventional optical flow-based architecture. In particular, the robot employing the Hybrid Architecture avoids lateral obstacles more smoothly and robustly than when using the conventional optical flow-based technique.
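    One simple way such an arbitration mechanism could look (a hypothetical sketch, not the paper's actual mechanism): let the appearance-based detector dominate when its obstacle confidence is high, and blend the two steering commands otherwise.

```python
def arbitrate(appearance_risk, appearance_steer, flow_steer, risk_threshold=0.7):
    # Hypothetical arbitration between two vision-based avoidance methods:
    # above the threshold the appearance-based detector overrides; below it,
    # steering commands are blended in proportion to the obstacle confidence.
    if appearance_risk >= risk_threshold:
        return appearance_steer               # imminent obstacle: hard override
    w = appearance_risk / risk_threshold      # smooth hand-over below threshold
    return w * appearance_steer + (1 - w) * flow_steer
```

    The smooth hand-over avoids the abrupt command switching that a pure winner-takes-all arbiter would produce.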

  15. Intelligent Vision System for Door Sensing Mobile Robot

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-08-01

    Full Text Available Wheeled mobile robots find numerous applications in indoor man-made structured environments. In order to operate effectively, a robot must be capable of sensing its surroundings. Computer vision is one of the prime research areas directed towards achieving these sensing capabilities. In this paper, we present a door-sensing mobile robot capable of navigating in indoor environments. A robust and inexpensive approach for recognition and classification of doors, based on a monocular vision system, helps the mobile robot in decision making. To prove the efficacy of the algorithm, we designed and developed a ‘differentially’ driven mobile robot. A wall-following behavior using ultrasonic range sensors is employed by the mobile robot for navigation in corridors. Field programmable gate arrays (FPGAs) have been used for the implementation of the PD controller for wall following and the PID controller to control the speed of the geared DC motor.
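    The PD wall-following loop described above can be sketched in a few lines (an illustrative software model; the paper implements it on an FPGA, and the gains and setpoint below are hypothetical):

```python
class PDController:
    # PD steering controller for wall following: the error is the difference
    # between the desired and measured ultrasonic range to the wall.
    def __init__(self, kp, kd, setpoint):
        self.kp, self.kd, self.setpoint = kp, kd, setpoint
        self.prev_err = 0.0

    def update(self, distance, dt):
        err = self.setpoint - distance
        deriv = (err - self.prev_err) / dt   # finite-difference derivative
        self.prev_err = err
        return self.kp * err + self.kd * deriv  # steering correction
```

    The derivative term damps oscillation about the desired wall distance; the separate PID motor-speed loop would follow the same pattern with an added integral term.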

  16. Linear study and bundle adjustment data fusion; Application to vision localization

    International Nuclear Information System (INIS)

    Michot, J.

    2010-01-01

    The works presented in this manuscript are in the field of computer vision and tackle the problem of real-time vision-based localization and 3D reconstruction. In this context, the trajectory of a camera and the 3D structure of the filmed scene are initially estimated by linear algorithms and then optimized by a nonlinear algorithm, bundle adjustment. The thesis first presents a new line-search technique dedicated to the nonlinear minimization algorithms used in Structure-from-Motion. The proposed technique is not iterative and can be quickly integrated into traditional bundle adjustment frameworks. This technique, called Global Algebraic Line Search (G-ALS), and its two-dimensional variant (Two way-ALS) accelerate the convergence of the bundle adjustment algorithm. Approximating the re-projection error by an algebraic distance enables the analytical calculation of an effective displacement amplitude (or two amplitudes for the Two way-ALS variant) by solving a degree-3 (G-ALS) or degree-5 (Two way-ALS) polynomial. Our experiments, conducted on simulated and real data, show that this amplitude, which is optimal for the algebraic distance, is also efficient for the Euclidean distance and reduces the convergence time of the minimizations. One difficulty of real-time tracking algorithms (monocular SLAM) is that the estimated trajectory is often affected by drift in absolute orientation, position and scale. Since these algorithms are incremental, errors and approximations accumulate along the trajectory and cause global drift. In addition, a tracking vision system can be dazzled at any time or used under conditions that temporarily prevent it from computing the location of the system. To solve these problems, we propose to use an additional sensor measuring the displacement of the camera. The type of sensor used will vary depending on the targeted application (an odometer for a vehicle, a lightweight inertial navigation system for a person). We propose to
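    The G-ALS idea can be illustrated schematically: if each algebraic residual along the search direction is (at most) quadratic in the step t, the cost f(t) = Σ(aᵢ + bᵢt + cᵢt²)² is quartic and its derivative is the degree-3 polynomial the thesis solves analytically. The sketch below finds the same root numerically by bisection (an illustration of the objective's structure, not the thesis's closed-form solution; it assumes a minimum is bracketed):

```python
def optimal_step(residuals, t_lo=-10.0, t_hi=10.0, tol=1e-10):
    # residuals: list of (a, b, c) so that r_i(t) = a + b*t + c*t^2.
    # f(t) = sum r_i(t)^2; df/dt is cubic in t. G-ALS solves that cubic
    # analytically; here we simply bisect on df/dt, assuming df(t_lo) < 0 <
    # df(t_hi), i.e. a minimum lies inside the bracket.
    def df(t):
        return sum(2 * (a + b * t + c * t * t) * (b + 2 * c * t)
                   for a, b, c in residuals)
    lo, hi = t_lo, t_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if df(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    Because the amplitude comes from a fixed-degree polynomial, the step computation is non-iterative in the analytic version, which is what makes it cheap to embed in each bundle adjustment iteration.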

  17. Automatic micropart assembly of 3-Dimensional structure by vision based control

    International Nuclear Information System (INIS)

    Wang, Lidai; Kim, Seung Min

    2008-01-01

    We propose a vision control strategy for performing automatic microassembly tasks in three dimensions (3-D) and develop the relevant control software: specifically, a 6 degree-of-freedom (DOF) robotic workstation controls a passive microgripper to automatically grasp a designated micropart from the chip, pivot the micropart, and then move it to be vertically inserted into a designated slot on the chip. In the proposed control strategy, the whole microassembly task is divided into two sequential subtasks, micro-grasping and micro-joining. To guarantee the success of the microassembly and the manipulation accuracy, two different two-stage feedback motion strategies, based on pattern matching and an auto-focus method, are employed, using the vision-based control system and the vision control software developed. Experiments demonstrate the efficiency and validity of the proposed control strategy

  18. Automatic micropart assembly of 3-Dimensional structure by vision based control

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Lidai [University of Toronto, Toronto (Canada); Kim, Seung Min [Korean Intellectual Property Office, Daejeon (Korea, Republic of)

    2008-12-15

    We propose a vision control strategy for performing automatic microassembly tasks in three dimensions (3-D) and develop the relevant control software: specifically, a 6 degree-of-freedom (DOF) robotic workstation controls a passive microgripper to automatically grasp a designated micropart from the chip, pivot the micropart, and then move it to be vertically inserted into a designated slot on the chip. In the proposed control strategy, the whole microassembly task is divided into two sequential subtasks, micro-grasping and micro-joining. To guarantee the success of the microassembly and the manipulation accuracy, two different two-stage feedback motion strategies, based on pattern matching and an auto-focus method, are employed, using the vision-based control system and the vision control software developed. Experiments demonstrate the efficiency and validity of the proposed control strategy

  19. 3D vs 2D laparoscopic systems: Development of a performance quantitative validation model.

    Science.gov (United States)

    Ghedi, Andrea; Donarini, Erica; Lamera, Roberta; Sgroi, Giovanni; Turati, Luca; Ercole, Cesare

    2015-01-01

    The new technology ensures 3D laparoscopic vision by adding depth to the traditional two dimensions. This realistic vision gives the surgeon the feeling of operating in real space. The Hospital of Treviglio-Caravaggio is not a university or scientific institution; in 2014 it acquired a new 3D laparoscopic technology, which prompted an evaluation of its appropriateness in terms of patient outcome and safety. The project aims at developing a quantitative validation model that ensures low cost and a reliable measure of the performance of 3D technology versus 2D mode. In addition, it aims at demonstrating how new technologies, such as open source hardware and software and 3D printing, can help research with no significant cost increase. For these reasons, in order to define criteria of appropriateness in the use of 3D technologies, a study was performed to technically validate, in terms of effectiveness, efficiency and safety, the use of 3D laparoscopic vision against the traditional 2D. 30 surgeons were enrolled to perform an exercise with laparoscopic forceps inside a trainer. In the exercise, surgeons with different levels of seniority, grouped by specialization (e.g. surgery, urology, gynecology), practiced videolaparoscopy with the two technologies (2D and 3D) on an anthropometric phantom. The task assigned to each surgeon was to pass "needle and thread" without touching the metal part in the shortest possible time. The rings selected for the exercise each had a difficulty coefficient determined by depth, diameter, and angle from the positioning and the point of view. The analysis of the data collected from the above exercise mathematically confirmed that the 3D technique ensures a shorter learning curve in novices and greater accuracy in the performance of the task with respect to 2D.

  20. Monocular and binocular visual impairment in the UK Biobank study: prevalence, associations and diagnoses.

    Science.gov (United States)

    McKibbin, Martin; Farragher, Tracey M; Shickle, Darren

    2018-01-01

    To determine the prevalence of, associations with and diagnoses leading to mild visual impairment or worse (logMAR >0.3) in middle-aged adults in the UK Biobank study. Prevalence estimates for monocular and binocular visual impairment were determined for the UK Biobank participants with fundus photographs and spectral domain optical coherence tomography images. Associations with socioeconomic, biometric, lifestyle and medical variables were investigated for cases with visual impairment and matched controls, using multinomial logistic regression models. Self-reported eye history and image grading results were used to identify the primary diagnoses leading to visual impairment for a sample of 25% of cases. For the 65 033 UK Biobank participants, aged 40-69 years and with fundus images, 6682 (10.3%) and 1677 (2.6%) had mild visual impairment or worse in one or both eyes, respectively. Increasing deprivation, age and ethnicity were independently associated with both monocular and binocular visual impairment. No primary diagnosis for the recorded level of visual impairment could be identified for 49.8% of eyes. The most common identifiable diagnoses leading to visual impairment were cataract, amblyopia, uncorrected refractive error and vitreoretinal interface abnormalities. The prevalence of visual impairment in the UK Biobank study cohort is lower than for population-based studies from other industrialised countries. Monocular and binocular visual impairment are associated with increasing deprivation, age and ethnicity. The UK Biobank dataset does not allow confident identification of the causes of visual impairment, and the results may not be applicable to the wider UK population.

  1. Monocular and binocular visual impairment in the UK Biobank study: prevalence, associations and diagnoses

    Science.gov (United States)

    Farragher, Tracey M; Shickle, Darren

    2018-01-01

    Objective To determine the prevalence of, associations with and diagnoses leading to mild visual impairment or worse (logMAR >0.3) in middle-aged adults in the UK Biobank study. Methods and analysis Prevalence estimates for monocular and binocular visual impairment were determined for the UK Biobank participants with fundus photographs and spectral domain optical coherence tomography images. Associations with socioeconomic, biometric, lifestyle and medical variables were investigated for cases with visual impairment and matched controls, using multinomial logistic regression models. Self-reported eye history and image grading results were used to identify the primary diagnoses leading to visual impairment for a sample of 25% of cases. Results For the 65 033 UK Biobank participants, aged 40–69 years and with fundus images, 6682 (10.3%) and 1677 (2.6%) had mild visual impairment or worse in one or both eyes, respectively. Increasing deprivation, age and ethnicity were independently associated with both monocular and binocular visual impairment. No primary diagnosis for the recorded level of visual impairment could be identified for 49.8% of eyes. The most common identifiable diagnoses leading to visual impairment were cataract, amblyopia, uncorrected refractive error and vitreoretinal interface abnormalities. Conclusions The prevalence of visual impairment in the UK Biobank study cohort is lower than for population-based studies from other industrialised countries. Monocular and binocular visual impairment are associated with increasing deprivation, age and ethnicity. The UK Biobank dataset does not allow confident identification of the causes of visual impairment, and the results may not be applicable to the wider UK population. PMID:29657974

  2. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... video games will damage the eyes or visual system. Some people complain of headaches or motion sickness when viewing 3-D, ... damage in children with healthy eyes. The development of normal 3-D vision ... and natural environments, and this development is largely complete by age ...

  3. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... the techniques used to create the 3-D effect can confuse or overload the brain, causing some people discomfort even if they have normal vision. Taking a break from viewing usually relieves the discomfort. More on computer use and your eyes . Children and 3-D ...

  4. Distributed Monocular SLAM for Indoor Map Building

    Directory of Open Access Journals (Sweden)

    Ruwan Egodagamage

    2017-01-01

    Full Text Available Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM, an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras lead to monocular visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps, thus requiring a distributed computational framework. Each agent can generate its own local map, which can then be combined into a map covering a larger area. By doing so, agents can cover a given environment faster than a single agent. Furthermore, they can interact with each other in the same environment, making this framework more practical, especially for collaborative applications such as augmented reality. One of the main challenges of distributed SLAM is identifying overlapping maps, especially when the relative starting positions of agents are unknown. In this paper, we propose a system of multiple monocular agents, with unknown relative starting positions, that generates a semidense global map of the environment.
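Once correspondences between overlapping landmarks of two local maps are found, merging reduces to estimating the transform between the maps; because monocular SLAM recovers structure only up to scale, that transform is a similarity. The sketch below is a generic closed-form (Umeyama-style) alignment under that assumption, not the specific merging method of the paper:

```python
import numpy as np

def align_maps(src, dst):
    """Closed-form (Umeyama) estimate of the similarity transform
    (scale s, rotation R, translation t) such that dst ~ s * R @ src + t.
    src, dst are (N, 3) arrays of matched 3D landmarks."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d                  # centred point sets
    cov = B.T @ A / len(src)                       # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))             # guard against reflection
    D = np.array([1.0, 1.0, d])
    R = (U * D) @ Vt                               # U @ diag(D) @ Vt
    sigma2 = (A ** 2).sum() / len(src)             # variance of src points
    s = (S * D).sum() / sigma2
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Given enough well-matched landmarks, the recovered transform maps one agent's local map into the other's frame, after which the maps can be fused.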

  5. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D.

    Science.gov (United States)

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H

    2017-08-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with the concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.

  6. Control monocular 3D dinámico basado en imagen

    Directory of Open Access Journals (Sweden)

    Luis Hernández Santana

    2011-09-01

    Full Text Available This paper presents a visual servo control system for position regulation of a robot manipulator with an eye-in-hand camera moving in 3D Cartesian space. The objective is to control the robot so that the image of a moving sphere remains at the center of the image plane with a constant radius. A control strategy with two cascaded loops is proposed: the inner loop solves the joint-level control and the outer loop implements the visual feedback control. The robot and the vision system are modeled for small variations around the operating point for position control. Under these conditions, the stability of the system and the steady-state response for object trajectories are shown. To illustrate the performance of the system, experimental results for an ASEA IRB6 manipulator are presented.
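The outer visual-feedback loop of such a cascade can be illustrated with a toy proportional law that drives the sphere's image to the image-plane centre and uses the image radius as a depth surrogate. The gains, image geometry and feature-to-velocity mapping below are illustrative assumptions, not the paper's identified model:

```python
def visual_servo_step(u, v, r, u0=320.0, v0=240.0, r0=40.0,
                      k_xy=0.01, k_z=0.05):
    """Return a Cartesian velocity command (vx, vy, vz) for the camera.

    (u, v) is the sphere centroid in pixels and r its image radius;
    (u0, v0) is the image centre and r0 the desired radius, which acts
    as a surrogate for the desired depth. k_xy, k_z are hypothetical gains.
    """
    vx = -k_xy * (u - u0)   # cancel horizontal image-plane error
    vy = -k_xy * (v - v0)   # cancel vertical image-plane error
    vz = -k_z * (r - r0)    # radius error drives motion along the optical axis
    return vx, vy, vz
```

In the cascaded scheme, a command like this would be passed to the inner joint-control loop, which tracks the requested Cartesian velocity.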

  7. Near vision anomalies in Black high school children in Empangeni, South Africa: A pilot study

    Directory of Open Access Journals (Sweden)

    Sam O. Wajuihian

    2014-08-01

    Full Text Available Background: The ability to read efficiently and comfortably is important in the intellectual development and academic performance of a child. Some children experience difficulties when reading due to symptoms related to near vision anomalies. Aim: To explore the feasibility of conducting a large study to determine the prevalence, distribution and characteristics of near vision anomalies in high school children in Empangeni, South Africa. Methods: The study was a cross-sectional descriptive pilot study designed to provide preliminary data on the prevalence, distribution and characteristics of near vision anomalies in a sample of high school children in South Africa. Study participants comprised 65 Black children (30 males and 35 females); ages ranged between 13 and 19 years, with a mean age and standard deviation of 17 ± 1.43 years. The visual functions evaluated and the techniques used included visual acuity (LogMAR acuity chart), refractive error (autorefractor and subjective refraction), heterophoria (von Graefe), near point of convergence (push-in-to-double), amplitude of accommodation (push-in-to-blur), accommodation facility (± 2 D flipper lenses), relative accommodation, accommodation response (monocular estimation method) and fusional vergences (step vergence with prism bars). Possible associations between symptoms and near vision anomalies were explored using a 20-point symptoms questionnaire. Results: Prevalence estimates were: myopia 4.8%, hyperopia 1.6% and astigmatism 1.6%. For accommodative anomalies, 1.6% had accommodative insufficiency while 1.6% had accommodative infacility. For convergence anomalies, 3.2% had receded near point of convergence, 16% had low suspect convergence insufficiency, no participant had high suspect convergence insufficiency, 1.6% had definite convergence insufficiency and 3.2% had convergence excess. Female participants reported more symptoms than the males and the association between clinical measures and symptoms

  8. Why can't my child see 3D television?

    Science.gov (United States)

    Creavin, Alexandra L; Creavin, Samuel T; Brown, Raymond D; Harrad, Richard A

    2014-08-01

    A child encountering difficulty in watching three-dimensional (3D) stereoscopic displays could have an underlying ocular disorder. It is therefore valuable to understand the differential diagnoses and so conduct an appropriate clinical assessment to address concerns about poor 3D vision.

  9. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
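For the stereo-vision approach mentioned above, depth follows from the standard rectified pinhole relation Z = f·B/d, where f is the focal length in pixels, B the camera baseline and d the disparity. The helper below is a minimal sketch of that formula; the parameter names are illustrative, not taken from the text:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a point from rectified stereo: Z = f * B / d.

    disparity_px: horizontal pixel offset of the point between the two views;
    focal_px: focal length in pixels; baseline_m: camera separation in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

Applying this per matched pixel over the whole image pair yields the 3D surface representation the abstract refers to.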

  10. Monocular and binocular development in children with albinism, infantile nystagmus syndrome, and normal vision

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.

    2013-01-01

    Abstract Background/aims: To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Methods: Interocular acuity

  11. Monocular deprivation of Fourier phase information boosts the deprived eye's dominance during interocular competition but not interocular phase combination.

    Science.gov (United States)

    Bai, Jianying; Dong, Xue; He, Sheng; Bao, Min

    2017-06-03

    Ocular dominance has been extensively studied, often with the goal to understand neuroplasticity, which is a key characteristic within the critical period. Recent work on monocular deprivation, however, demonstrates residual neuroplasticity in the adult visual cortex. After deprivation of patterned inputs by monocular patching, the patched eye becomes more dominant. Since patching blocks both the Fourier amplitude and phase information of the input image, it remains unclear whether deprivation of the Fourier phase information alone is able to reshape eye dominance. Here, for the first time, we show that removing of the phase regularity without changing the amplitude spectra of the input image induced a shift of eye dominance toward the deprived eye, but only if the eye dominance was measured with a binocular rivalry task rather than an interocular phase combination task. These different results indicate that the two measurements are supported by different mechanisms. Phase integration requires the fusion of monocular images. The fused percept highly relies on the weights of the phase-sensitive monocular neurons that respond to the two monocular images. However, binocular rivalry reflects the result of direct interocular competition that strongly weights the contour information transmitted along each monocular pathway. Monocular phase deprivation may not change the weights in the integration (fusion) mechanism much, but alters the balance in the rivalry (competition) mechanism. Our work suggests that ocular dominance plasticity may occur at different stages of visual processing, and that homeostatic compensation also occurs for the lack of phase regularity in natural scenes. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
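The stimulus manipulation described above, removing phase regularity while leaving the amplitude spectrum unchanged, is commonly implemented by Fourier phase scrambling. The sketch below is a generic version of that manipulation (not the authors' stimulus code), using the phase of white noise so that the scrambled image remains real-valued:

```python
import numpy as np

def phase_scramble(img, rng=None):
    """Return an image with the same Fourier amplitude spectrum as `img`
    but with its phase replaced by that of white noise. The noise phase is
    Hermitian-symmetric (noise is real), so the inverse FFT is real."""
    rng = np.random.default_rng() if rng is None else rng
    amp = np.abs(np.fft.fft2(img))                         # amplitude kept
    noise_phase = np.angle(np.fft.fft2(rng.normal(size=img.shape)))
    return np.fft.ifft2(amp * np.exp(1j * noise_phase)).real
```

The output has the natural-scene amplitude statistics of the input but none of its phase structure, which is exactly the dissociation the deprivation paradigm relies on.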

  12. In-line 3D print failure detection using computer vision

    DEFF Research Database (Denmark)

    Lyngby, Rasmus Ahrenkiel; Wilm, Jakob; Eiríksson, Eyþór Rúnar

    2017-01-01

    Here we present our findings on a novel real-time vision system that allows for automatic detection of failure conditions that are considered outside of nominal operation. These failure modes include warping, build plate delamination and extrusion failure. Our system consists of a calibrated came...

  13. Effect of monocular deprivation on rabbit neural retinal cell densities

    Directory of Open Access Journals (Sweden)

    Philip Maseghe Mwachaka

    2015-01-01

    Conclusion: In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye along with reduced cell densities in the deprived eye.

  14. Monocular and binocular development in children with albinism, infantile nystagmus syndrome and normal vision

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.

    2013-01-01

    Background/aims: To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Methods: Interocular acuity differences and

  15. 3-D Imaging Systems for Agricultural Applications—A Review

    Directory of Open Access Journals (Sweden)

    Manuel Vázquez-Arellano

    2016-04-01

    Full Text Available Increasing the efficiency of resources through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  16. Theoretical Design and First Test in Laboratory of a Composite Visual Servo-Based Target Spray Robotic System

    Directory of Open Access Journals (Sweden)

    Dongjie Zhao

    2016-01-01

    Full Text Available In order to spray onto the canopy of interval-planted crops, an approach using a target spray robot with a composite visual servo system, based on monocular scene vision and monocular eye-in-hand vision, was proposed. The scene camera was used to roughly locate the target crop, and the image-processing methods for background segmentation, crop canopy centroid extraction, and 3D positioning were studied. The eye-in-hand camera was used to precisely determine the spray position of each crop. Based on the center and area of the 2D minimum enclosing circle (MEC) of the crop canopy, a method to calculate the spray position and spray time was determined. In addition, the locating algorithm for the MEC center in the nozzle reference frame and the hand-eye calibration matrix were studied. The process of the mechanical arm guiding the nozzle to spray was divided into three stages: reset, alignment, and hovering spray, and the servo method of each stage was investigated. For preliminary verification of the theoretical studies on the approach, a simplified experimental prototype containing one spray mechanical arm was built and performance tests were carried out under a controlled environment in the laboratory. The results showed that the prototype could achieve the effect of "spraying while moving and accurately spraying on target."
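The 3D positioning step, locating the MEC centre in a camera-relative frame, can be sketched with standard pinhole back-projection once the depth of the canopy is known. The intrinsics below are illustrative placeholders, not the paper's calibration values:

```python
def mec_center_to_camera_frame(u, v, depth, fx, fy, cx, cy):
    """Back-project the MEC centre pixel (u, v) at known depth Z into the
    camera frame using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy.
    fx, fy are focal lengths in pixels; (cx, cy) is the principal point."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return X, Y, depth
```

The resulting camera-frame point would then be mapped into the nozzle reference frame with the hand-eye calibration matrix the abstract mentions.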

  17. 3-D Mapping Technologies For High Level Waste Tanks

    International Nuclear Information System (INIS)

    Marzolf, A.; Folsom, M.

    2010-01-01

    This research investigated four techniques that could be applicable for mapping of solids remaining in radioactive waste tanks at the Savannah River Site: stereo vision, LIDAR, flash LIDAR, and Structure from Motion (SfM). Stereo vision is the least appropriate technique for the solids mapping application. Although the equipment cost is low and repackaging would be fairly simple, the algorithms to create a 3D image from stereo vision would require significant further development and may not even be applicable since stereo vision works by finding disparity in feature point locations from the images taken by the cameras. When minimal variation in visual texture exists for an area of interest, it becomes difficult for the software to detect correspondences for that object. SfM appears to be appropriate for solids mapping in waste tanks. However, equipment development would be required for positioning and movement of the camera in the tank space to enable capturing a sequence of images of the scene. Since SfM requires the identification of distinctive features and associates those features to their corresponding instantiations in the other image frames, mockup testing would be required to determine the applicability of SfM technology for mapping of waste in tanks. There may be too few features to track between image frame sequences to employ the SfM technology since uniform appearance may exist when viewing the remaining solids in the interior of the waste tanks. Although scanning LIDAR appears to be an adequate solution, the expense of the equipment ($80,000-$120,000) and the need for further development to allow tank deployment may prohibit utilizing this technology. The development would include repackaging of equipment to permit deployment through the 4-inch access ports and to keep the equipment relatively uncontaminated to allow use in additional tanks. 3D flash LIDAR has a number of advantages over stereo vision, scanning LIDAR, and SfM, including full frame

  18. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... viewer has a problem with focusing or depth perception. Also, the techniques used to create the 3- ... or other conditions that persistently inhibit focusing, depth perception or normal 3-D vision, would have difficulty ...

  19. 3D Data Acquisition Platform for Human Activity Understanding

    Science.gov (United States)

    2016-03-02

    In this project, we incorporated motion capture devices, 3D vision sensors, and EMG sensors to cross validate...multimodality data acquisition, and address fundamental research problems of representation and invariant description of 3D data, human motion modeling and...applications of human activity analysis, and computational optimization of large-scale 3D data. The support for the acquisition of such research

  20. Quantum vision in three dimensions

    Science.gov (United States)

    Roth, Yehuda

    We present four models for describing 3-D vision. As in the mirror scenario, our models allow 3-D vision with no need for additional accessories such as stereoscopic glasses or a hologram film. These four models are based on brain interpretation rather than pure objective encryption. We consider the observer's "subjective" selection of a measuring device, and the corresponding quantum collapse into one of the selected states, as a tool for interpreting reality according to the observer's concepts. This is the basic concept of our study and it is introduced in the first model. The other models suggest "softened" versions that might be much easier to implement. Our quantum interpretation approach contributes to the following fields. Technology: the proposed models can be implemented in real devices, allowing 3-D vision without additional accessories. Artificial intelligence: in the desire to create a machine that exchanges information using human terminology, our interpretation approach seems appropriate.

  1. X-ray machine vision and computed tomography

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    This survey examines how 2-D x-ray machine vision and 3-D computed tomography will be used in industry in the 1988-1995 timeframe. Specific applications are described and rank-ordered in importance. The types of companies selling and using 2-D and 3-D systems are profiled, and markets are forecast for 1988 to 1995. It is known that many machine vision and automation companies are now considering entering this field. This report looks at the potential pitfalls and whether recent market problems similar to those recently experienced by the machine vision industry will likely occur in this field. FTS will publish approximately 100 other surveys in 1988 on emerging technology in the fields of AI, manufacturing, computers, sensors, photonics, energy, bioengineering, and materials

  2. Weight prediction of broiler chickens using 3D computer vision

    DEFF Research Database (Denmark)

    Mortensen, Anders Krogh; Lisouski, Pavel; Ahrendt, Peter

    2016-01-01

    a platform weigher which may also include ill birds. In the current study, a fully automatic 3D camera-based weighing system for broilers has been developed and evaluated in a commercial production environment. Specifically, a low-cost 3D camera (Kinect) that directly returned a depth image was employed...

  3. Computer vision system R&D for EAST Articulated Maintenance Arm robot

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Linglong, E-mail: linglonglin@ipp.ac.cn; Song, Yuntao, E-mail: songyt@ipp.ac.cn; Yang, Yang, E-mail: yangy@ipp.ac.cn; Feng, Hansheng, E-mail: hsfeng@ipp.ac.cn; Cheng, Yong, E-mail: chengyong@ipp.ac.cn; Pan, Hongtao, E-mail: panht@ipp.ac.cn

    2015-11-15

    Highlights: • We discuss the image preprocessing, object detection and pose estimation algorithms under the poor lighting conditions of the inner vessel of the EAST tokamak. • The main pipeline, including contour detection, contour filtering, MER extraction, object location and pose estimation, is described in detail. • The technical issues encountered during the research are discussed. - Abstract: Experimental Advanced Superconducting Tokamak (EAST) is the first fully superconducting tokamak device, constructed at the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP). The EAST Articulated Maintenance Arm (EAMA) robot provides the means of in-vessel maintenance such as inspection and picking up fragments of the first wall. This paper presents a method to identify and locate the fragments semi-automatically using computer vision. The use of computer vision for identification and location faces difficult challenges such as shadows, poor contrast, low illumination levels and sparse texture. The method developed in this paper enables credible identification of objects with shadows through an invariant image and edge detection. The proposed algorithms are validated on our ASIPP robotics and computer vision platform (ARVP). The results show that the method can provide a 3D pose with reference to the robot base, so that objects of different shapes and sizes can be picked up successfully.
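As a simplified stand-in for the detection stage of such a pipeline, the sketch below thresholds an image and returns the axis-aligned bounding rectangle and centroid of the foreground. The real system works on extracted contours and a minimum enclosing rectangle (MER); this toy version only illustrates the locate-the-object step under that simplifying assumption:

```python
import numpy as np

def locate_object(img, thresh):
    """Threshold `img` and return ((x0, y0, x1, y1), (cx, cy)): the
    axis-aligned bounding rectangle and centroid of the foreground pixels,
    or None if nothing exceeds the threshold. A stand-in for the
    contour-detection / filtering / MER stage of a full pipeline."""
    mask = img > thresh
    ys, xs = np.nonzero(mask)          # row/column indices of foreground
    if xs.size == 0:
        return None                    # no object detected
    rect = (xs.min(), ys.min(), xs.max(), ys.max())
    centroid = (xs.mean(), ys.mean())
    return rect, centroid
```

In a full system, the centroid and rectangle would feed the pose-estimation step that expresses the object's location in the robot's base frame.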

  4. Line search and bundle adjustment data fusion; Application to vision localization; Recherche lineaire et fusion de donnees par ajustement de faisceaux; Application a la localisation par vision

    Energy Technology Data Exchange (ETDEWEB)

    Michot, J.

    2010-12-09

    The works presented in this manuscript lie in the field of computer vision and tackle the problem of real-time vision-based localization and 3D reconstruction. In this context, the trajectory of a camera and the 3D structure of the filmed scene are initially estimated by linear algorithms and then optimized by a nonlinear algorithm, bundle adjustment. The thesis first presents a new line search technique dedicated to the nonlinear minimization algorithms used in Structure-from-Motion. The proposed technique is not iterative and can be readily integrated into traditional bundle adjustment frameworks. This technique, called Global Algebraic Line Search (G-ALS), together with its two-dimensional variant (Two-way ALS), accelerates the convergence of the bundle adjustment algorithm. Approximating the reprojection error by an algebraic distance enables the analytical calculation of an effective displacement amplitude (or two amplitudes for the Two-way ALS variant) by solving a degree-3 (G-ALS) or degree-5 (Two-way ALS) polynomial. Our experiments, conducted on simulated and real data, show that this amplitude, which is optimal for the algebraic distance, is also efficient for the Euclidean distance and reduces the convergence time of minimizations. One difficulty of real-time tracking algorithms (monocular SLAM) is that the estimated trajectory is often affected by drifts in absolute orientation, position and scale. Since these algorithms are incremental, errors and approximations accumulate along the trajectory and cause global drifts. In addition, a vision-based tracking system can be dazzled or used under conditions that temporarily prevent it from computing its location. To solve these problems, we propose to use an additional sensor measuring the displacement of the camera. The type of sensor used varies with the targeted application (an odometer for a vehicle, a lightweight inertial navigation system for a person). We propose to
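
    The closed-form step idea can be illustrated with a small sketch: if each residual is modeled as quadratic in the step amplitude, the algebraic cost is a degree-4 polynomial in that amplitude, so its derivative is a degree-3 polynomial whose roots give the candidate steps. This is an illustration of the principle, not the thesis's exact formulation; the residual model and names are assumptions:

```python
import numpy as np

def algebraic_line_search(a, b, c):
    """Pick a step amplitude by minimizing an algebraic cost along a direction.

    Each residual is modeled as r_i(lam) = a[i] + b[i]*lam + c[i]*lam**2,
    so cost(lam) = sum_i r_i(lam)**2 is a degree-4 polynomial and its
    derivative a degree-3 polynomial, solved here via np.roots.
    """
    a, b, c = map(np.asarray, (a, b, c))
    # Polynomial coefficients of the cost, highest degree first.
    cost = [np.sum(c * c), 2 * np.sum(b * c), np.sum(b * b + 2 * a * c),
            2 * np.sum(a * b), np.sum(a * a)]
    dcost = np.polyder(cost)              # degree-3 derivative polynomial
    lams = np.roots(dcost)
    lams = lams[np.isreal(lams)].real     # keep real candidate steps
    return min(lams, key=lambda lam: np.polyval(cost, lam))
```

    With purely linear residuals (all c[i] = 0) this reduces to the exact minimizer of a quadratic, as a sanity check.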

  5. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    Science.gov (United States)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are a spreading technology in the avionics domain. Several studies demonstrate enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, a steady change and evolution has been underway concerning the primary flight display (PFD) and the navigation display (ND). The main improvements of the ND comprise the representation of colored enhanced ground proximity warning system (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of providing a 3D perspective view in the SVS-PFD while leaving the navigational content and methods of interaction unchanged, the question arises whether, and how, the gap between the two displays might evolve into a serious problem. This issue becomes important in relation to the transition and combination of strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, are discussed. Furthermore, a concept for the integration of a 3D perspective view, i.e., a bird's eye view, into a synthetic vision ND is presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness. It might prove to further raise the safety margin when operating in mountainous areas.

  6. Stereo using monocular cues within the tensor voting framework.

    Science.gov (United States)

    Mordohai, Philippos; Medioni, Gérard

    2006-06-01

    We address the fundamental problem of matching in two static images. The remaining challenges are related to occlusion and lack of texture. Our approach addresses these difficulties within a perceptual organization framework, considering both binocular and monocular cues. Initially, matching candidates for all pixels are generated by a combination of matching techniques. The matching candidates are then embedded in disparity space, where perceptual organization takes place in 3D neighborhoods and, thus, does not suffer from problems associated with scanline or image neighborhoods. The assumption is that correct matches produce salient, coherent surfaces, while wrong ones do not. Matching candidates that are consistent with the surfaces are kept and grouped into smooth layers. Thus, we achieve surface segmentation based on geometric and not photometric properties. Surface overextensions, which are due to occlusion, can be corrected by removing matches whose projections are not consistent in color with their neighbors of the same surface in both images. Finally, the projections of the refined surfaces on both images are used to obtain disparity hypotheses for unmatched pixels. The final disparities are selected after a second tensor voting stage, during which information is propagated from more reliable pixels to less reliable ones. We present results on widely used benchmark stereo pairs.

  7. Combining 3D structure of real video and synthetic objects

    Science.gov (United States)

    Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon

    1998-04-01

    This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to use the proposed technology in the fields of advanced animation, virtual reality, games, and so forth. Computer graphics has long been used in these fields. Recently, some applications have added real video to graphic scenes for the purpose of augmenting the realism that computer graphics lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than the exclusive use of computer graphics. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with a 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map; graphic objects are then combined with the height map. The realization of our proposed approach is carried out in the following steps: (1) We derive the 3D structure from test image sequences. Due to the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or a Bezier surface, and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map. Because the 3D structure of the height map is already known, step (3) is easily accomplished. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.
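
    Step (2), turning the height map into a triangulated surface, can be sketched for a regular grid: splitting each grid cell into two triangles yields a valid triangulation equivalent to one Delaunay choice on uniformly spaced samples. The function name and row-major indexing are illustrative, not taken from the paper:

```python
def heightmap_to_mesh(heights):
    """Triangulate a regular-grid height map.

    heights[r][c] is the elevation at grid cell (r, c).  Each grid square
    is split into two triangles; vertices are indexed row-major.
    """
    rows, cols = len(heights), len(heights[0])
    vertices = [(c, r, heights[r][c]) for r in range(rows) for c in range(cols)]
    triangles = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c             # top-left corner of the cell
            triangles.append((i, i + 1, i + cols))
            triangles.append((i + 1, i + cols + 1, i + cols))
    return vertices, triangles
```

    Each triangle is then texture-mapped from the source video frame, and synthetic objects can be placed using the known per-vertex elevations.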

  8. Distributed Monocular SLAM for Indoor Map Building

    OpenAIRE

    Ruwan Egodagamage; Mihran Tuceryan

    2017-01-01

    Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras lead to monocular visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps,...

  9. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    Directory of Open Access Journals (Sweden)

    Taekjun Oh

    2015-07-01

    Full Text Available Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, under the assumption that the wall is normal to the ground and vertically flat. This assumption can be relaxed, however, because the subsequent feature matching process rejects outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.
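
    The hybrid 3D-coordinate computation can be sketched with a pinhole camera model: the 2D laser scan supplies the depth along the viewing ray (valid under the vertical-wall assumption), and the pixel is back-projected with the camera intrinsics. The paper's actual formulation may differ; parameter names here are illustrative:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) to a 3D point in the camera frame.

    depth is the range along the optical axis, taken here from the 2D
    laser scan under the assumption that the wall is vertical and flat.
    fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

    A feature at the principal point maps straight onto the optical axis; features on an inclined wall receive a wrong depth and are later discarded as matching outliers.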

  10. Prism therapy and visual rehabilitation in homonymous visual field loss.

    LENUS (Irish Health Repository)

    O'Neill, Evelyn C

    2012-02-01

    PURPOSE: Homonymous visual field defects (HVFD) are common and frequently occur after cerebrovascular accidents. They significantly impair visual function and cause disability, particularly with regard to visual exploration. The purpose of this study was to assess a novel interventional treatment, monocular prism therapy, on visual functioning in patients with HVFD of varied etiology using vision-targeted, health-related quality of life (QOL) questionnaires. Our secondary aim was to confirm monocular and binocular visual field expansion pre- and posttreatment. METHODS: Twelve patients with acquired, documented HVFD were eligible for inclusion. All patients completed a specific vision-targeted, health-related QOL questionnaire and underwent monocular and binocular Goldmann perimetry before commencing prism therapy. Patients were fitted with monocular prisms on the side of the HVFD, with the base in the direction of the field defect, creating a peripheral optical exotropia and field expansion. After the treatment period, the QOL questionnaires and perimetry were repeated. RESULTS: Twelve patients were included in the treatment group, 10 of whom were included in the data analysis. Overall, there was significant improvement within multiple vision-related QOL functioning parameters, specifically within the domains of general health (p < 0.01), general vision (p < 0.05), distance vision (p < 0.01), peripheral vision (p < 0.05), role difficulties (p < 0.05), dependency (p < 0.05), and social functioning (p < 0.05). Visual field expansion was shown when measured monocularly and binocularly during the study period in comparison with pretreatment baselines. CONCLUSIONS: Patients with HVFD demonstrate decreased QOL. Monocular sector prisms can improve the QOL and expand the visual field in these patients.

  11. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... can be caused by 3-D digital products. However, children (or adults) who have these vision disorders may be more ...

  12. 3D laparoscopic surgery: a prospective clinical trial.

    Science.gov (United States)

    Agrusa, Antonino; Di Buono, Giuseppe; Buscemi, Salvatore; Cucinella, Gaspare; Romano, Giorgio; Gulotta, Gaspare

    2018-04-03

    Since its introduction, laparoscopic surgery has represented a real revolution in clinical practice. The use of a new-generation three-dimensional (3D) HD laparoscopic system can be considered a favorable "hybrid" combining two different elements: the feasibility and diffusion of laparoscopy, and improved quality of vision. In this study we report our clinical experience with the use of a three-dimensional (3D) HD vision system for laparoscopic surgery. Between 2013 and 2017 a prospective cohort study was conducted at the University Hospital of Palermo. We considered 163 patients who underwent laparoscopic three-dimensional (3D) HD surgery for various indications. This 3D group was compared to a retrospective-prospective control group of patients who underwent the same surgical procedures. Considering specific surgical procedures, there is no significant difference in terms of age and gender. The analysis of all the disease groups shows that the laparoscopic procedures performed with 3D technology have a shorter mean operative time than comparable 2D procedures when we consider surgery that requires complex tasks. The use of 3D laparoscopic technology is an extraordinary innovation in clinical practice, but the instrumentation is still not widespread. Precisely for this reason, the studies in the literature are few and mainly limited to the evaluation of surgical skills on simulators. This study aims to evaluate the actual benefits of the 3D laparoscopic system by integrating it into clinical practice. The three-dimensional view allows advanced performance in particular conditions, such as small and deep spaces, and facilitates the performance of complex laparoscopic surgical procedures.

  13. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... 3-D digital images. ...

  14. Comparison of 2D and 3D Vision Gaze with Simultaneous Measurements of Accommodation and Convergence

    OpenAIRE

    Hori, Hiroki; Shiomi, Tomoki; Hasegawa, Satoshi; Takada, Hiroki; Omori, Masako; Matsuura, Yasuyuki; Ishio, Hiromu; Miyao, Masaru

    2014-01-01

    Accommodation and convergence were measured simultaneously while subjects viewed 2D and 3D images. The aim was to compare fixation distances between accommodation and convergence in young subjects while they viewed 2D and 3D images. Measurements were made three times, 40 seconds each, using 2D and 3D images. The result suggests that ocular functions during viewing of 3D images are very similar to those during natural viewing. Previously established and widely used theories, such that within a...

  15. Thalamocortical dynamics of the McCollough effect: boundary-surface alignment through perceptual learning.

    Science.gov (United States)

    Grossberg, Stephen; Hwang, Seungwoo; Mingolla, Ennio

    2002-05-01

    This article further develops the FACADE neural model of 3-D vision and figure-ground perception to quantitatively explain properties of the McCollough effect (ME). The model proposes that many ME data result from visual system mechanisms whose primary function is to adaptively align, through learning, boundary and surface representations that are positionally shifted due to the process of binocular fusion. For example, binocular boundary representations are shifted by binocular fusion relative to monocular surface representations, yet the boundaries must become positionally aligned with the surfaces to control binocular surface capture and filling-in. The model also includes perceptual reset mechanisms that use habituative transmitters in opponent processing circuits. Thus the model shows how ME data may arise from a combination of mechanisms that have a clear functional role in biological vision. Simulation results with a single set of parameters quantitatively fit data from 13 experiments that probe the nature of achromatic/chromatic and monocular/binocular interactions during induction of the ME. The model proposes how perceptual learning, opponent processing, and habituation at both monocular and binocular surface representations are involved, including early thalamocortical sites. In particular, it explains the anomalous ME utilizing these multiple processing sites. Alternative models of the ME are also summarized and compared with the present model.

  16. Characteristics of visual fatigue under the effect of 3D animation.

    Science.gov (United States)

    Chang, Yu-Shuo; Hsueh, Ya-Hsin; Tung, Kwong-Chung; Jhou, Fong-Yi; Lin, David Pei-Cheng

    2015-01-01

    Visual fatigue is commonly encountered in modern life. The clinical characteristics of visual fatigue caused by 2-D and 3-D animations may differ, but have not been characterized in detail. This study sought to distinguish the differential effects on visual fatigue caused by 2-D and 3-D animations. A total of 23 volunteers were subjected to accommodation and vergence assessments, followed by a 40-min video game program designed to aggravate their asthenopic symptoms. The volunteers were then assessed for accommodation and vergence parameters again, directed to watch a 5-min 3-D video program, and then assessed once more for the same parameters. The results indicate that 3-D animations produce visual fatigue characteristics similar in some specific aspects to those caused by 2-D animations. Furthermore, 3-D animations may lead to more exhaustion in both the ciliary and extra-ocular muscles, and these differential effects were more evident under high near-vision demand. The current results suggest that a dedicated set of indexes could be adopted in the design of 3-D displays and equipment.

  17. Vision based error detection for 3D printing processes

    Directory of Open Access Journals (Sweden)

    Baumann Felix

    2016-01-01

    Full Text Available 3D printers became more popular in the last decade, partly because of the expiration of key patents and the supply of affordable machines. Their origin lies in rapid prototyping. With Additive Manufacturing (AM) it is possible to create physical objects from 3D model data by layer-wise addition of material. Besides professional use for prototyping and low-volume manufacturing, they are becoming widespread amongst end users, starting with the so-called Maker Movement. The most prevalent type of consumer-grade 3D printer is Fused Deposition Modelling (FDM, also Fused Filament Fabrication, FFF). This work focuses on FDM machinery because of its widespread occurrence and large number of open problems, such as precision and failure. These 3D printers can fail to print objects at a statistical rate depending on the manufacturer and model of the printer. Failures can occur due to misalignment of the print bed or print head, slippage of the motors, warping of the printed material, lack of adhesion, or other reasons. The goal of this research is to provide an environment in which these failures can be detected automatically. Direct supervision is inhibited by the recommended placement of FDM printers in separate rooms away from the user due to ventilation issues. The inability to oversee the printing process leads to late or omitted detection of failures. Rejects cause material waste and wasted time, thus lowering the utilization of printing resources. Our approach consists of a camera-based error detection mechanism that provides a web-based interface for remote supervision and early failure detection. Early failure detection can lead to reduced time spent on broken prints, less material wasted, and in some cases salvaged objects.
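
    One simple camera-based detection strategy, sketched below, is frame differencing: compare consecutive grayscale frames and flag a stalled print when the fraction of changed pixels stays near zero for too long. This is an illustration of the general idea, not the paper's exact mechanism; the thresholds and function names are assumptions:

```python
def changed_fraction(prev, curr, pixel_thresh=25):
    """Fraction of pixels whose grayscale value changed noticeably
    between two consecutive frames (flattened pixel lists)."""
    changed = sum(1 for p, c in zip(prev, curr) if abs(p - c) > pixel_thresh)
    return changed / len(prev)

def print_stalled(recent_fractions, min_activity=0.001):
    """A healthy FDM print shows continual small frame-to-frame changes
    as material is deposited; near-zero change over many frames suggests
    a failure (stalled extruder, detached part) or a finished job."""
    return all(f < min_activity for f in recent_fractions)
```

    A production system would add more discriminative cues, e.g. comparing the observed outline of the part against the sliced layer geometry, before alerting the user through the web interface.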

  18. Efficient 3D scene modeling and mosaicing

    CERN Document Server

    Nicosevici, Tudor

    2013-01-01

    This book proposes a complete pipeline for monocular (single camera) 3D mapping of terrestrial and underwater environments. The aim is to provide a solution to large-scale scene modeling that is both accurate and efficient. To this end, we have developed a novel Structure from Motion algorithm that increases mapping accuracy by registering camera views directly with the maps. The camera registration uses a dual approach that adapts to the type of environment being mapped.   In order to further increase the accuracy of the resulting maps, a new method is presented that allows detection of images corresponding to the same scene region (crossovers). Crossovers are then used in conjunction with global alignment methods to greatly reduce estimation errors, especially when mapping large areas. Our method is based on the Visual Bag of Words (BoW) paradigm, offering a more efficient and simpler solution by eliminating the training stage generally required by state-of-the-art BoW algorithms.   Also, towards dev...
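
    At its core, crossover detection with a visual Bag of Words reduces to comparing word-frequency histograms of two images; a minimal cosine-similarity sketch follows (the vocabulary construction, thresholding, and geometric verification of the actual method are omitted, and the function name is illustrative):

```python
import math

def bow_similarity(hist1, hist2):
    """Cosine similarity between two visual-word histograms.

    A high score suggests the two images may show the same scene region
    (a crossover candidate) and warrants geometric verification before
    being used as a global-alignment constraint.
    """
    dot = sum(a * b for a, b in zip(hist1, hist2))
    n1 = math.sqrt(sum(a * a for a in hist1))
    n2 = math.sqrt(sum(b * b for b in hist2))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

    Identical histograms score 1.0, images with disjoint vocabularies score 0.0, and candidates above a chosen threshold are passed on for verification.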

  19. Optics, illumination, and image sensing for machine vision II

    International Nuclear Information System (INIS)

    Svetkoff, D.J.

    1987-01-01

    These proceedings collect papers on the general subject of machine vision. Topics include illumination and viewing systems, x-ray imaging, automatic SMT inspection with x-ray vision, and 3-D sensing for machine vision

  20. Multiple episodes of convergence in genes of the dim light vision pathway in bats.

    Directory of Open Access Journals (Sweden)

    Yong-Yi Shen

    Full Text Available The molecular basis of the evolution of phenotypic characters is very complex and is poorly understood, with few examples documenting the roles of multiple genes. Considering that a single gene cannot fully explain the convergence of phenotypic characters, we chose to study the convergent evolution of rod vision in two divergent bats from a network perspective. The Old World fruit bats (Pteropodidae) are non-echolocating and have binocular vision, whereas the sheath-tailed bats (Emballonuridae) are echolocating and have monocular vision; however, both have relatively large eyes and rely more on rod vision to find food and navigate at night. We found that the genes CRX, which plays an essential role in the differentiation of photoreceptor cells, SAG, which is involved in the desensitization of the photoactivated transduction cascade, and the photoreceptor gene RH, which is directly responsible for the perception of dim light, have undergone parallel sequence evolution in two divergent lineages of bats with larger eyes (Pteropodidae and Emballonuroidea). The multiple convergent events in the network of genes essential for rod vision are a rare phenomenon that illustrates the importance of investigating pathways and networks in the evolution of the molecular basis of phenotypic convergence.

  1. Comparative Geometrical Accuracy Investigations of Hand-Held 3d Scanning Systems - AN Update

    Science.gov (United States)

    Kersten, T. P.; Lindstaedt, M.; Starosta, D.

    2018-05-01

    Hand-held 3D scanning systems are increasingly available on the market from several system manufacturers. These systems are deployed for 3D recording of objects with different size in diverse applications, such as industrial reverse engineering, and documentation of museum exhibits etc. Typical measurement distances range from 0.5 m to 4.5 m. Although they are often easy-to-use, the geometric performance of these systems, especially the precision and accuracy, are not well known to many users. First geometrical investigations of a variety of diverse hand-held 3D scanning systems were already carried out by the Photogrammetry & Laser Scanning Lab of the HafenCity University Hamburg (HCU Hamburg) in cooperation with two other universities in 2016. To obtain more information about the accuracy behaviour of the latest generation of hand-held 3D scanning systems, HCU Hamburg conducted further comparative geometrical investigations using structured light systems with speckle pattern (Artec Spider, Mantis Vision PocketScan 3D, Mantis Vision F5-SR, Mantis Vision F5-B, and Mantis Vision F6), and photogrammetric systems (Creaform HandySCAN 700 and Shining FreeScan X7). In the framework of these comparative investigations geometrically stable reference bodies were used. The appropriate reference data was acquired by measurements with two structured light projection systems (AICON smartSCAN and GOM ATOS I 2M). The comprehensive test results of the different test scenarios are presented and critically discussed in this contribution.

  2. VEP-based acuity assessment in low vision.

    Science.gov (United States)

    Hoffmann, Michael B; Brands, Jan; Behrens-Baumann, Wolfgang; Bach, Michael

    2017-12-01

    Objective assessment of visual acuity (VA) is possible with VEP methodology, but established with sufficient precision only for vision better than about 1.0 logMAR. Here we explore whether this can be extended down to 2.0 logMAR, which would be highly desirable for low-vision evaluations. Based on the stepwise sweep algorithm (Bach et al. in Br J Ophthalmol 92:396-403, 2008), VEPs to monocular steady-state brief-onset pattern stimulation (7.5-Hz checkerboards, 40% contrast, 40 ms on, 93 ms off) were recorded for eight different check sizes, from 0.5° to 9.0°, in two runs with three occipital electrodes in a Laplace-approximating montage. We examined 22 visually normal participants whose acuity was reduced to ≈ 2.0 logMAR with frosted transparencies. With the established heuristic algorithm the "VEP acuity" was extracted and compared to psychophysical VA, both obtained at 57 cm distance. In 20 of the 22 participants with artificially reduced acuity, the automatic analysis indicated a valid result (1.80 logMAR on average) in at least one of the two runs. The 95% test-retest limits of agreement on average were ± 0.09 logMAR for psychophysical and ± 0.21 logMAR for VEP-derived acuity. For 15 participants we obtained results in both runs and averaged them. In 12 of these 15, the low-acuity results stayed within the 95% confidence interval (± 0.3 logMAR) established by Bach et al. (2008). The fully automated analysis yielded good agreement between psychophysical and electrophysiological VAs in 12 of 15 cases (80%) in the low-vision range down to 2.0 logMAR. This encourages us to further pursue this methodology and assess its value in patients.
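
    The 95% test-retest limits of agreement quoted above follow the standard Bland-Altman form: mean of the paired differences ± 1.96 × their standard deviation. A minimal sketch (function name is illustrative):

```python
import math

def limits_of_agreement(first_run, second_run):
    """Bland-Altman 95% limits of agreement between paired measurements,
    e.g. two acuity runs per participant, in logMAR."""
    diffs = [a - b for a, b in zip(first_run, second_run)]
    n = len(diffs)
    mean = sum(diffs) / n
    # sample standard deviation of the run-to-run differences
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean - 1.96 * sd, mean + 1.96 * sd
```

    The interval is centered on the mean difference, so a systematic bias between runs shifts both limits rather than widening them.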

  3. Structured Light-Based 3D Reconstruction System for Plants

    OpenAIRE

    Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud regi...

  4. Automatic Quality Inspection of Percussion Cap Mass Production by Means of 3D Machine Vision and Machine Learning Techniques

    Science.gov (United States)

    Tellaeche, A.; Arana, R.; Ibarguren, A.; Martínez-Otzeta, J. M.

    Exhaustive quality control is becoming very important in the globalized world market. One example where quality control becomes critical is percussion cap mass production. These elements must meet a minimum tolerance deviation in their fabrication. This paper outlines a machine vision development using a 3D camera for the inspection of the whole production of percussion caps. This system presents multiple problems, such as metallic reflections on the percussion caps, high-speed movement of the system, and mechanical errors and irregularities in percussion cap placement. Due to these problems, it is impossible to solve the problem by traditional image processing methods; hence, machine learning algorithms have been tested to provide a feasible classification of the possible errors present in the percussion caps.
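
    The abstract does not specify which learning algorithms were tested; purely as an illustration of the classification step, here is a minimal nearest-centroid classifier over feature vectors extracted from the 3D camera data. The feature set, class names, and interface are all assumptions:

```python
def fit_centroids(samples, labels):
    """Compute the mean feature vector (centroid) for each defect class."""
    sums, counts = {}, {}
    for feats, lab in zip(samples, labels):
        acc = sums.setdefault(lab, [0.0] * len(feats))
        for i, f in enumerate(feats):
            acc[i] += f
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(centroids, feats):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, feats))
    return min(centroids, key=lambda lab: dist2(centroids[lab]))
```

    In practice a more expressive model (e.g. an ensemble classifier) and features robust to metallic reflections would be needed, which is precisely why the authors moved beyond traditional image processing.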

  5. Image-Based Modeling Techniques for Architectural Heritage 3d Digitalization: Limits and Potentialities

    Science.gov (United States)

    Santagati, C.; Inzerillo, L.; Di Paola, F.

    2013-07-01

    3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS), and the different techniques of image matching, feature extraction and mesh optimization, are an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on their computer, whereas desktop systems demand long processing times and heavyweight approaches. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare 3D models produced by Autodesk 123D Catch with 3D models produced by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.

  6. Coherent laser vision system

    International Nuclear Information System (INIS)

    Sebastion, R.L.

    1995-01-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities decontamination and decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  7. Coherent laser vision system

    Energy Technology Data Exchange (ETDEWEB)

    Sebastion, R.L. [Coleman Research Corp., Springfield, VA (United States)

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities decontamination and decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  8. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... issued warnings about children's use of their new products. The original Nintendo warning, in late 2010, urged ... see the images when using 3-D digital products, this may indicate a vision or eye disorder. ...

  9. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... normal 3-D vision in children is stimulated as they use their eyes in day-to-day ... years. However, children who have eye conditions such as amblyopia (an imbalance in visual strength between the ...

  10. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... 6 years from prolonged viewing of the device's digital images, in order to avoid possible damage to ... clearly see the images when using 3-D digital products, this may indicate a vision or eye ...

  11. Vision based persistent localization of a humanoid robot for locomotion tasks

    Directory of Open Access Journals (Sweden)

    Martínez Pablo A.

    2016-09-01

Full Text Available Typical monocular localization schemes involve a search for matches between reprojected 3D world points and 2D image features in order to estimate the absolute scale transformation between the camera and the world. Successfully calculating such a transformation implies the existence of a good number of 3D points uniformly distributed as reprojected pixels around the image plane. This paper presents a method to steer the walk of a humanoid robot towards directions that are favorable for vision-based localization. To this end, orthogonal diagonalization is performed on the covariance matrices of both sets of 3D world points and their 2D image reprojections. Experiments with the NAO humanoid platform show that our method provides persistence of localization, as the robot tends to walk towards directions that are desirable for successful localization. Additional tests demonstrate how the proposed approach can be incorporated into a control scheme that considers reaching a target position.
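The diagonalization step can be sketched on the 2D reprojection side: the 2×2 covariance of the reprojected feature pixels has a closed-form eigensolve, and the ratio of its eigenvalues tells how (an)isotropically features spread over the image plane. The points below are synthetic; this is an illustration of the covariance analysis, not the paper's control law.

```python
import math

# Closed-form eigenvalues of the 2x2 covariance of a set of 2D points.
# Synthetic sketch of the covariance-diagonalization idea in the record above.
def covariance_2d(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    return sxx, sxy, syy

def principal_axes(points):
    """Return (major, minor) eigenvalues of the 2x2 point covariance."""
    sxx, sxy, syy = covariance_2d(points)
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    root = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 + root, tr / 2.0 - root
```

A large major/minor ratio means the reprojections are squeezed along one image direction, i.e. a viewing direction that is unfavorable for localization.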

  12. Viewing geometry determines the contribution of binocular vision to the online control of grasping.

    Science.gov (United States)

    Keefe, Bruce D; Watt, Simon J

    2017-12-01

    Binocular vision is often assumed to make a specific, critical contribution to online visual control of grasping by providing precise information about the separation between digits and object. This account overlooks the 'viewing geometry' typically encountered in grasping, however. Separation of hand and object is rarely aligned precisely with the line of sight (the visual depth dimension), and analysis of the raw signals suggests that, for most other viewing angles, binocular feedback is less precise than monocular feedback. Thus, online grasp control relying selectively on binocular feedback would not be robust to natural changes in viewing geometry. Alternatively, sensory integration theory suggests that different signals contribute according to their relative precision, in which case the role of binocular feedback should depend on viewing geometry, rather than being 'hard-wired'. We manipulated viewing geometry, and assessed the role of binocular feedback by measuring the effects on grasping of occluding one eye at movement onset. Loss of binocular feedback resulted in a significantly less extended final slow-movement phase when hand and object were separated primarily in the frontoparallel plane (where binocular information is relatively imprecise), compared to when they were separated primarily along the line of sight (where binocular information is relatively precise). Consistent with sensory integration theory, this suggests the role of binocular (and monocular) vision in online grasp control is not a fixed, 'architectural' property of the visuo-motor system, but arises instead from the interaction of viewer and situation, allowing robust online control across natural variations in viewing geometry.
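The sensory integration rule the abstract invokes is standard inverse-variance weighting: each cue contributes in proportion to its precision, so binocular feedback dominates only where it is the more precise signal. A minimal sketch; the depth estimates and variances below are purely illustrative.

```python
# Precision-weighted (inverse-variance) cue combination. Illustrative values only.
def fuse(estimates, variances):
    """Fuse cue estimates weighted by 1/variance; return (mean, fused variance)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    return mean, 1.0 / total  # fused variance is smaller than any single cue's

if __name__ == "__main__":
    # binocular cue: 10.2 (var 0.04); monocular cue: 9.8 (var 0.16)
    print(fuse([10.2, 9.8], [0.04, 0.16]))  # -> (10.12, 0.032)
```

Swapping the variances (e.g. when hand-object separation lies in the frontoparallel plane) shifts the fused estimate toward the monocular cue, which is exactly the viewing-geometry dependence the study reports.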

  13. Vision in avian emberizid foragers: maximizing both binocular vision and fronto-lateral visual acuity.

    Science.gov (United States)

    Moore, Bret A; Pita, Diana; Tyrrell, Luke P; Fernández-Juricic, Esteban

    2015-05-01

    Avian species vary in their visual system configuration, but previous studies have often compared single visual traits between two to three distantly related species. However, birds use different visual dimensions that cannot be maximized simultaneously to meet different perceptual demands, potentially leading to trade-offs between visual traits. We studied the degree of inter-specific variation in multiple visual traits related to foraging and anti-predator behaviors in nine species of closely related emberizid sparrows, controlling for phylogenetic effects. Emberizid sparrows maximize binocular vision, even seeing their bill tips in some eye positions, which may enhance the detection of prey and facilitate food handling. Sparrows have a single retinal center of acute vision (i.e. fovea) projecting fronto-laterally (but not into the binocular field). The foveal projection close to the edge of the binocular field may shorten the time to gather and process both monocular and binocular visual information from the foraging substrate. Contrary to previous work, we found that species with larger visual fields had higher visual acuity, which may compensate for larger blind spots (i.e. pectens) above the center of acute vision, enhancing predator detection. Finally, species with a steeper change in ganglion cell density across the retina had higher eye movement amplitude, probably due to a more pronounced reduction in visual resolution away from the fovea, which would need to be moved around more frequently. The visual configuration of emberizid passive prey foragers is substantially different from that of previously studied avian groups (e.g. sit-and-wait and tactile foragers). © 2015. Published by The Company of Biologists Ltd.

  14. Three dimensional monocular human motion analysis in end-effector space

    DEFF Research Database (Denmark)

    Hauberg, Søren; Lapuyade, Jerome; Engell-Nørregård, Morten Pol

    2009-01-01

    In this paper, we present a novel approach to three dimensional human motion estimation from monocular video data. We employ a particle filter to perform the motion estimation. The novelty of the method lies in the choice of state space for the particle filter. Using a non-linear inverse kinemati...
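The particle-filter loop the record describes (predict with process noise, weight by observation likelihood, resample) can be sketched on a scalar state; the paper's actual state space is end-effector positions, and all parameters below are illustrative.

```python
import math
import random

# Toy particle filter for a scalar state: predict / weight / resample.
# Illustrative parameters; not the paper's end-effector state space.
def particle_filter(observations, n_particles=500, process_std=0.5, obs_std=1.0):
    random.seed(1)  # deterministic demo
    particles = [random.gauss(0.0, 5.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # 1) predict: diffuse each particle with process noise
        particles = [p + random.gauss(0.0, process_std) for p in particles]
        # 2) update: weight particles by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # 3) estimate: weighted mean, then multinomial resampling
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates
```

Feeding a constant observation pulls the particle cloud from its broad prior onto the observed value within a few iterations.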

  15. [The influence of IOL implantation on visual acuity, contrast sensitivity and colour vision 2 and 4 months after cataract surgery].

    Science.gov (United States)

    Ventruba, J

    2006-04-01

To assess the change in visual acuity, contrast sensitivity and colour vision in relation to the time after cataract surgery and to the type of implanted IOL, and to compare visual functions between patients with one and with two pseudophakic eyes. 45 cataract patients were examined before and then 2 and 4 months after the cataract surgery. Visual acuity (VA) was tested on a logMAR optotype chart with Landolt rings; contrast sensitivity (CS) was tested on the Pelli-Robson chart and the SWCT chart. For colour vision (CV) testing, the standard Farnsworth D-15 test and the desaturated Lanthony D-15 test were used. The patients were divided into two groups, one with one pseudophakic eye and one with two pseudophakic eyes, and also according to the type of implanted IOL (PMMA or hydrophobic acrylate). The control group was composed of phakic subjects with no ocular pathology. After the cataract surgery, both groups showed a significant improvement in monocular and binocular VA, CS and CV. Visual functions tested by means of the psychophysical methods of VA, CS and CV significantly improve and are stable 2 months after the surgery. The second eye surgery improves binocular visual functions to a level that does not differ from that of normal phakic subjects. There was no influence of the type of IOL on the final state of VA, CS or CV.

  16. Stereopsis and 3D surface perception by spiking neurons in laminar cortical circuits: a method for converting neural rate models into spiking models.

    Science.gov (United States)

    Cao, Yongqiang; Grossberg, Stephen

    2012-02-01

A laminar cortical model of stereopsis and 3D surface perception is developed and simulated. The model shows how spiking neurons that interact in hierarchically organized laminar circuits of the visual cortex can generate analog properties of 3D visual percepts. The model describes how monocular and binocular oriented filtering interact with later stages of 3D boundary formation and surface filling-in in the LGN and cortical areas V1, V2, and V4. It proposes how interactions between layers 4, 3B, and 2/3 in V1 and V2 contribute to stereopsis, and how binocular and monocular information combine to form 3D boundary and surface representations. The model suggests how surface-to-boundary feedback from V2 thin stripes to pale stripes helps to explain how computationally complementary boundary and surface formation properties lead to a single consistent percept, eliminate redundant 3D boundaries, and trigger figure-ground perception. The model also shows how false binocular boundary matches may be eliminated by Gestalt grouping properties. In particular, the disparity filter, which helps to solve the correspondence problem by eliminating false matches, is realized using inhibitory interneurons as part of the perceptual grouping process by horizontal connections in layer 2/3 of cortical area V2. The 3D sLAMINART model simulates 3D surface percepts that are consciously seen in 18 psychophysical experiments. These percepts include contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, Panum's limiting case, the Venetian blind illusion, stereopsis with polarity-reversed stereograms, da Vinci stereopsis, and perceptual closure. The model hereby illustrates a general method of unlumping rate-based models that use the membrane equations of neurophysiology into models that use spiking neurons, and which may be embodied in VLSI chips that use spiking neurons to minimize heat production.
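The simplest way to see what "unlumping" a rate model into spikes means is Poisson spiking: in each small time bin dt, a unit with instantaneous rate r fires with probability r·dt, so the empirical spike rate matches the rate model's output. This is only a sketch of the rate-to-spikes idea; the paper's actual spiking dynamics use membrane equations, not Poisson draws.

```python
import random

# Poisson spike train whose empirical rate matches a given firing rate.
# Sketch of rate-to-spiking conversion; not the paper's membrane-equation model.
def poisson_spikes(rate_hz, duration_s, dt=0.001, seed=42):
    random.seed(seed)
    n_bins = int(round(duration_s / dt))
    return [1 if random.random() < rate_hz * dt else 0 for _ in range(n_bins)]

if __name__ == "__main__":
    train = poisson_spikes(rate_hz=40.0, duration_s=10.0)
    print(sum(train) / 10.0)  # empirical rate, close to 40 Hz
```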

  17. Comparative evaluation of HD 2D/3D laparoscopic monitors and benchmarking to a theoretically ideal 3D pseudodisplay: even well-experienced laparoscopists perform better with 3D.

    Science.gov (United States)

    Wilhelm, D; Reiser, S; Kohn, N; Witte, M; Leiner, U; Mühlbach, L; Ruschin, D; Reiner, W; Feussner, H

    2014-08-01

    Though theoretically superior to standard 2D visualization, 3D video systems have not yet achieved a breakthrough in laparoscopy. The latest 3D monitors, including autostereoscopic displays and high-definition (HD) resolution, are designed to overcome the existing limitations. We performed a randomized study on 48 individuals with different experience levels in laparoscopy. Three different 3D displays (glasses-based 3D monitor, autostereoscopic display, and a mirror-based theoretically ideal 3D display) were compared to a 2D HD display by assessing multiple performance and mental workload parameters and rating the subjects during a laparoscopic suturing task. Electromagnetic tracking provided information on the instruments' pathlength, movement velocity, and economy. The usability, the perception of visual discomfort, and the quality of image transmission of each monitor were subjectively rated. Almost all performance parameters were superior with the conventional glasses-based 3D display compared to the 2D display and the autostereoscopic display, but were often significantly exceeded by the mirror-based 3D display. Subjects performed a task faster and with greater precision when visualization was achieved with the 3D and the mirror-based display. Instrument pathlength was shortened by improved depth perception. Workload parameters (NASA TLX) did not show significant differences. Test persons complained of impaired vision while using the autostereoscopic monitor. The 3D and 2D displays were rated user-friendly and applicable in daily work. Experienced and inexperienced laparoscopists profited equally from using a 3D display, with an improvement in task performance about 20%. Novel 3D displays improve laparoscopic interventions as a result of faster performance and higher precision without causing a higher mental workload. Therefore, they have the potential to significantly impact the further development of minimally invasive surgery. However, as shown by the

  18. 3D laser imaging for ODOT interstate network at true 1-mm resolution.

    Science.gov (United States)

    2014-12-01

With the development of 3D laser imaging technology, the latest iteration of PaveVision3D Ultra can obtain true 1 mm resolution 3D data at full-lane coverage in all three directions at highway speeds up to 60 mph. This project provides rapid survey ...

  19. Vision-Based SLAM System for Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-03-01

Full Text Available The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.

  20. Vision-Based SLAM System for Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-03-15

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.
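The predict/update structure of such a Kalman-filter estimator can be sketched on a toy linear case: a constant-velocity state [position, velocity] corrected by noisy position fixes standing in for GPS. The real system is an EKF over pose, attitude and landmark locations; the noise values below are illustrative.

```python
# Minimal linear Kalman filter: constant-velocity state, position measurements.
# Toy stand-in for the paper's EKF; q, r and the initial covariance are illustrative.
def kalman_1d(zs, dt=0.1, q=0.01, r=1.0):
    x, v = 0.0, 0.0                          # state estimate [position, velocity]
    P = [[10.0, 0.0], [0.0, 10.0]]           # state covariance
    out = []
    for z in zs:
        # predict: x' = F x with F = [[1, dt], [0, 1]];  P' = F P F^T + Q
        x, v = x + dt * v, v
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + q
        # update with position measurement z (H = [1, 0])
        S = p00 + r                          # innovation covariance
        k0, k1 = p00 / S, p10 / S            # Kalman gain
        y = z - x                            # innovation
        x, v = x + k0 * y, v + k1 * y
        P = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
        out.append(x)
    return out

if __name__ == "__main__":
    zs = [0.2 * i for i in range(100)]       # ramp standing in for GPS fixes
    print(kalman_1d(zs)[-1])                 # tracks the ramp, close to 19.8
```

Because the state includes velocity, the filter tracks a constant-velocity target with vanishing lag, which is why fusing even low-rate position fixes stabilizes the trajectory estimate.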

  1. Fast detection and modeling of human-body parts from monocular video

    NARCIS (Netherlands)

    Lao, W.; Han, Jungong; With, de P.H.N.; Perales, F.J.; Fisher, R.B.

    2009-01-01

    This paper presents a novel and fast scheme to detect different body parts in human motion. Using monocular video sequences, trajectory estimation and body modeling of moving humans are combined in a co-operating processing architecture. More specifically, for every individual person, features of

  2. 3D Printing: Print the future of ophthalmology.

    Science.gov (United States)

    Huang, Wenbin; Zhang, Xiulan

    2014-08-26

    The three-dimensional (3D) printer is a new technology that creates physical objects from digital files. Recent technological advances in 3D printing have resulted in increased use of this technology in the medical field, where it is beginning to revolutionize medical and surgical possibilities. It is already providing medicine with powerful tools that facilitate education, surgical planning, and organ transplantation research. A good understanding of this technology will be beneficial to ophthalmologists. The potential applications of 3D printing in ophthalmology, both current and future, are explored in this article. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  3. IMAGE-BASED MODELING TECHNIQUES FOR ARCHITECTURAL HERITAGE 3D DIGITALIZATION: LIMITS AND POTENTIALITIES

    Directory of Open Access Journals (Sweden)

    C. Santagati

    2013-07-01

Full Text Available 3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS), the different techniques of image matching, feature extraction and mesh optimization constitute an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among the available tools we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on their own computer, whereas desktop systems demand long processing times and resource-heavy approaches. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to Architectural Heritage documentation. Our approach to this challenging problem is to compare 3D models by Autodesk 123D Catch against 3D models by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings for practitioner purposes.
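A metric-accuracy comparison of the kind described (photogrammetric model vs. terrestrial LIDAR) ultimately reduces to a cloud-to-cloud distance. A brute-force sketch follows: for each point of one cloud, the distance to its nearest neighbour in the other, averaged. The record does not name the comparison tool actually used; this is a generic illustration.

```python
import math

# Mean nearest-neighbour (cloud-to-cloud) distance between two point clouds.
# Brute-force O(n*m) sketch; production tools use k-d trees or octrees.
def cloud_to_cloud(a, b):
    return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)
```

Identical clouds give 0; a rigid offset of one cloud shows up directly in the mean distance, which is how scale or registration errors in the photogrammetric model would surface.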

  4. Evaluating the effect of three-dimensional visualization on force application and performance time during robotics-assisted mitral valve repair.

    Science.gov (United States)

    Currie, Maria E; Trejos, Ana Luisa; Rayman, Reiza; Chu, Michael W A; Patel, Rajni; Peters, Terry; Kiaii, Bob B

    2013-01-01

The purpose of this study was to determine the effect of three-dimensional (3D) binocular, stereoscopic, and two-dimensional (2D) monocular visualization on robotics-assisted mitral valve annuloplasty versus conventional techniques in an ex vivo animal model. In addition, we sought to determine whether these effects were consistent between novices and experts in robotics-assisted cardiac surgery. A cardiac surgery test-bed was constructed to measure forces applied during mitral valve annuloplasty. Sutures were passed through the porcine mitral valve annulus by the participants with different levels of experience in robotics-assisted surgery and tied in place using both robotics-assisted and conventional surgery techniques. The mean time for both the experts and the novices using 3D visualization was significantly less than that required using 2D vision. Greater force was applied with the robotic system, with either 2D or 3D vision, during robotics-assisted mitral valve annuloplasty than during conventional open mitral valve annuloplasty. This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery.

  5. UAV and Computer Vision in 3D Modeling of Cultural Heritage in Southern Italy

    Science.gov (United States)

    Barrile, Vincenzo; Gelsomino, Vincenzo; Bilotta, Giuliana

    2017-08-01

On the Waterfront Italo Falcomatà of Reggio Calabria one can admire the most extensive stretch of the walls of the Hellenistic period of the ancient city of Rhegion. The so-called Greek Walls are one of the most significant and visible traces of the culture of Ancient Greece in the territory of Reggio Calabria. Over the years, up to the reconstruction of Reggio after the earthquake of 1783, this stretch of wall was always part of the outer fortifications and was restored countless times, to cope with degradation over time and with increasingly innovative and sophisticated siege techniques. The walls have been the subject of several historical studies, of analyses of their construction techniques, and of maintenance and restoration work. This note describes the methodology for the implementation of a three-dimensional model of the Greek Walls conducted by the Geomatics Laboratory of the DICEAM Department of University “Mediterranea” of Reggio Calabria. The 3D modeling is based on imaging techniques, such as Digital Photogrammetry and Computer Vision, using a drone. The acquired digital images were then processed using the commercial software Agisoft PhotoScan. The results demonstrate the suitability of the technique in the field of cultural heritage, as an attractive alternative to more expensive and demanding techniques such as laser scanning.

  6. New development in robot vision

    CERN Document Server

    Behal, Aman; Chung, Chi-Kit

    2015-01-01

    The field of robotic vision has advanced dramatically recently with the development of new range sensors.  Tremendous progress has been made resulting in significant impact on areas such as robotic navigation, scene/environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most recent important advancements in the field of robotic vision. The book starts with articles that describe new techniques to understand scenes from 2D/3D data such as estimation of planar structures, recognition of multiple objects in the scene using different kinds of features as well as their spatial and semantic relationships, generation of 3D object models, approach to recognize partially occluded objects, etc. Novel techniques are introduced to improve 3D perception accuracy with other sensors such as a gyroscope, positioning accuracy with a visual servoing based alignment strategy for microassembly, and increasing object recognition reliability using related...

  7. Bilateral implantation of +3.0 D multifocal toric intraocular lenses: results of a US Food and Drug Administration clinical trial.

    Science.gov (United States)

    Lehmann, Robert; Modi, Satish; Fisher, Bret; Michna, Magda; Snyder, Michael

    2017-01-01

The purpose of this study was to evaluate the clinical outcomes of apodized diffractive +3.0 D multifocal toric intraocular lens (IOL) implantations in subjects with preoperative corneal astigmatism. This was a prospective cohort study conducted at 21 US sites. The study population consisted of 574 subjects, aged ≥21 years, with preoperative astigmatism 0.75-2.82 D, and potential postoperative visual acuity (VA) ≥0.2 logMAR, undergoing bilateral cataract removal by phacoemulsification. The intervention was bilateral implantation of aspheric apodized diffractive +3.0 D multifocal toric or spherical multifocal nontoric IOLs. The main outcome measures were monocular uncorrected near and distance VA and safety at 12 months. A total of 373/386 and 182/188 subjects implanted with multifocal toric and nontoric IOLs, respectively, completed 12-month follow-up after the second implantation. Toric IOLs were noninferior in monocular uncorrected distance (4 m) and near (40 cm) VA but had >1 line better binocular uncorrected intermediate VA (50, 60, and 70 cm) than nontoric IOLs. Toric IOLs reduced cylinder to within 0.50 D and 1.0 D of target in 278 (74.5%) and 351 (94.1%) subjects, respectively. Mean ± standard deviation (SD) differences between intended and achieved axis orientation in the first and second implanted eyes were 5.0°±6.1° and 4.7°±4.0°, respectively. Mean ± SD 12-month IOL rotations in the first and second implanted eyes were 2.7°±5.8° and 2.2°±2.7°, respectively. No subject receiving toric IOLs required secondary surgical intervention due to optical lens properties. Multifocal toric IOLs were noninferior to multifocal nontoric IOLs in uncorrected distance and near VAs in subjects with preexisting corneal astigmatism and effectively corrected astigmatism of 0.75-2.82 D.

  8. 3D panorama stereo visual perception centering on the observers

    International Nuclear Information System (INIS)

    Tang, YiPing; Zhou, Jingkai; Xu, Haitao; Xiang, Yun

    2015-01-01

    For existing three-dimensional (3D) laser scanners, acquiring geometry and color information of the objects simultaneously is difficult. Moreover, the current techniques cannot store, modify, and model the point clouds efficiently. In this work, we have developed a novel sensor system, which is called active stereo omni-directional vision sensor (ASODVS), to address those problems. ASODVS is an integrated system composed of a single-view omni-directional vision sensor and a mobile planar green laser generator platform. Driven by a stepper motor, the laser platform can move vertically along the axis of the ASODVS. During the scanning of the laser generators, the panoramic images of the environment are captured and the characteristics and space location information of the laser points are calculated accordingly. Based on the image information of the laser points, the 3D space can be reconstructed. Experimental results demonstrate that the proposed ASODVS system can measure and reconstruct the 3D space in real-time and with high quality. (paper)
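The geometric core of such a plane-laser + omnidirectional camera system can be sketched as a ray-plane intersection: the laser lies in a known horizontal plane z = h, each imaged laser pixel fixes a viewing ray (azimuth, elevation) from the camera centre, and intersecting ray and plane yields the 3D point. The single-viewpoint geometry and angle inputs below are simplifying assumptions, not the ASODVS calibration.

```python
import math

# Intersect a viewing ray from the origin with the horizontal laser plane z = h.
# Idealized single-viewpoint sketch of plane-laser triangulation.
def ray_plane_point(azimuth, elevation, plane_z):
    dz = math.sin(elevation)
    if abs(dz) < 1e-12:
        raise ValueError("ray parallel to the laser plane")
    t = plane_z / dz                       # ray parameter at the intersection
    return (t * math.cos(elevation) * math.cos(azimuth),
            t * math.cos(elevation) * math.sin(azimuth),
            plane_z)
```

For example, a ray elevated atan(1/2) above horizontal meets the plane z = 1 at x = 2: the shallower the elevation, the farther the recovered point, which is why calibration of the ray angles dominates the accuracy of such scanners.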

  9. Position estimation and driving of an autonomous vehicle by monocular vision

    Science.gov (United States)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real-time for target recognition provided autonomous control of a scale model electric truck. The two-wheel drive truck was modified as an autonomous rover test-bed for vision based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.
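One monocular range cue available to such a rover is apparent scale: under a pinhole camera model, a landmark of known physical width W metres that images to w pixels with focal length f (in pixels) lies at distance Z = f·W / w, so as the rover approaches, w grows and Z shrinks. A minimal sketch; the numbers below are hypothetical, not from the JPL test-bed.

```python
# Pinhole-model range from apparent scale: Z = f * W / w.
# Hypothetical focal length and target size; illustrative only.
def range_from_scale(focal_px, object_width_m, image_width_px):
    return focal_px * object_width_m / image_width_px

if __name__ == "__main__":
    # hypothetical 800 px focal length, 0.5 m-wide target imaged at 40 px
    print(range_from_scale(800.0, 0.5, 40.0))  # -> 10.0 (metres)
```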

  10. DEVELOPMENT OF 2D HUMAN BODY MODELING USING THINNING ALGORITHM

    Directory of Open Access Journals (Sweden)

    K. Srinivasan

    2010-11-01

Full Text Available Monitoring the behavior and activities of people in video surveillance has gained many applications in computer vision. This paper proposes a new approach to model the human body in 2D view for activity analysis using a thinning algorithm. The first step of this work is background subtraction, which is achieved by the frame differencing algorithm. The thinning algorithm is then used to find the skeleton of the human body. After thinning, thirteen feature points, such as terminating points, intersecting points, shoulder, elbow, and knee points, are extracted. This work represents the body model in three different ways: a stick figure model, a patch model and a rectangle body model. The activities of humans have been analyzed with the help of the 2D model for pre-defined poses from monocular video data. Finally, the time consumption and efficiency of the proposed algorithm have been evaluated.
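The frame-differencing step that opens the pipeline can be sketched in a few lines on grayscale frames stored as lists of rows: thresholding |current − previous| per pixel yields a binary foreground mask. The threshold value is illustrative.

```python
# Frame-differencing background subtraction on grayscale frames (lists of rows).
# Pure-Python sketch; the threshold of 25 gray levels is an illustrative choice.
def frame_difference(prev, curr, threshold=25):
    """Binary foreground mask: 1 where |curr - prev| > threshold, else 0."""
    return [[1 if abs(c - p) > threshold else 0
             for c, p in zip(curr_row, prev_row)]
            for curr_row, prev_row in zip(curr, prev)]
```

A single pixel that changes between frames produces a single 1 in the mask; the mask's connected regions are what the thinning algorithm would then skeletonize.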

  11. An experiment of a 3D real-time robust visual odometry for intelligent vehicles

    OpenAIRE

    Rodriguez Florez , Sergio Alberto; Fremont , Vincent; Bonnifait , Philippe

    2009-01-01

Vision systems are nowadays very promising for many on-board vehicle perception functionalities, like obstacle detection/recognition and ego-localization. In this paper, we present a 3D visual odometric method that uses a stereo-vision system to estimate the 3D ego-motion of a vehicle in outdoor road conditions. In order to run in real-time, the studied technique is sparse, meaning that it makes use of feature points that are tracked during several frames. A robust sc...

  12. Review of 3d GIS Data Fusion Methods and Progress

    Science.gov (United States)

    Hua, Wei; Hou, Miaole; Hu, Yungang

    2018-04-01

3D data fusion is a research hotspot in the field of computer vision and fine mapping, and plays an important role in fine measurement, risk monitoring, data display and other processes. At present, research on 3D data fusion in the field of surveying and mapping focuses on the fusion of 3D models of terrain and ground objects. This paper summarizes the basic methods of 3D data fusion of terrain and ground objects in recent years, classifies the data structures and methods used to establish 3D models, and analyses and comments on some of the most widely used fusion methods.

  13. REVIEW OF 3D GIS DATA FUSION METHODS AND PROGRESS

    Directory of Open Access Journals (Sweden)

    W. Hua

    2018-04-01

Full Text Available 3D data fusion is a research hotspot in the field of computer vision and fine mapping, and plays an important role in fine measurement, risk monitoring, data display and other processes. At present, research on 3D data fusion in the field of surveying and mapping focuses on the fusion of 3D models of terrain and ground objects. This paper summarizes the basic methods of 3D data fusion of terrain and ground objects in recent years, classifies the data structures and methods used to establish 3D models, and analyses and comments on some of the most widely used fusion methods.

  14. Fault-Tolerant Vision for Vehicle Guidance in Agriculture

    DEFF Research Database (Denmark)

    Blas, Morten Rufus

    , and aiding sensors such as GPS provide means to detect and isolate single faults in the system. In addition, learning is employed to adapt the system to variational changes in the natural environment. 3D vision is enhanced by learning texture and color information. Intensity gradients on small neighborhoods...... dropout of 3D vision, faults in classification, or other defects, redundant information should be utilized. Such information can be used to diagnose faulty behavior and to temporarily continue operation with a reduced set of sensors when faults or artifacts occur. Additional sensors include GPS receivers...... and inertial sensors. To fully utilize the possibilities in 3D vision, the system must also be able to learn and adapt to changing environments. By learning features of the environment new diagnostic relations can be generated by creating redundant feed-forward information about crop location. Also, by mapping...

  15. Random-Profiles-Based 3D Face Recognition System

    Directory of Open Access Journals (Sweden)

    Joongrock Kim

    2014-03-01

    Full Text Available In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition depends heavily on the precision of the acquired 3D face data, and it requires more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and yield a reliable recognition rate under pose variation.

  16. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

    A novel visual navigation method based on grating projection stereo vision is proposed for mobile robots in dark environments. The method combines grating projection profilometry of plane structured light with stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping), and visual odometry for mobile robot navigation in dark environments, without the image matching required by stereo vision and without the phase unwrapping required by grating projection profilometry. First, the new vision sensor is studied theoretically, and geometric and mathematical models of the grating projection stereo vision system are built. Second, a computational method for the 3D coordinates of obstacles in the robot's visual field is studied, so that the obstacles in the field can be located accurately. Simulation experiments and analysis show that this research helps address the problem of autonomous navigation for mobile robots in dark environments, and provides a theoretical basis and a direction of exploration for further study of navigation for space-exploration robots in dark, GPS-denied environments.
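The 3D coordinate computation described above reduces, in the rectified case, to classical triangulation. A minimal numpy sketch of generic pinhole/stereo geometry (not the authors' specific sensor model; all numeric values are illustrative):

```python
import numpy as np

def triangulate(f_px, baseline_m, disparity_px):
    """Depth from a rectified pair: Z = f * B / d (pinhole model)."""
    return f_px * baseline_m / disparity_px

def backproject(u, v, z, f_px, cx, cy):
    """3D point (X, Y, Z) for pixel (u, v) observed at depth z."""
    return np.array([(u - cx) * z / f_px, (v - cy) * z / f_px, z])

# A projected grating stripe seen at pixel (700, 400) with 20 px disparity,
# for a 700 px focal length, 10 cm baseline, principal point (640, 360):
z = triangulate(700.0, 0.10, 20.0)                       # 3.5 m
point = backproject(700, 400, z, 700.0, 640.0, 360.0)    # [0.3, 0.2, 3.5]
```

With depth in hand, an obstacle's position in the robot frame follows from the camera's extrinsic pose.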

  17. Amblyopia and Binocular Vision

    OpenAIRE

    Birch, Eileen E.

    2012-01-01

    Amblyopia is the most common cause of monocular visual loss in children, affecting 1.3% to 3.6% of children. Current treatments are effective in reducing the visual acuity deficit but many amblyopic individuals are left with residual visual acuity deficits, ocular motor abnormalities, deficient fine motor skills, and risk for recurrent amblyopia. Using a combination of psychophysical, electrophysiological, imaging, risk factor analysis, and fine motor skill assessment, the primary role of bin...

  18. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  19. 3D Reconstruction of NMR Images by LabVIEW

    Directory of Open Access Journals (Sweden)

    Peter IZAK

    2007-01-01

    Full Text Available This paper presents an experiment in 3D reconstruction of NMR images via virtual instrumentation (LabVIEW). The approach is based on the marching cubes algorithm, with image processing implemented using the Vision Assistant module. The two-dimensional images acquired by the magnetic resonance device provide information about the surface properties of the human body. An algorithm is implemented that can be used for 3D reconstruction of magnetic resonance images in biomedical applications.
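As a rough illustration of the pipeline described above, stacking 2D slices into a volume and extracting a surface at an isovalue, here is a numpy-only sketch that locates surface voxels as a simplified stand-in for marching cubes (a real implementation would use the full per-cube triangle lookup, e.g. skimage.measure.marching_cubes; the synthetic "slices" are illustrative):

```python
import numpy as np

# Hypothetical stand-in for a stack of 2D NMR slices: a cylinder phantom.
r = np.hypot(*np.mgrid[-16:16, -16:16])   # radial distance within each slice
volume = np.stack([r] * 32)               # shape (32, 32, 32): depth, rows, cols

iso = 10.0                  # isovalue separating "inside" from "outside"
inside = volume < iso       # boolean occupancy, the input marching cubes walks

def surface_voxels(occ):
    """Inside voxels with at least one outside 6-neighbour (a simplified
    stand-in for the per-cube triangle lookup of real marching cubes)."""
    pad = np.pad(occ, 1, constant_values=False)
    outside_nb = np.zeros(occ.shape, dtype=int)
    for axis in range(3):
        for shift in (-1, 1):
            outside_nb += ~np.roll(pad, shift, axis=axis)[1:-1, 1:-1, 1:-1]
    return occ & (outside_nb > 0)

surf = surface_voxels(inside)   # voxels through which the isosurface passes
```

The marching cubes algorithm then converts exactly this inside/outside classification, cube by cube, into a triangle mesh.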

  20. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments.

    Science.gov (United States)

    Trujillo, Juan-Carlos; Munguia, Rodrigo; Guerra, Edmundo; Grau, Antoni

    2018-04-26

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially notable when compared with other related visual SLAM configurations. In order to improve the observability properties, some measurements of the relative distance between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimation of the aerial vehicles flying in formation.

  1. Optoelectronic instrumentation enhancement using data mining feedback for a 3D measurement system

    Science.gov (United States)

    Flores-Fuentes, Wendy; Sergiyenko, Oleg; Gonzalez-Navarro, Félix F.; Rivas-López, Moisés; Hernandez-Balbuena, Daniel; Rodríguez-Quiñonez, Julio C.; Tyrsa, Vera; Lindner, Lars

    2016-12-01

    3D measurement by a cyber-physical system based on optoelectronic scanning instrumentation has been enhanced by outlier-removal and regression data mining feedback. The prototype has applications in (1) industrial manufacturing systems, including robotic machinery, embedded vision, and motion control; (2) health care systems, for measurement scanning; and (3) infrastructure, by providing structural health monitoring. This paper presents new research on data processing of a 3D measurement vision sensing database. Outliers in the multivariate data have been detected and removed to improve the results of an artificial-intelligence regression algorithm. Regression on physical measurement error data has been used to correct 3D measurement errors. We conclude that the joint use of physical phenomena, measurement, and computation is an effective mechanism for feedback loops in the control of industrial, medical, and civil tasks.
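The outlier-removal-plus-regression feedback idea can be sketched generically. The following is a hedged illustration only: Mahalanobis screening and a linear error model are chosen for concreteness and are not the paper's actual algorithm; all data and thresholds are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))         # columns: measurement x, observed error
X[:5] += 8.0                          # inject a few gross outliers

# Mahalanobis-distance screening of the multivariate data.
mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X.T))
d2 = np.einsum('ij,jk,ik->i', X - mu, cov_inv, X - mu)   # squared distances
inliers = X[d2 < 9.0]                 # ~3-sigma gate for 2D data

# Fit a linear error model on the cleaned data: err ~ a*x + b,
# then feed the prediction back as a measurement correction.
x, err = inliers[:, 0], inliers[:, 1]
a, b = np.polyfit(x, err, 1)
corrected = err - (a * x + b)
```

Removing the outliers first keeps the gross faults from biasing the fitted error model that drives the feedback loop.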

  2. Monocular surgery for large-angle esotropias: a new paradigm

    Directory of Open Access Journals (Sweden)

    Edmilson Gigante

    2009-02-01

    Full Text Available PURPOSE: To demonstrate the feasibility of monocular surgery in the treatment of large-angle esotropias, using large recessions of the medial rectus (6 to 10 mm) and large resections of the lateral rectus (8 to 10 mm). METHODS: 46 patients with relatively comitant esotropias of 50Δ or more underwent surgery under general anesthesia, with no intra- or postoperative adjustments. The methods used for refractometry and for measuring visual acuity and the angle of deviation were those traditionally used in strabismology. Postoperatively, in addition to measurements in the primary gaze position, the motility of the operated eye was evaluated in adduction and abduction. RESULTS: Four study groups were considered, corresponding to four time periods: one week, six months, two years, and four to seven years. The postoperative deviation-angle results were consistent with the general literature and remained stable over time. The operated eye showed a slight limitation in adduction and none in abduction, contrary to what is reported in the strabismus literature. No statistically significant differences were found when comparing adults with children or amblyopes with non-amblyopes. CONCLUSION: Given these results, monocular recession-resection surgery can be considered a viable option for the treatment of large-angle esotropias, for adults as well as children, and for amblyopes as well as non-amblyopes.

  3. Defining and revising an industrial development strategy

    OpenAIRE

    Choffray, Jean-Marie; Wagner, Philippe

    1983-01-01

    The purpose of this article is to present a new approach to defining and revising a company's strategy, based on the Analytic Hierarchy Process. We present a model for measuring the priorities to be established among the various objectives and possible actions at each level of the hierarchy. Peer reviewed

  4. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... development. Should parents be concerned? If a healthy child consistently develops headaches or tired eyes or cannot clearly see the images when using 3-D digital products, this may indicate a vision or eye ... that the child be given a comprehensive exam by an ophthalmologist. ...

  5. Industrial vision

    DEFF Research Database (Denmark)

    Knudsen, Ole

    1998-01-01

    This dissertation is concerned with the introduction of vision-based applications in the ship building industry. The industrial research project is divided into a natural sequence of developments, from basic theoretical projective image generation via CAD and subpixel analysis to a description...... is presented, and the variability of the parameters is examined and described. The concept of using CAD together with vision information is based on the fact that all items processed at OSS have an associated complete 3D CAD model that is accessible at all production states. This concept gives numerous...... possibilities for using vision in applications which otherwise would be very difficult to automate. The requirement for low tolerances in production is, despite the huge dimensions of the items involved, extreme. This fact makes great demands on the ability to do robust subpixel estimation. A new method based...

  6. Global Value Chains from a 3D Printing Perspective

    DEFF Research Database (Denmark)

    Laplume, André O; Petersen, Bent; Pearce, Joshua M.

    2016-01-01

    This article outlines the evolution of additive manufacturing technology, culminating in 3D printing and presents a vision of how this evolution is affecting existing global value chains (GVCs) in production. In particular, we bring up questions about how this new technology can affect...... of whether in some industries diffusion of 3D printing technologies may change the role of multinational enterprises as coordinators of GVCs by inducing the engagement of a wider variety of firms, even households....

  7. Layer 2/3 synapses in monocular and binocular regions of tree shrew visual cortex express mAChR-dependent long-term depression and long-term potentiation.

    Science.gov (United States)

    McCoy, Portia; Norton, Thomas T; McMahon, Lori L

    2008-07-01

    Acetylcholine is an important modulator of synaptic efficacy and is required for learning and memory tasks involving the visual cortex. In rodent visual cortex, activation of muscarinic acetylcholine receptors (mAChRs) induces a persistent long-term depression (LTD) of transmission at synapses recorded in layer 2/3 of acute slices. Although the rodent studies expand our knowledge of how the cholinergic system modulates synaptic function underlying learning and memory, they are not easily extrapolated to more complex visual systems. Here we used tree shrews for their similarities to primates, including a visual cortex with separate, defined regions of monocular and binocular innervation, to determine whether mAChR activation induces long-term plasticity. We find that the cholinergic agonist carbachol (CCh) not only induces long-term plasticity, but the direction of the plasticity depends on the subregion. In the monocular region, CCh application induces LTD of the postsynaptic potential recorded in layer 2/3 that requires activation of m3 mAChRs and a signaling cascade that includes activation of extracellular signal-regulated kinase (ERK) 1/2. In contrast, layer 2/3 postsynaptic potentials recorded in the binocular region express long-term potentiation (LTP) following CCh application that requires activation of m1 mAChRs and phospholipase C. Our results show that activation of mAChRs induces long-term plasticity at excitatory synapses in tree shrew visual cortex. However, depending on the ocular inputs to that region, variation exists as to the direction of plasticity, as well as to the specific mAChR and signaling mechanisms that are required.

  8. Manifolds for pose tracking from monocular video

    Science.gov (United States)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2015-03-01

    We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).
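At its core, decomposing image motion vectors into basis motion fields of the tangent space is a least-squares projection. A minimal sketch under that assumption, where random basis fields stand in for the real tangent-space bases and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix = 500                                # pixels with observed flow
B = rng.normal(size=(2 * n_pix, 3))        # 3 basis motion vector fields
                                           # (x- and y-components stacked)
coeffs_true = np.array([0.7, -0.2, 0.1])   # pose-change coordinates

# Observed flow = linear combination of basis fields + measurement noise.
flow = B @ coeffs_true + 0.01 * rng.normal(size=2 * n_pix)

# Recover the pose change by projecting the flow onto the basis fields.
coeffs, *_ = np.linalg.lstsq(B, flow, rcond=None)
```

Because the manifold is locally linear, the recovered coefficients parameterize the local change in body pose.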

  9. 3D display considerations for rugged airborne environments

    Science.gov (United States)

    Barnidge, Tracy J.; Tchon, Joseph L.

    2015-05-01

    The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator's visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.

  10. Network level pavement evaluation with 1 mm 3D survey system

    Directory of Open Access Journals (Sweden)

    Kelvin C.P. Wang

    2015-12-01

    Full Text Available The latest iteration of PaveVision3D Ultra can obtain true 1 mm resolution 3D data at full-lane coverage in all 3 directions at highway speeds of up to 60 mph. This paper introduces the PaveVision3D Ultra technology for rapid network-level pavement survey on approximately 1280 center miles of Oklahoma interstate highways. With a sophisticated automated distress analyzer (ADA) software interface, the collected 1 mm 3D data provide the Oklahoma Department of Transportation (ODOT) with comprehensive solutions for automated evaluation of the pavement surface, including longitudinal profile for roughness, transverse profile for rutting, predicted hydroplaning speed for safety analysis, and cracking and various surface defects for distresses. The pruned exact linear time (PELT) method, an optimal partitioning algorithm, is implemented to identify change points and dynamically determine homogeneous segments, so as to assist ODOT in effectively using the available 1 mm 3D pavement surface condition data for decision-making. The application of 1 mm 3D laser imaging technology for network survey is unprecedented. This innovative technology allows highway agencies to assess their options in using the 1 mm 3D system for design and management purposes, particularly to meet the data needs of pavement management systems (PMS), pavement ME design, and the highway performance monitoring system (HPMS).
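PELT solves a penalized change-point objective. The sketch below implements that same objective by exact dynamic programming without PELT's pruning step (so O(n²) rather than near-linear), on a toy condition signal; signal and penalty are illustrative:

```python
import numpy as np

def segment_cost(prefix, prefix_sq, s, t):
    """Cost of fitting segment y[s:t] with its mean (sum of squared errors)."""
    n = t - s
    tot = prefix[t] - prefix[s]
    tot_sq = prefix_sq[t] - prefix_sq[s]
    return tot_sq - tot * tot / n

def optimal_partition(y, penalty):
    """Exact penalized change-point detection: the objective PELT solves,
    here without pruning."""
    n = len(y)
    prefix = np.concatenate(([0.0], np.cumsum(y)))
    prefix_sq = np.concatenate(([0.0], np.cumsum(np.square(y))))
    best = np.full(n + 1, np.inf)
    best[0] = -penalty
    last = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        for s in range(t):
            c = best[s] + segment_cost(prefix, prefix_sq, s, t) + penalty
            if c < best[t]:
                best[t], last[t] = c, s
    # Backtrack the change points.
    cps, t = [], n
    while t > 0:
        cps.append(t)
        t = last[t]
    return sorted(cps)[:-1]  # drop the trailing boundary at n

# Rut-depth-like signal with a jump at index 50:
y = np.r_[np.zeros(50), np.ones(50)]
cps = optimal_partition(y, penalty=1.0)   # [50]
```

PELT adds a pruning rule that discards candidate start points `s` that can never be optimal, which is what brings the expected cost down to linear time.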

  11. 3D documentation of the Petalaindera: digital heritage preservation methods using 3D laser scanner and photogrammetry

    Science.gov (United States)

    Sharif, Harlina Md; Hazumi, Hazman; Hafizuddin Meli, Rafiq

    2018-01-01

    3D imaging technologies have undergone a massive revolution in recent years. Despite this rapid development, documentation of 3D cultural assets in Malaysia still relies largely on conventional techniques such as measured drawings and manual photogrammetry. There has been little progress towards exploring new methods or advanced technologies for converting 3D cultural assets into 3D visual representations and visualization models that are easily accessible for information sharing. In recent years, however, the advent of computer vision (CV) algorithms has made it possible to reconstruct the 3D geometry of objects from image sequences taken with digital cameras, which are then processed by web services and freeware applications. This paper presents a completed stage of an exploratory study that investigates the potential of using CV automated image-based open-source software and web services to reconstruct and replicate cultural assets. By selecting an intricate wooden boat, the Petalaindera, this study evaluates the efficiency of CV systems and compares it with 3D laser scanning, which is known for its accuracy, efficiency, and high cost. The study compares the visual accuracy of 3D models generated by the CV system with that of 3D models produced by 3D scanning and manual photogrammetry for an intricate subject such as the Petalaindera. The ultimate objective is to explore cost-effective methods that could provide fundamental guidelines on best-practice approaches for digital heritage in Malaysia.

  12. Monocular LASIK in adult patients with anisometropic amblyopia

    Directory of Open Access Journals (Sweden)

    Alejandro Tamez-Peña

    2017-09-01

    Conclusions: Monocular refractive surgery in patients with anisometropic amblyopia is a safe and effective therapeutic option that offers satisfactory visual results, preserving or even improving preoperative best-corrected visual acuity.

  13. Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera

    National Research Council Canada - National Science Library

    Chen, J; Dixon, W. E; Dawson, D. M; Chitrakaran, V. K

    2004-01-01

    In this paper, a visual servo tracking controller for a wheeled mobile robot (WMR) is developed that utilizes feedback from a monocular camera system that is mounted with a fixed position and orientation...

  14. Vision-based building energy diagnostics and retrofit analysis using 3D thermography and building information modeling

    Science.gov (United States)

    Ham, Youngjib

    localization issues of 2D thermal image-based inspection, a new computer vision-based method is presented for automated 3D spatio-thermal modeling of building environments from images and localizing the thermal images into the 3D reconstructed scenes, which helps better characterize the as-is condition of existing buildings in 3D. By using these models, auditors can conduct virtual walk-throughs in buildings and explore the as-is condition of building geometry and the associated thermal conditions in 3D. Second, to address the challenges in qualitative and subjective interpretation of visual data, a new model-based method is presented to convert the 3D thermal profiles of building environments into their associated energy performance metrics. More specifically, the Energy Performance Augmented Reality (EPAR) models are formed which integrate the actual 3D spatio-thermal models ('as-is') with energy performance benchmarks ('as-designed') in 3D. In the EPAR models, the presence and location of potential energy problems in building environments are inferred based on performance deviations. The as-is thermal resistances of the building assemblies are also calculated at the level of mesh vertices in 3D. Then, based on the historical weather data reflecting energy load for space conditioning, the amount of heat transfer that can be saved by improving the as-is thermal resistances of the defective areas to the recommended level is calculated, and the equivalent energy cost for this saving is estimated. The outcome provides building practitioners with unique information that can facilitate energy-efficient retrofit decision-making. This is a major departure from offhand calculations that are based on historical cost data of industry best practices.
Finally, to improve the reliability of BIM-based energy performance modeling and analysis for existing buildings, a new model-based automated method is presented to map actual thermal resistance measurements at the level of 3D vertexes to the

  15. Parallel Processor for 3D Recovery from Optical Flow

    Directory of Open Access Journals (Sweden)

    Jose Hugo Barron-Zambrano

    2009-01-01

    Full Text Available 3D recovery from motion has received major attention in computer vision systems in recent years. The main problem lies in the number of operations and memory accesses to be performed by the majority of the existing techniques when translated to hardware or software implementations. This paper proposes a parallel processor for 3D recovery from optical flow. Its main feature is the maximum reuse of data and the low number of clock cycles needed to calculate the optical flow, along with the precision with which 3D recovery is achieved. The results of the proposed architecture as well as those from processor synthesis are presented.
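Optical flow itself is commonly estimated from the brightness-constancy constraint. A single-patch Lucas-Kanade sketch in numpy (a generic textbook method, not the paper's hardware architecture) shows the least-squares structure such a processor parallelizes:

```python
import numpy as np

def lucas_kanade_patch(prev, curr):
    """Single translational flow vector for a patch: least-squares solution
    of the brightness-constancy constraint Ix*u + Iy*v = -It."""
    Iy, Ix = np.gradient(prev.astype(float))   # spatial image gradients
    It = curr.astype(float) - prev.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic pair: a smooth blob translated 1 px to the right.
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
blob = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)
moved = np.exp(-((xx - 33) ** 2 + (yy - 32) ** 2) / 50.0)
u, v = lucas_kanade_patch(blob, moved)   # u close to 1, v close to 0
```

The per-pixel gradient products and the small normal-equation solve are exactly the operations a parallel architecture can replicate across image tiles.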

  16. Binocular combination in abnormal binocular vision.

    Science.gov (United States)

    Ding, Jian; Klein, Stanley A; Levi, Dennis M

    2013-02-08

    We investigated suprathreshold binocular combination in humans with abnormal binocular visual experience early in life. In the first experiment we presented the two eyes with equal but opposite phase shifted sine waves and measured the perceived phase of the cyclopean sine wave. Normal observers have balanced vision between the two eyes when the two eyes' images have equal contrast (i.e., both eyes contribute equally to the perceived image and perceived phase = 0°). However, in observers with strabismus and/or amblyopia, balanced vision requires a higher contrast image in the nondominant eye (NDE) than the dominant eye (DE). This asymmetry between the two eyes is larger than predicted from the contrast sensitivities or monocular perceived contrast of the two eyes and is dependent on contrast and spatial frequency: more asymmetric with higher contrast and/or spatial frequency. Our results also revealed a surprising NDE-to-DE enhancement in some of our abnormal observers. This enhancement is not evident in normal vision because it is normally masked by interocular suppression. However, in these abnormal observers the NDE-to-DE suppression was weak or absent. In the second experiment, we used the identical stimuli to measure the perceived contrast of a cyclopean grating by matching the binocular combined contrast to a standard contrast presented to the DE. These measures provide strong constraints for model fitting. We found asymmetric interocular interactions in binocular contrast perception, which was dependent on both contrast and spatial frequency in the same way as in phase perception. By introducing asymmetric parameters to the modified Ding-Sperling model including interocular contrast gain enhancement, we succeeded in accounting for both binocular combined phase and contrast simultaneously. 
Adding binocular contrast gain control to the modified Ding-Sperling model enabled us to predict the results of dichoptic and binocular contrast discrimination experiments
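The baseline intuition of the phase experiment, that the perceived phase follows the contrast-weighted combination of the two eyes' gratings, can be sketched with plain linear summation (the full Ding-Sperling model adds interocular gain-control terms omitted here; contrasts and phase offset are illustrative):

```python
import numpy as np

def cyclopean_phase(c_left, c_right, theta_deg):
    """Phase (deg) of the sum of two gratings with contrasts c_left/c_right
    shown to the two eyes at phases +theta and -theta (linear summation
    only; no interocular gain control)."""
    th = np.deg2rad(theta_deg)
    x = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
    combined = c_left * np.sin(x + th) + c_right * np.sin(x - th)
    # Recover the phase of the fundamental via a discrete Fourier projection.
    phase = np.arctan2(np.sum(combined * np.cos(x)),
                       np.sum(combined * np.sin(x)))
    return np.rad2deg(phase)

# Equal contrasts -> balanced vision, perceived phase 0.
phi_balanced = cyclopean_phase(0.5, 0.5, 22.5)    # ~0 deg
# Halving the nondominant eye's contrast shifts the percept toward the
# dominant eye's phase.
phi_shifted = cyclopean_phase(0.5, 0.25, 22.5)    # ~7.86 deg
```

In amblyopic observers the balance point moves: a higher contrast is needed in the nondominant eye before the perceived phase returns to zero, which is what the asymmetric model parameters capture.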

  17. Fault-tolerant 3D Mapping with Application to an Orchard Robot

    DEFF Research Database (Denmark)

    Blas, Morten Rufus; Blanke, Mogens; Rusu, Radu Bogan

    2009-01-01

    In this paper we present a geometric reasoning method for dealing with noise as well as faults present in 3D depth maps. These maps are acquired using stereo-vision sensors, but our framework makes no assumption about the origin of the underlying data. The method is based on observations made on ...... of comprehensive 3D maps for an agricultural robot operating in an orchard....

  18. 3D motion analysis via energy minimization

    Energy Technology Data Exchange (ETDEWEB)

    Wedel, Andreas

    2009-10-16

    This work deals with 3D motion analysis from stereo image sequences for driver assistance systems. It consists of two parts: the estimation of motion from the image data and the segmentation of moving objects in the input images. The content can be summarized with the technical term machine visual kinesthesia, the sensation or perception and cognition of motion. In the first three chapters, the importance of motion information is discussed for driver assistance systems, for machine vision in general, and for the estimation of ego motion. The next two chapters discuss motion perception, analyzing the apparent movement of pixels in image sequences for both monocular and binocular camera setups. Then, the obtained motion information is used to segment moving objects in the input video. Thus, one can clearly identify the thread from analyzing the input images to describing the input images by means of stationary and moving objects. Finally, I present possibilities for future applications based on the contents of this thesis. Previous work in each case is presented in the respective chapters. Although the overarching issue of motion estimation from image sequences is related to practice, there is nothing as practical as a good theory (Kurt Lewin). Several problems in computer vision are formulated as intricate energy minimization problems. In this thesis, motion analysis in image sequences is thoroughly investigated, showing that splitting an original complex problem into simplified sub-problems yields improved accuracy, increased robustness, and a clear and accessible approach to state-of-the-art motion estimation techniques. In Chapter 4, optical flow is considered. Optical flow is commonly estimated by minimizing the combined energy, consisting of a data term and a smoothness term. These two parts are decoupled, yielding a novel and iterative approach to optical flow. The derived Refinement Optical Flow framework is a clear and straight-forward approach to

  19. INVIS : Integrated night vision surveillance and observation system

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.; Dijk, J.; Son, R. van

    2010-01-01

    We present the design and first field trial results of the all-day all-weather INVIS Integrated Night Vision surveillance and observation System. The INVIS augments a dynamic three-band false-color nightvision image with synthetic 3D imagery in a real-time display. The night vision sensor suite

  20. Capturing age-related changes in functional contrast sensitivity with decreasing light levels in monocular and binocular vision.

    Science.gov (United States)

    Gillespie-Gallery, Hanna; Konstantakopoulou, Evgenia; Harlow, Jonathan A; Barbur, John L

    2013-09-09

    It is challenging to separate the effects of normal aging of the retina and visual pathways independently from optical factors, decreased retinal illuminance, and early stage disease. This study determined limits to describe the effect of light level on normal, age-related changes in monocular and binocular functional contrast sensitivity. We recruited 95 participants aged 20 to 85 years. Contrast thresholds for correct orientation discrimination of the gap in a Landolt C optotype were measured using a 4-alternative, forced-choice (4AFC) procedure at screen luminances from 34 to 0.12 cd/m² at the fovea and parafovea (0° and ±4°). Pupil size was measured continuously. The Health of the Retina index (HRindex) was computed to capture the loss of contrast sensitivity with decreasing light level. Participants were excluded if they exhibited performance outside the normal limits of interocular differences or HRindex values, or signs of ocular disease. Contrast thresholds showed a steeper decline and higher correlation with age at the parafovea than at the fovea. Of participants with clinical signs of ocular disease, 83% had HRindex values outside the normal limits. Binocular summation of contrast signals declined with age, independent of interocular differences. The HRindex worsens more rapidly with age at the parafovea, consistent with histologic findings of rod loss and its link to age-related degenerative disease of the retina. The HRindex and interocular differences could be used to screen for and separate the earliest stages of subclinical disease from changes caused by normal aging.

  1. Recent advances in the development and transfer of machine vision technologies for space

    Science.gov (United States)

    Defigueiredo, Rui J. P.; Pendleton, Thomas

    1991-01-01

    Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.

  2. Monocular oral reading after treatment of dense congenital unilateral cataract

    Science.gov (United States)

    Birch, Eileen E.; Cheng, Christina; Christina, V; Stager, David R.

    2010-01-01

    Background Good long-term visual acuity outcomes for children with dense congenital unilateral cataracts have been reported following early surgery and good compliance with postoperative amblyopia therapy. However, treated eyes rarely achieve normal visual acuity and there has been no formal evaluation of the utility of the treated eye for reading. Methods Eighteen children previously treated for dense congenital unilateral cataract were tested monocularly with the Gray Oral Reading Test, 4th edition (GORT-4) at 7 to 13 years of age using two passages for each eye, one at grade level and one at +1 above grade level. In addition, right eyes of 55 normal children age 7 to 13 served as a control group. The GORT-4 assesses reading rate, accuracy, fluency, and comprehension. Results Visual acuity of treated eyes ranged from 0.1 to 2.0 logMAR and of fellow eyes from −0.1 to 0.2 logMAR. Treated eyes scored significantly lower than fellow and normal control eyes on all scales at grade level and at +1 above grade level. Monocular reading rate, accuracy, fluency, and comprehension were correlated with visual acuity of treated eyes (rs = −0.575 to −0.875, p < 0.005). Treated eyes with 0.1-0.3 logMAR visual acuity did not differ from fellow or normal control eyes in rate, accuracy, fluency, or comprehension when reading at grade level or at +1 above grade level. Fellow eyes did not differ from normal controls on any reading scale. Conclusions Excellent visual acuity outcomes following treatment of dense congenital unilateral cataracts are associated with normal reading ability of the treated eye in school-age children. PMID:20603057

  3. A Novel Metric Online Monocular SLAM Approach for Indoor Applications

    Directory of Open Access Journals (Sweden)

    Yongfei Li

    2016-01-01

    Monocular SLAM has attracted increasing attention recently due to its flexibility and low cost. In this paper, a novel metric online direct monocular SLAM approach is proposed, which obtains a metric reconstruction of the scene. In the proposed approach, a chessboard provides the initial depth map and scale-correction information during the SLAM process. The chessboard supplies the absolute scale of the scene and serves as a bridge between the camera coordinate frame and the world coordinate frame. The scene is reconstructed as a series of keyframes with their poses and correlated semi-dense depth maps, using highly accurate pose estimation achieved by direct grid-point-based alignment. The estimated pose is coupled with depth-map estimation computed by filtering over a large number of pixelwise small-baseline stereo comparisons. In addition, this paper formulates a scale-drift model among keyframes, and the calibration chessboard is used to correct the accumulated pose error. Several indoor experiments are conducted; the results suggest that the proposed approach achieves higher reconstruction accuracy than the traditional LSD-SLAM approach, and that it runs in real time on a commonly used computer.
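
    The scale-correction idea (comparing distances between reconstructed chessboard corners against the known square size) can be sketched as follows. This is a minimal NumPy illustration under assumed corner layout and square size, not the authors' implementation:

```python
import numpy as np

def metric_scale(recon_corners, square_size_m):
    """Estimate the metric scale factor for an up-to-scale reconstruction.

    recon_corners: (rows, cols, 3) array of chessboard corner positions
    recovered by SLAM in arbitrary units; square_size_m: true square
    edge length in metres. Returns s such that s * reconstruction is metric.
    """
    # Edge lengths between horizontally and vertically adjacent corners.
    dx = np.linalg.norm(np.diff(recon_corners, axis=1), axis=-1)
    dy = np.linalg.norm(np.diff(recon_corners, axis=0), axis=-1)
    mean_edge = np.concatenate([dx.ravel(), dy.ravel()]).mean()
    return square_size_m / mean_edge

# Example: a flat 4x5 corner grid reconstructed at an arbitrary scale of 2.5.
g = np.stack(np.meshgrid(np.arange(5.0), np.arange(4.0), indexing="xy"), -1)
board = np.concatenate([g, np.zeros((4, 5, 1))], axis=-1) * 2.5
s = metric_scale(board, square_size_m=0.03)   # 30 mm squares
print(round(s * 2.5, 6))                      # rescaled edge is metric: 0.03
```

    In the paper's setting the same factor would also rescale the keyframe depth maps and translations.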

  4. Parallel Tracking and Mapping for Controlling VTOL Airframe

    Directory of Open Access Journals (Sweden)

    Michal Jama

    2011-01-01

    This work presents a vision-based system for navigation of a vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV). It is a monocular vision-based, simultaneous localization and mapping (SLAM) system, which measures the position and orientation of the camera and builds a map of the environment using a video stream from a single camera. This differs from past SLAM solutions on UAVs, which use sensors that measure depth, such as LIDAR, stereoscopic cameras, or depth cameras. The solution presented in this paper extends and significantly modifies a recent open-source algorithm that solves the SLAM problem using an approach fundamentally different from the traditional one. The proposed modifications provide the position measurements necessary for a navigation solution on a UAV. The main contributions of this work include: (1) extension of the map-building algorithm so that it can be used realistically while controlling a UAV and simultaneously building the map; (2) improved performance of the SLAM algorithm at lower camera frame rates; and (3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV that uses monocular vision for navigation is feasible.

  5. Multi-view and 3D deformable part models.

    Science.gov (United States)

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they were modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding-box detection performance, their expressiveness is limited: they cannot reason about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer by building an object detector which leverages the expressive power of 3D object representations while at the same time being robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several models with different levels of expressiveness. We end up with a 3D object model consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]).

  6. 3D Graphics with Spreadsheets

    Directory of Open Access Journals (Sweden)

    Jan Benacka

    2009-06-01

    In this article, the formulas for orthographic parallel projection of 3D bodies on the computer screen are derived using secondary-school vector algebra. The spreadsheet implementation is demonstrated in six applications that project bodies of increasing intricacy: a convex body (cube) without solved visibility, convex bodies (cube, chapel) with solved visibility, a coloured convex body (chapel) with solved visibility, and a coloured non-convex body (church) with solved visibility. The projections can be revolved in the horizontal and vertical planes, and they are changeable in size. The examples show an unusual way of using spreadsheets as a 3D computer graphics tool. The applications can serve as a simple introduction to the general principles of computer graphics, to graphics with spreadsheets, and as a tool for exercising stereoscopic vision. The presented approach is usable for visualising 3D scenes within secondary-school topics such as solid geometry (angles and distances of lines and planes within simple bodies) or analytic geometry in space (angles and distances of lines and planes in E3), and even at university level within calculus for visualising graphs of z = f(x,y) functions. Examples are pictured.
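
    The projection described in the article, rotating the body and then discarding the depth coordinate, can be sketched outside a spreadsheet as well. The following NumPy fragment is an illustrative reimplementation of the same vector algebra; the particular angle convention is an assumption, not necessarily the article's:

```python
import numpy as np

def ortho_project(points, azim_deg, elev_deg):
    """Orthographic parallel projection of 3D points onto the screen.

    Rotate the scene about the vertical axis (azimuth) and the
    horizontal axis (elevation), then simply drop the depth coordinate.
    """
    a, e = np.radians(azim_deg), np.radians(elev_deg)
    rot_y = np.array([[np.cos(a), 0, np.sin(a)],
                      [0,          1, 0],
                      [-np.sin(a), 0, np.cos(a)]])
    rot_x = np.array([[1, 0,          0],
                      [0, np.cos(e), -np.sin(e)],
                      [0, np.sin(e),  np.cos(e)]])
    rotated = points @ (rot_x @ rot_y).T
    return rotated[:, :2]          # screen (x, y); z is the view depth

# Unit cube vertices, viewed from azimuth 30 degrees, elevation 20 degrees.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
screen = ortho_project(cube, 30, 20)
print(screen.shape)   # (8, 2)
```

    In the spreadsheet versions, each matrix entry becomes a cell formula driven by the two angle cells, which is what makes the projections revolvable.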

  7. Pediatric interventional radiology with 3D rotational angiography

    International Nuclear Information System (INIS)

    Racadio, J.M.

    2004-01-01

    Rotational angiography with three-dimensional reconstruction vastly improves spatial orientation, eliminating guesswork during interventions. The 3D images help to define the anatomy more accurately, particularly in the case of overlapping tortuous anatomy such as that encountered in genitourinary abnormalities. The procedures are performed on a Philips Integris Allura biplane system with two 12'' image intensifiers. Although radiologists are trained to assemble multiple oblique views in their minds, that vision is often hard to convey to a waiting surgeon. The 3D images give a much better impression of the spatial relationships, saving valuable time and giving added security. (orig.)

  8. Research on three-dimensional reconstruction method based on binocular vision

    Science.gov (United States)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    Binocular stereo vision is an important and challenging topic in computer vision, with broad application prospects in fields such as aerial mapping, visual navigation, motion analysis, and industrial inspection. This paper investigates binocular stereo camera calibration, image feature extraction, and stereo matching. In the camera calibration module, the intrinsic parameters of each camera are obtained using Zhang Zhengyou's checkerboard method. For image feature extraction and stereo matching, the SURF local feature operator and the SGBM global matching algorithm are used respectively, and their performance is compared. Once feature-point matching is completed, the correspondence between matched image points and 3D object points is established using the calibrated camera parameters, yielding the 3D information.
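
    The final step, recovering 3D points from matched image points and calibrated cameras, is commonly done with linear (DLT) triangulation. The sketch below is a generic NumPy version with assumed camera parameters, not the paper's exact pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2: 3x4 camera projection matrices; x1, x2: matched pixel
    coordinates (u, v) in each image. Returns the 3D point.
    """
    # Each observation gives two rows of the homogeneous system A X = 0.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                     # null vector of A
    return X[:3] / X[3]

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Two cameras: identity pose and a 0.2 m baseline along x, focal 800 px.
K = np.array([[800., 0, 320], [0, 800., 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-0.2], [0], [0]]])
X_true = np.array([0.1, -0.05, 2.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true))   # True
```

    With noisy SURF/SGBM matches the same system is solved in a least-squares sense, which the SVD already provides.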

  9. New weather depiction technology for night vision goggle (NVG) training: 3D virtual/augmented reality scene-weather-atmosphere-target simulation

    Science.gov (United States)

    Folaron, Michelle; Deacutis, Martin; Hegarty, Jennifer; Vollmerhausen, Richard; Schroeder, John; Colby, Frank P.

    2007-04-01

    US Navy and Marine Corps pilots receive Night Vision Goggle (NVG) training as part of their overall training to maintain the superiority of our forces. This training must incorporate realistic targets, backgrounds, and representative atmospheric and weather effects they may encounter under operational conditions. One approach to pilot NVG training is to use the Night Imaging and Threat Evaluation Laboratory (NITE Lab) concept. The NITE Labs utilize a 10' by 10' static terrain model equipped with both natural and cultural lighting that is used to demonstrate various illumination conditions and visual phenomena which might be experienced when utilizing night vision goggles. With this technology, the military can safely, systematically, and reliably expose pilots to the large number of potentially dangerous environmental conditions that will be experienced in their NVG training flights. A previous SPIE presentation described our work for NAVAIR to add realistic atmospheric and weather effects to the NVG NITE Lab training facility using the NVG-WDT (Weather Depiction Technology) system (Colby, et al.). NVG-WDT consists of a high-end multiprocessor server with weather simulation software, and several fixed and goggle-mounted heads-up displays (HUDs). Atmospheric and weather effects are simulated using state-of-the-art computer codes such as the WRF (Weather Research & Forecasting) model and the US Air Force Research Laboratory MODTRAN radiative transport model. Imagery for a variety of natural and man-made obscurations (e.g., rain, clouds, snow, dust, smoke, chemical releases) is calculated and injected into the scene observed through the NVG via the fixed and goggle-mounted HUDs. This paper expands on the work described in the previous presentation and describes the 3D Virtual/Augmented Reality Scene - Weather - Atmosphere - Target Simulation part of the NVG-WDT. The 3D virtual reality software is a complete simulation system to generate realistic

  10. Formalizing the potential of stereoscopic 3D user experience in interactive entertainment

    Science.gov (United States)

    Schild, Jonas; Masuch, Maic

    2015-03-01

    The use of stereoscopic 3D vision affects how interactive entertainment has to be developed as well as how it is experienced by the audience. The large amount of possibly impacting factors and variety as well as a certain subtlety of measured effects on user experience make it difficult to grasp the overall potential of using S3D vision. In a comprehensive approach, we (a) present a development framework which summarizes possible variables in display technology, content creation and human factors, and (b) list a scheme of S3D user experience effects concerning initial fascination, emotions, performance, and behavior as well as negative feelings of discomfort and complexity. As a major contribution we propose a qualitative formalization which derives dependencies between development factors and user effects. The argumentation is based on several previously published user studies. We further show how to apply this formula to identify possible opportunities and threats in content creation as well as how to pursue future steps for a possible quantification.

  11. What is Stereopsis?

    Directory of Open Access Journals (Sweden)

    D Vishwanath

    2012-07-01

    “Stereopsis” refers to the characteristically vivid qualitative impression of 3D structure that is observed when real (or simulated) 3D scenes are viewed binocularly. Stereopsis is associated with a compelling perception of solidity or 3-dimensionality, a clear sense of space between objects, and a phenomenal sense of realism. These visual characteristics are conventionally thought to result from the different views of an object afforded by binocular vision (disparity) or self-motion (motion parallax). However, such visual characteristics can also be obtained under controlled monocular viewing of pictures. One explanation for the impression of monocular stereopsis is based on the notion of cue coherence/conflict (e.g., Ames, 1925). When a picture is viewed with both eyes, binocular cues specify the flat picture surface and are in conflict with the 3-dimensionality implied by the pictorial cues. The elimination of these conflicting cues under monocular viewing putatively causes the enhancement of pictorial depth impression. The cue-coherence/conflict explanation also predicts a greater magnitude of perceived depth relief accompanying the greater impression of stereopsis. I will present an alternative theory that stereopsis is the conscious perception of the precision of the brain's estimate of absolute (egocentrically scaled) depth. Both qualitative and quantitative empirical results are consistent with this theory. Specifically, they show that (i) the same qualitative characteristics of depth impression are reported under binocular viewing of real scenes, stereoscopic images, and controlled monocular viewing of pictures; (ii) the impression of stereopsis is measurable and its variation under different viewing conditions is not consistent with a cue-conflict account; (iii) stereopsis can be elicited by manipulating egocentric distance cues when viewing pictures, without altering conflicting binocular cues; and (iv) under conditions that elicit

  12. Flight-appropriate 3D Terrain-rendering Toolkit for Synthetic Vision, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — TerraMetrics proposes an SBIR Phase I R/R&D effort to develop a key 3D terrain-rendering technology that provides the basis for successful commercial deployment...

  13. Rapid matching of stereo vision based on fringe projection profilometry

    Science.gov (United States)

    Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei

    2016-09-01

    As the core of stereo vision, stereo matching still presents many unsolved problems. For smooth surfaces from which feature points are not easy to extract, this paper adds a projector to the stereo vision measurement system and, based on fringe projection techniques, matches corresponding points by requiring that the phases extracted from the left and right camera images be equal, thereby realizing rapid stereo matching. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method not only broadens the application fields of optical 3D measurement technology and enriches knowledge in the field, but also offers the potential for commercialized measurement systems in practical projects, which has important scientific and economic value.
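
    The matching criterion, corresponding points having equal projected phase, can be illustrated with a standard four-step phase-shifting recovery and a nearest-phase search along a scanline. This is a simplified sketch assuming rectified cameras and unwrapped phase, not the paper's full method:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by pi/2 each:
    I_k = A + B*cos(phi + k*pi/2)."""
    return np.arctan2(i3 - i1, i0 - i2)

def match_by_phase(phase_left, phase_right):
    """For each left pixel, index of the right pixel on the same
    scanline with the closest phase (assumes rectified views and
    monotone unwrapped phase)."""
    return np.argmin(np.abs(phase_right[None, :] - phase_left[:, None]),
                     axis=1)

# Synthetic scanline: the right view sees the fringe shifted by 3 px.
x = np.arange(64, dtype=float)
phi_l = 0.15 * x                      # unwrapped phase ramp, left image
phi_r = 0.15 * (x + 3.0)              # same surface, 3 px disparity
imgs_l = [100 + 50 * np.cos(phi_l + k * np.pi / 2) for k in range(4)]
wrapped = four_step_phase(*imgs_l)    # phi_l wrapped to (-pi, pi]
idx = match_by_phase(phi_l[8:40], phi_r)
print(idx[0])                         # left pixel 8 matches right pixel 5
```

    With the matches fixed by phase, the 3D surface follows from ordinary stereo triangulation of the calibrated pair.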

  14. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    Science.gov (United States)

    2010-03-01

    An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process... Wiimotes used in Nintendo Wii games. Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D

  15. Color vision loss in patients treated with chloroquine

    Directory of Open Access Journals (Sweden)

    Ventura Dora F.

    2003-01-01

    Patients who use chloroquine or hydroxychloroquine, drugs frequently administered for treatment of rheumatoid arthritis, lupus erythematosus, or malaria, may suffer alterations in color vision and in contrast sensitivity. The present work evaluates the visual function of these patients in a joint study of the University of São Paulo (USP), in São Paulo, and the Federal University of Pará (UFPA), in Belém. Thirty-two chloroquine-user patients without alterations in the eye fundus exam were evaluated in São Paulo (n=10; aged 38 to 71 years; mean=55.8 years) and in Belém (n=22; aged 20 to 67; mean=40 years). The prescribed accumulated chloroquine dose was 45 to 430 g (mean=213 g; sd=152 g) for the São Paulo group, and 36 to 540 g (mean=174 g; sd=183 g) for the Belém group. Tests were performed monocularly with the eye's refractive state corrected. Color discrimination was evaluated using the Cambridge Colour Test (CCT): the color discrimination threshold was measured first along the protan, deutan and tritan axes and, in succession, three MacAdam ellipses were determined. The patients' color vision was also evaluated with color arrangement tests: the Farnsworth-Munsell 100 Hue (FM100), the Farnsworth-Munsell D15, and the Lanthony Desaturated (D15d) tests. We also measured the contrast sensitivity for black-and-white sine-wave gratings in twenty-two patients. The results were compared with controls without ophthalmologic or neuro-ophthalmologic pathologies. Twenty-four patients presented acquired dyschromatopsia. There were cases of selective loss (11 patients) and of diffuse loss (13 patients). Although losses were present in the FM100, there was no correlation between the FM100 error score and the ellipse area measured by the CCT. Moreover, three patients who scored normal in the FM100 failed to reach normal threshold in the CCT. The Lanthony test was less sensitive than the other two tests, since it failed to indicate loss in about

  16. Cephalopod vision involves dicarboxylic amino acids: D-aspartate, L-aspartate and L-glutamate.

    Science.gov (United States)

    D'Aniello, Salvatore; Spinelli, Patrizia; Ferrandino, Gabriele; Peterson, Kevin; Tsesarskia, Mara; Fisher, George; D'Aniello, Antimo

    2005-03-01

    In the present study, we report the finding of high concentrations of D-Asp (D-aspartate) in the retina of the cephalopods Sepia officinalis, Loligo vulgaris and Octopus vulgaris. D-Asp increases in concentration in the retina and optic lobes as the animal develops. In neonatal S. officinalis, the concentration of D-Asp in the retina is 1.8+/-0.2 micromol/g of tissue, and in the optic lobes it is 5.5+/-0.4 micromol/g of tissue. In adult animals, D-Asp is found at a concentration of 3.5+/-0.4 micromol/g in retina and 16.2+/-1.5 micromol/g in optic lobes (1.9-fold increased in the retina, and 2.9-fold increased in the optic lobes). In the retina and optic lobes of S. officinalis, the concentration of D-Asp, L-Asp (L-aspartate) and L-Glu (L-glutamate) is significantly influenced by the light/dark environment. In adult animals left in the dark, these three amino acids fall significantly in concentration in both retina (approx. 25% less) and optic lobes (approx. 20% less) compared with the control animals (animals left in a diurnal/nocturnal physiological cycle). The reduction in concentration is in all cases statistically significant (P=0.01-0.05). Experiments conducted in S. officinalis by using D-[2,3-3H]Asp have shown that D-Asp is synthesized in the optic lobes and is then transported actively into the retina. D-aspartate racemase, an enzyme which converts L-Asp into D-Asp, is also present in these tissues, and it is significantly decreased in concentration in animals left for 5 days in the dark compared with control animals. Our hypothesis is that the dicarboxylic amino acids, D-Asp, L-Asp and L-Glu, play important roles in vision.

  17. Lateralized visual behavior in bottlenose dolphins (Tursiops truncatus) performing audio-visual tasks: the right visual field advantage.

    Science.gov (United States)

    Delfour, F; Marten, K

    2006-01-10

    Analyzing cerebral asymmetries in various species helps in understanding brain organization. The left and right sides of the brain (lateralization) are involved in different cognitive and sensory functions. This study focuses on dolphin visual lateralization as expressed by spontaneous eye preference when performing a complex cognitive task; we examine lateralization when processing different visual stimuli displayed on an underwater touch-screen (two-dimensional figures, three-dimensional figures and dolphin/human video sequences). Three female bottlenose dolphins (Tursiops truncatus) were presented with a 2-, 3- or 4-choice visual/auditory discrimination problem, without any food reward: the subjects had to correctly match visual and acoustic stimuli together. In order to visualize and to touch the underwater target, the dolphins had to come close to the touch-screen and to position themselves using monocular vision (left or right eye) and/or binocular naso-ventral vision. The results showed an ability to associate simple visual forms and auditory information using an underwater touch-screen. Moreover, the subjects showed a spontaneous tendency to use monocular vision. Contrary to previous findings, our results did not clearly demonstrate right eye preference in spontaneous choice. However, the individuals' scores of correct answers were correlated with right eye vision, demonstrating the advantage of this visual field in visual information processing and suggesting a left hemispheric dominance. We also demonstrated that the nature of the presented visual stimulus does not seem to have any influence on the animals' monocular vision choice.

  18. The Enright phenomenon. Stereoscopic distortion of perceived driving speed induced by monocular pupil dilation.

    Science.gov (United States)

    Carkeet, Andrew; Wood, Joanne M; McNeill, Kylie M; McNeill, Hamish J; James, Joanna A; Holder, Leigh S

    The Enright phenomenon describes the distortion in speed perception experienced by an observer looking sideways from a moving vehicle when viewing with interocular differences in retinal image brightness, usually induced by neutral density filters. We investigated whether the Enright phenomenon could be induced with monocular pupil dilation using tropicamide. We tested 17 visually normal young adults on a closed road driving circuit. Participants were asked to travel at Goal Speeds of 40km/h and 60km/h while looking sideways from the vehicle with: (i) both eyes with undilated pupils; (ii) both eyes with dilated pupils; (iii) with the leading eye only dilated; and (iv) the trailing eye only dilated. For each condition we recorded actual driving speed. With the pupil of the leading eye dilated participants drove significantly faster (by an average of 3.8km/h) than with both eyes dilated (p=0.02); with the trailing eye dilated participants drove significantly slower (by an average of 3.2km/h) than with both eyes dilated (p<0.001). The speed, with the leading eye dilated, was faster by an average of 7km/h than with the trailing eye dilated (p<0.001). There was no significant difference between driving speeds when viewing with both eyes either dilated or undilated (p=0.322). Our results are the first to show a measurable change in driving behaviour following monocular pupil dilation and support predictions based on the Enright phenomenon. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  19. Enhanced 3D face processing using an active vision system

    DEFF Research Database (Denmark)

    Lidegaard, Morten; Larsen, Rasmus; Kraft, Dirk

    2014-01-01

    We present an active face processing system based on 3D shape information extracted by means of stereo information. We use two sets of stereo cameras with different fields of view (FOV): one with a wide FOV is used for face tracking, while the other with a narrow FOV is used for face identification...

  20. Robot Vision to Monitor Structures in Invisible Fog Environments Using Active Imaging Technology

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seungkyu; Park, Nakkyu; Baik, Sunghoon; Choi, Youngsoo; Jeong, Kyungmin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Active vision is a direct visualization technique using a highly sensitive image sensor and a high-intensity illuminant. The range-gated imaging (RGI) technique, which provides 2D and 3D images, is one of the emerging active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In an RGI system, objects are illuminated for an ultra-short time by a high-intensity illuminant, and the light reflected from the objects is captured by a highly sensitive image sensor with an ultra-short exposure. The RGI system provides 2D and 3D image data from several images and, moreover, provides clear images in fog and smoke environments by summing the time-sliced images. Range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially for visualization in night and fog environments. Although RGI viewing was discovered in the 1960s, the technology has become more and more applicable by virtue of the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short pulse lasers. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, this technology has been applied to target recognition in harsh environments, such as fog and underwater vision, and 3D imaging based on range gating has also been demonstrated. In this paper, a robot system to monitor structures in invisible fog environments is developed using an active range-gated imaging technique. The system consists of an ultra-short pulse laser device and a highly sensitive imaging sensor. The developed vision system was used to monitor objects in an invisible fog environment. The experimental results of this new vision system are described in this paper. To see invisible objects in fog
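
    The gating arithmetic behind the time slices is simple: light makes a round trip, so a gate opened after delay t_d with width t_g images ranges between c*t_d/2 and c*(t_d + t_g)/2; summing slices gives the 2D image, while the slice index gives coarse depth. A small sketch with assumed (hypothetical) gate timings:

```python
C = 299_792_458.0  # speed of light, m/s

def gate_range_window(delay_s, width_s):
    """Range interval imaged by one gate: light makes a round trip,
    so range = c * t / 2."""
    near = C * delay_s / 2
    far = C * (delay_s + width_s) / 2
    return near, far

def slice_index(target_range_m, first_delay_s, width_s):
    """Which time slice a target falls into (coarse depth from gating)."""
    t_return = 2 * target_range_m / C
    return int((t_return - first_delay_s) // width_s)

# A 100 ns gate opened 400 ns after the laser pulse images roughly 60-75 m.
near, far = gate_range_window(400e-9, 100e-9)
print(round(near), round(far))   # 60 75
```

    Stepping the delay across successive frames sweeps this window through the scene, which is how the summed slices cover the full range of interest.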

  1. Robot Vision to Monitor Structures in Invisible Fog Environments Using Active Imaging Technology

    International Nuclear Information System (INIS)

    Park, Seungkyu; Park, Nakkyu; Baik, Sunghoon; Choi, Youngsoo; Jeong, Kyungmin

    2014-01-01

    Active vision is a direct visualization technique using a highly sensitive image sensor and a high-intensity illuminant. The range-gated imaging (RGI) technique, which provides 2D and 3D images, is one of the emerging active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In an RGI system, objects are illuminated for an ultra-short time by a high-intensity illuminant, and the light reflected from the objects is captured by a highly sensitive image sensor with an ultra-short exposure. The RGI system provides 2D and 3D image data from several images and, moreover, provides clear images in fog and smoke environments by summing the time-sliced images. Range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially for visualization in night and fog environments. Although RGI viewing was discovered in the 1960s, the technology has become more and more applicable by virtue of the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short pulse lasers. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, this technology has been applied to target recognition in harsh environments, such as fog and underwater vision, and 3D imaging based on range gating has also been demonstrated. In this paper, a robot system to monitor structures in invisible fog environments is developed using an active range-gated imaging technique. The system consists of an ultra-short pulse laser device and a highly sensitive imaging sensor. The developed vision system was used to monitor objects in an invisible fog environment. The experimental results of this new vision system are described in this paper. To see invisible objects in fog

  2. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    Science.gov (United States)

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computing capability, along with decreasing costs, has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact operation, high accuracy, rapid acquisition, and the ability for automation, are extremely valuable for inline manufacturing. However, most current optical approaches are suited to exterior rather than internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated after a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection.
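
    Registering the point clouds from the repeated rotations requires estimating a rigid transform between corresponding features. A minimal least-squares (Kabsch) solver is sketched below; the proposed feature-based registration algorithm is more involved, so this shows only the core alignment step under known correspondences:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (Kabsch): find R, t with
    dst ~= src @ R.T + t, given corresponding points row-by-row."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(40, 3))
th = 0.4
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true) and np.allclose(t, [0.1, -0.2, 0.05]))  # True
```

    In practice the correspondences would come from the matched 3-D features, with an outlier-robust wrapper (e.g., RANSAC) around this solver.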

  3. Advances in embedded computer vision

    CERN Document Server

    Kisacanin, Branislav

    2014-01-01

    This illuminating collection offers a fresh look at the very latest advances in the field of embedded computer vision. Emerging areas covered by this comprehensive text/reference include the embedded realization of 3D vision technologies for a variety of applications, such as stereo cameras on mobile devices. Recent trends towards the development of small unmanned aerial vehicles (UAVs) with embedded image and video processing algorithms are also examined. The authoritative insights range from historical perspectives to future developments, reviewing embedded implementation, tools, technolog

  4. Localisation accuracy of semi-dense monocular SLAM

    Science.gov (United States)

    Schreve, Kristiaan; du Plessies, Pieter G.; Rätsch, Matthias

    2017-06-01

    Understanding the factors that influence the accuracy of visual SLAM algorithms is very important for the future development of these algorithms, yet very few studies have addressed this so far. In this paper, a simulation model is presented and used to investigate the effect of the number of scene points tracked, the effect of the baseline length in triangulation, and the influence of image point location uncertainty. It is shown that the latter is critical, while the others all play important roles. Experiments with a well-known semi-dense visual SLAM approach, used in a monocular visual odometry mode, are also presented. The experiments show that not accounting for sensor bias and scale factor uncertainty is very detrimental to the accuracy of the simulation results.
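
    The roles of baseline length and image-point uncertainty follow from the triangulation relation Z = f*b/d: to first order, disparity noise of sigma_d pixels produces a depth error of roughly Z^2 * sigma_d / (f*b). A small Monte Carlo sketch with assumed numbers, not the paper's simulation model:

```python
import numpy as np

def depth_error_std(Z, f_px, baseline_m, sigma_d_px, n=200_000, seed=1):
    """Monte Carlo std of triangulated depth Z = f*b/d under Gaussian
    disparity noise of sigma_d_px pixels."""
    rng = np.random.default_rng(seed)
    d_true = f_px * baseline_m / Z
    d_noisy = d_true + rng.normal(0, sigma_d_px, n)
    return np.std(f_px * baseline_m / d_noisy - Z)

Z, f = 10.0, 700.0            # 10 m depth, 700 px focal length (assumed)
short = depth_error_std(Z, f, baseline_m=0.1, sigma_d_px=0.5)
long_ = depth_error_std(Z, f, baseline_m=0.4, sigma_d_px=0.5)
approx = Z**2 / (f * 0.4) * 0.5   # first-order prediction for b = 0.4
print(short > long_)          # longer baseline -> smaller depth error: True
```

    The quadratic growth of depth error with Z is one reason image-point location uncertainty dominates the error budget in the simulations.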

  5. Smartphone Image Acquisition During Postmortem Monocular Indirect Ophthalmoscopy.

    Science.gov (United States)

    Lantz, Patrick E; Schoppe, Candace H; Thibault, Kirk L; Porter, William T

    2016-01-01

    The medical usefulness of smartphones continues to evolve as third-party applications exploit and expand on the smartphones' interface and capabilities. This technical report describes smartphone still-image capture techniques and video-sequence recording capabilities during postmortem monocular indirect ophthalmoscopy. Using these devices and techniques, practitioners can create photographic documentation of fundal findings, clinically and at autopsy, without the expense of a retinal camera. Smartphone image acquisition of fundal abnormalities can promote ophthalmological telemedicine--especially in regions or countries with limited resources--and facilitate prompt, accurate, and unbiased documentation of retinal hemorrhages in infants and young children. © 2015 American Academy of Forensic Sciences.

  6. SYSTEME MULTISENSEUR DE PERCEPTION 3D POUR LE ROBOT MOBILE HILARE

    OpenAIRE

    Ferrer , Michel

    1982-01-01

    The study presented falls within the broad field of artificial vision. It concerns, more specifically, the integration of the three-dimensional (3D) perception system of the autonomous mobile robot HILARE. This system is composed of a solid-state matrix camera, a laser rangefinder, and a mechanical structure providing deflection of the laser beam. This thesis describes: the design of the deflection structure; the multi-level video image processing software based...

  7. The Relationship Between Fusion, Suppression, and Diplopia in Normal and Amblyopic Vision.

    Science.gov (United States)

    Spiegel, Daniel P; Baldwin, Alex S; Hess, Robert F

    2016-10-01

    Single vision occurs through a combination of fusion and suppression. When neither mechanism takes place, we experience diplopia. Under normal viewing conditions, the perceptual state depends on the spatial scale and interocular disparity. The purpose of this study was to examine the three perceptual states in human participants with normal and amblyopic vision. Participants viewed two dichoptically separated horizontal blurred edges with an opposite tilt (2.35°) and indicated their binocular percept: "one flat edge," "one tilted edge," or "two edges." The edges varied with scale (fine 4 min arc and coarse 32 min arc), disparity, and interocular contrast. We investigated how the binocular interactions vary in amblyopic (visual acuity [VA] > 0.2 logMAR, n = 4) and normal vision (VA ≤ 0 logMAR, n = 4) under interocular variations in stimulus contrast and luminance. In amblyopia, despite the established sensory dominance of the fellow eye, fusion prevails at the coarse scale and small disparities (75%). We also show that increasing the relative contrast to the amblyopic eye enhances the probability of fusion at the fine scale (from 18% to 38%), and leads to a reversal of the sensory dominance at coarse scale. In normal vision we found that interocular luminance imbalances disturbed binocular combination only at the fine scale in a way similar to that seen in amblyopia. Our results build upon the growing evidence that the amblyopic visual system is binocular and further show that the suppressive mechanisms rendering the amblyopic system functionally monocular are scale dependent.

  8. Clinical Evaluation of Functional Vision of +1.5 Diopters near Addition, Aspheric, Rotational Asymmetric Multifocal Intraocular Lens.

    Science.gov (United States)

    Kretz, Florian Tobias Alwin; Khoramnia, Rahmin; Attia, Mary Safwat; Koss, Michael Janusz; Linz, Katharina; Auffarth, Gerd Uwe

    2016-10-01

    To evaluate postoperative outcomes and visual performance at intermediate distance after implantation of a +1.5 diopters (D) addition, aspheric, rotational asymmetric multifocal intraocular lens (MIOL). Patients underwent bilateral cataract surgery with implantation of an aspheric, asymmetric MIOL with +1.5 D near addition. A complete ophthalmological examination was performed preoperatively and 3 months postoperatively. The main outcome measures were monocular and binocular uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), uncorrected intermediate visual acuity (UIVA), distance corrected intermediate visual acuity (DCIVA), uncorrected near visual acuity (UNVA), distance corrected near visual acuity, keratometry, and manifest refraction. The Salzburg Reading Desk was used to analyze unilateral and bilateral functional vision with uncorrected and corrected reading acuity, reading distance, reading speed, and the smallest log-scaled print size that could be read effectively at near and intermediate distances. The study comprised 60 eyes of 30 patients (mean age, 68.30 ± 9.26 years; range, 34 to 80 years). There was significant improvement in UDVA and CDVA. Mean UIVA was 0.01 ± 0.09 logarithm of the minimum angle of resolution (logMAR) and mean DCIVA was -0.02 ± 0.11 logMAR. In the Salzburg Reading Desk analysis for UIVA, the mean subjective intermediate distance was 67.58 ± 8.59 cm with mean UIVA of -0.02 ± 0.09 logMAR and mean reading speed of 96.38 ± 28.32 words/min. The new aspheric, asymmetric, +1.5 D near addition MIOL offers good results for distance visual function in combination with good performance for intermediate distances and functional results for near distance.

  9. Vision-Based Interest Point Extraction Evaluation in Multiple Environments

    National Research Council Canada - National Science Library

    McKeehan, Zachary D

    2008-01-01

    Computer-based vision is becoming a primary sensor mechanism in many facets of real world 2-D and 3-D applications, including autonomous robotics, augmented reality, object recognition, motion tracking, and biometrics...

  10. Real-Time 3D Visualization

    Science.gov (United States)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  11. 3D optical measuring technologies for dimensional inspection

    International Nuclear Information System (INIS)

    Chugui, Yu V

    2005-01-01

    The results of the R&D activity of TDI SIE SB RAS in the field of 3D optical measuring technologies and systems for noncontact 3D optical dimensional inspection, applied to safety problems in the atomic and railway industries, are presented. This activity includes investigations of diffraction phenomena on some 3D objects using an original constructive calculation method, and development of a hole inspection method based on diffractive optical elements. Ensuring the safety of nuclear reactors and running trains, as well as their high operational reliability, requires noncontact inspection of the geometrical parameters of their components. For these tasks we have developed methods and produced the technical vision measuring systems LMM, CONTROL, and PROFILE, and technologies for noncontact 3D dimensional inspection of grid spacers and fuel elements for the nuclear reactors VVER-1000 and VVER-440, as well as the automatic laser diagnostic system COMPLEX for noncontact inspection of the geometrical parameters of running freight car wheel pairs. The performance of these systems and the results of industrial testing at atomic and railway companies are presented.

  12. A comparison of the sensitivity of EQ-5D, SF-6D and TTO utility values to changes in vision and perceived visual function in patients with primary open-angle glaucoma

    Directory of Open Access Journals (Sweden)

    Bozzani Fiammetta Maria

    2012-08-01

    Abstract. Background: The economic viability of treatments for primary open-angle glaucoma (POAG) should be assessed objectively to prioritise health care interventions. This study aims to identify the methods for eliciting utility values (UVs) most sensitive to differences in visual field and visual functioning in patients with POAG. As a secondary objective, the dimensions of generic health-related and vision-related quality of life most affected by progressive vision loss are identified. Methods: A total of 132 POAG patients were recruited. Three sets of utility values (EuroQoL EQ-5D, Short Form SF-6D, and Time Trade-Off) and a measure of perceived visual functioning from the National Eye Institute Visual Function Questionnaire (VFQ-25) were elicited during face-to-face interviews. The sensitivity of UVs to differences in the binocular visual field, visual acuity, and visual functioning measures was analysed using non-parametric statistical methods. Results: Median utilities were similar across Integrated Visual Field score quartiles for EQ-5D (P = 0.08), whereas SF-6D and Time Trade-Off UVs significantly decreased (P = 0.01 and P = 0.001, respectively). The VFQ-25 score varied across Integrated Visual Field and binocular visual acuity groups and was associated with all three UVs (P ≤ 0.001); most of its vision-specific sub-scales were associated with the vision markers. The most affected dimension was driving. A relationship with vision markers was found for the physical component of SF-36 but not for any dimension of EQ-5D. Conclusions: The Time Trade-Off was more sensitive than EQ-5D and SF-6D to changes in vision and visual functioning associated with glaucoma progression but could not measure quality-of-life changes in the mildest disease stages.

  13. Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter.

    Science.gov (United States)

    Alatise, Mary B; Hancke, Gerhard P

    2017-09-21

    Using a single sensor to determine the pose of a device cannot give accurate results. This paper presents the fusion of a six-degrees-of-freedom (6-DoF) inertial sensor, comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision data to obtain a low-cost, accurate pose estimate for an autonomous mobile robot. For vision, a monocular object detection pipeline integrating the speeded-up robust features (SURF) and random sample consensus (RANSAC) algorithms was used to recognize a sample object in several captured images. Unlike conventional methods that depend on point tracking, RANSAC uses an iterative procedure to estimate the parameters of a mathematical model from a set of captured data that contains outliers. SURF improves accuracy through its ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All sensors were mounted on the mobile robot to obtain an accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance was verified against ground truth data using root mean square errors (RMSEs).
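    The fusion idea can be sketched in one dimension with a plain (linear) Kalman filter: the IMU acceleration drives the prediction step, and the camera position fix drives the correction step. All numbers below are hypothetical, and this is a simplification of the paper's full 6-DoF EKF:

```python
import numpy as np

# 1-D IMU + camera fusion sketch.  State x = [position, velocity].
dt = 0.1
F = np.array([[1, dt], [0, 1]])      # constant-velocity transition
B = np.array([0.5 * dt**2, dt])      # control input: IMU acceleration
H = np.array([[1.0, 0.0]])           # camera observes position only
Q = np.eye(2) * 1e-3                 # process noise (IMU drift)
R = np.array([[0.05]])               # camera measurement noise

x = np.zeros(2)
P = np.eye(2)

def kf_step(x, P, accel, z):
    # Predict with the inertial measurement...
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    # ...then correct with the vision position fix.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

for k in range(50):                  # constant 1 m/s^2 acceleration
    t = (k + 1) * dt
    z = np.array([0.5 * t**2])       # noise-free camera fix, for clarity
    x, P = kf_step(x, P, 1.0, z)

print(x)   # position ~ 0.5*t^2 = 12.5 m, velocity ~ 5.0 m/s at t = 5 s
```

The extended Kalman filter used in the paper follows the same predict/correct cycle, but linearises a nonlinear motion and observation model at each step.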

  14. Physiology of binocular vision (Fisiologia da visão binocular)

    Directory of Open Access Journals (Sweden)

    Harley E. A. Bicas

    2004-02-01

    The binocular vision of human beings is given by the almost complete superimposition of the monocular visual fields, which allows a finer perceptual discrimination of the egocentric localization of objects in space (stereopsis), but only within a very narrow band (the horopter). Before and beyond it, diplopia and confusion are present, so that physiologic (cortical) suppression is necessary to prevent them from becoming conscious. The geometry of the horopter and its physiologic implications (Hillebrand's deviation, Kundt's partition, Panum's area, stereoscopic vision) are analyzed, as well as some clinical aspects of normal binocular vision (simultaneous perception, fusion, stereoscopic vision) and of adaptations to abnormal states (pathologic suppression, amblyopia, abnormal retinal correspondence).

  15. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Bruno, Vincent; Eric, Villedieu [CEA-IRFM, F-13108 Saint-Paul-Lez-Durance (France)

    2016-11-15

    Highlights: • The first deployment of the EAST articulated inspection arm robot under vacuum is presented. • A computer vision based approach to measure the laser spot displacement is proposed. • An experiment on the real EAST tokamak is performed to validate the proposed measurement approach, and the results show that the measurement accuracy satisfies the requirement. - Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition so that they reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under the high vacuum conditions of tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), the examination of these in-vessel diagnostic systems can be performed by an embedded camera carried by the robot. In this paper, a computer vision algorithm has been developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. The experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum conditions. As a result, the accuracy of the displacement measurement was within 3 mm at the current camera resolution, which satisfies the requirements of laser diagnostic system calibration.
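    The basic image-processing step behind such a displacement measurement — locating a bright laser spot or marker and tracking how far it moves between frames — can be sketched with an intensity-weighted centroid on synthetic images. The frame generator and all parameters here are hypothetical, not the paper's pipeline:

```python
import numpy as np

def spot_centroid(img, thresh=100):
    """Intensity-weighted centroid (x, y) of pixels above a brightness threshold."""
    ys, xs = np.nonzero(img >= thresh)       # keep only the bright spot
    w = img[ys, xs].astype(float)
    return np.array([(xs * w).sum() / w.sum(),
                     (ys * w).sum() / w.sum()])

def make_frame(cx, cy, size=64):
    """Synthetic frame with a small Gaussian 'laser spot' at (cx, cy)."""
    yy, xx = np.mgrid[0:size, 0:size]
    return 255.0 * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 4.0)

before = make_frame(20.0, 30.0)
after = make_frame(23.0, 30.0)               # spot moved 3 px in x
print(spot_centroid(after) - spot_centroid(before))   # pixel displacement
```

In practice the pixel displacement would then be converted to millimetres using the camera calibration and the known geometry of the markers on the vessel wall.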

  16. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Hyungjin Kim

    2015-08-01

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.
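    The role of the Mahalanobis distance in such a refinement can be shown in isolation: weighting each residual by the inverse covariance of the corresponding map feature means that uncertain features pull on the pose estimate less than well-localised ones. The residual and covariances below are hypothetical:

```python
import numpy as np

def mahalanobis_sq(residual, cov):
    """Squared Mahalanobis distance of a residual under covariance cov."""
    return float(residual @ np.linalg.inv(cov) @ residual)

r = np.array([2.0, 0.0])          # the same 2-pixel reprojection residual...

certain = np.eye(2) * 0.5         # ...for a well-localised map feature
uncertain = np.eye(2) * 4.0       # ...and for a poorly localised one

print(mahalanobis_sq(r, certain))     # large cost: residual is significant
print(mahalanobis_sq(r, uncertain))   # small cost: residual is expected noise
```

Minimising the sum of such terms over all 3D-to-2D correspondences is what distinguishes the paper's probabilistic refinement from a plain least-squares reprojection error.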

  17. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    Science.gov (United States)

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-08-31

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.

  18. PHOTOGRAMMETRIC 3D BUILDING RECONSTRUCTION FROM THERMAL IMAGES

    Directory of Open Access Journals (Sweden)

    E. Maset

    2017-08-01

    This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that commercial computer vision software can be used to automatically orient sequences of TIR images taken from an unmanned aerial vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about the position and attitude of the images, or camera calibration parameters. Moreover, we propose a procedure based on the iterative closest point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.

  19. MonoSLAM: real-time single camera SLAM.

    Science.gov (United States)

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.

  20. Visual servo control for a human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-03-01

    This thesis presents work completed on the design of control and vision components for use in a monocular vision-based human-following robot. The use of vision in a controller feedback loop is referred to as vision-based or visual servo control...

  1. A Case of Recurrent Transient Monocular Visual Loss after Receiving Sildenafil

    Directory of Open Access Journals (Sweden)

    Asaad Ghanem Ghanem

    2011-01-01

    A 53-year-old man presented to the Ophthalmic Center Clinic, Mansoura University, Egypt, with recurrent transient monocular visual loss after receiving sildenafil citrate (Viagra) for erectile dysfunction. Examination for possible risk factors revealed mild hypercholesterolemia. Family history showed that his father had suffered from bilateral nonarteritic anterior ischemic optic neuropathy (NAION). Physicians should look for arteriosclerotic risk factors and a family history of NAION among predisposing risk factors before prescribing sildenafil-type erectile dysfunction drugs.

  2. Distance Estimation by Fusing Radar and Monocular Camera with Kalman Filter

    OpenAIRE

    Feng, Yuxiang; Pickering, Simon; Chappell, Edward; Iravani, Pejman; Brace, Christian

    2017-01-01

    The major contribution of this paper is to propose a low-cost, accurate distance estimation approach. It can potentially be used in driver modelling, accident avoidance and autonomous driving. Based on MATLAB and Python, sensory data from a Continental radar and a monocular dashcam were fused using a Kalman filter. Both sensors were mounted on a Volkswagen Sharan, performing repeated driving on the same route. The established system consists of three components: radar data processing, camera dat...

  3. VITOM 3D: Preliminary Experience in Cranial Surgery.

    Science.gov (United States)

    Rossini, Zefferino; Cardia, Andrea; Milani, Davide; Lasio, Giovanni Battista; Fornari, Maurizio; D'Angelo, Vincenzo

    2017-11-01

    Optimal vision and ergonomics are important factors contributing to achievement of good results during neurosurgical interventions. The operating microscope and the endoscope have partially filled the gap between the need for good surgical vision and maintenance of a comfortable posture during surgery. Recently, a new technology called the video-assisted telescope operating monitor, or exoscope, has been used in cranial surgery. The main drawback of previous prototypes was a lack of stereopsis. We present the first case report of cranial surgery performed using the VITOM 3D, an exoscope combining a 4K-resolution view with three-dimensional technology, and discuss advantages and disadvantages compared with the operating microscope. A 50-year-old patient with vertigo and headache linked to a petrous ridge meningioma underwent surgery using the VITOM 3D. Complete removal of the tumor and resolution of symptoms were achieved. The telescope was maintained over the surgical field for the duration of the procedure; a video monitor was placed 2 m from the surgeons; and a control unit allowed focusing, magnification, and repositioning of the camera. VITOM 3D is a video system that has overcome the lack of stereopsis, a major drawback of previous exoscope models. It has many advantages regarding ergonomics, versatility, and depth of field compared with the operating microscope, but the holder arm and the mechanism of repositioning, refocusing, and magnification need to be improved. Surgeons should continue to use the technology they feel confident with, unless a distinct advantage of newer technologies can be demonstrated. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation

    Science.gov (United States)

    2015-06-01

    development, computer rendered 3D videos were created in order to test and debug the algorithm. Computer rendered videos allow full control of all the...printed using the Fortus 400mc 3D rapid-prototyping printer of the NPS Space Systems Academic Group, while the internal structure is made of aluminum...CC.ImageSize(1)); Y=[Y,y]; X=[X,x]; end B. MATLAB RIGID CLOUD Below is provided the code used to create a 3D rigid cloud of points rotating and

  5. The use of contact lens telescopic systems in low vision rehabilitation.

    Science.gov (United States)

    Vincent, Stephen J

    2017-06-01

    Refracting telescopes are afocal compound optical systems consisting of two lenses that produce an apparent magnification of the retinal image. They are routinely used in visual rehabilitation in the form of monocular or binocular hand-held low vision aids, and head- or spectacle-mounted devices to improve distance visual acuity and, with slight modifications, to enhance acuity for near and intermediate tasks. Since the advent of ground glass haptic lenses in the 1930s, contact lenses have been employed as a useful refracting element of telescopic systems; primarily as a mobile ocular lens (the eyepiece) that moves with the eye. Telescopes which incorporate a contact lens eyepiece significantly improve the weight, cosmesis, and field of view compared to traditional spectacle-mounted telescopes, in addition to potential related psycho-social benefits. This review summarises the underlying optics and use of contact lenses to provide telescopic magnification from the era of Descartes, through Dallos, to the present day. The limitations and clinical challenges associated with such devices are discussed, along with the potential future use of reflecting telescopes incorporated within scleral lenses and tactile contact lens systems in low vision rehabilitation. Copyright © 2017 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
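    The underlying optics of such an afocal two-lens system reduce to two thin-lens relations: the lenses are separated by the sum of their focal lengths, and the angular magnification is M = -f_objective / f_eyepiece (so a Galilean system with a minus eyepiece gives an upright, magnified image). The lens powers below are illustrative values only, not prescriptions from the review:

```python
# Thin-lens sketch of a Galilean contact lens telescope: a plus spectacle
# lens (objective) paired with a high-minus contact lens (eyepiece).
# Powers are in dioptres; focal lengths in metres.

def focal_length_m(power_d):
    return 1.0 / power_d

def afocal_separation_m(f_obj, f_eye):
    # Afocal condition: lens separation equals f_obj + f_eye.
    return f_obj + f_eye

def angular_magnification(f_obj, f_eye):
    return -f_obj / f_eye

f_obj = focal_length_m(+25.0)   # +25 D spectacle lens  -> f = +0.040 m
f_eye = focal_length_m(-50.0)   # -50 D contact lens    -> f = -0.020 m

print(angular_magnification(f_obj, f_eye))   # 2.0x, upright image
print(afocal_separation_m(f_obj, f_eye))     # 0.020 m vertex distance
```

The short separation (here 20 mm, on the order of a normal vertex distance) is exactly why moving the eyepiece onto the cornea as a contact lens makes a wearable telescope feasible.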

  6. Depth of Monocular Elements in a Binocular Scene: The Conditions for da Vinci Stereopsis

    Science.gov (United States)

    Cook, Michael; Gillam, Barbara

    2004-01-01

    Quantitative depth based on binocular resolution of visibility constraints is demonstrated in a novel stereogram representing an object, visible to 1 eye only, and seen through an aperture or camouflaged against a background. The monocular region in the display is attached to the binocular region, so that the stereogram represents an object which…

  7. A Case of Complete Recovery of Fluctuating Monocular Blindness Following Endovascular Treatment in Internal Carotid Artery Dissection.

    Science.gov (United States)

    Kim, Ki-Tae; Baik, Seung Guk; Park, Kyung-Pil; Park, Min-Gyu

    2015-09-01

    Monocular blindness may appear as the first symptom of internal carotid artery dissection (ICAD). However, there have been no reports of monocular visual loss repeatedly occurring and disappearing in response to postural change in ICAD. A 33-year-old woman presented with transient monocular blindness (TMB) following acute-onset headache. TMB repeatedly occurred in response to postural change. Two days later, she experienced transient dysarthria and right hemiparesis in the upright position. Pupil size and light reflex were normal, but a relative afferent pupillary defect was present in the left eye. Diffusion-weighted imaging showed no acute lesion, but perfusion-weighted imaging showed perfusion delay in the left ICA territory. Digital subtraction angiography demonstrated a false lumen and an intraluminal filling defect in the proximal segment of the left ICA. Carotid stenting was performed urgently. After carotid stenting, the left relative afferent pupillary defect disappeared and TMB was no longer provoked by upright posture. At discharge, left visual acuity was completely normalized. Because fluctuating visual symptoms in ICAD may be associated with a hemodynamically unstable status, assessment of the perfusion status should be done quickly. Carotid stenting may help improve fluctuating visual symptoms and hemodynamically unstable status in selected patients with ICAD. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  8. Recent advances in 3D SEM surface reconstruction.

    Science.gov (United States)

    Tafti, Ahmad P; Kirkpatrick, Andrew B; Alavi, Zahrasadat; Owen, Heather A; Yu, Zeyun

    2015-11-01

    The scanning electron microscope (SEM), one of the most commonly used instruments in biology and materials science, employs electrons instead of light to determine the surface properties of specimens. However, SEM micrographs remain 2D images. To effectively measure and visualize surface attributes, the 3D shape model must be restored from the SEM images. 3D surface reconstruction is a longstanding topic in microscopy vision, as it offers quantitative and visual information for a variety of applications including medicine, pharmacology, chemistry, and mechanics. In this paper, we review the expanding body of work in this area, including a discussion of recent techniques and algorithms. We also enhance the reliability, accuracy, and speed of 3D SEM surface reconstruction by designing and developing an optimized multi-view framework. We then consider several real-world experiments as well as synthetic data to examine the qualitative and quantitative attributes of the proposed framework. Furthermore, we present a taxonomy of 3D SEM surface reconstruction approaches and address several challenging issues as part of our future work. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Pattern of Ocular Diseases among Computer users in Enugu, Nigeria

    African Journals Online (AJOL)

    7 subjects (1.3%) had monocular blindness with VA < 3/60. 37 subjects (3.3%) had low vision with VA < 6/18 to 3/60. Conclusion: Most of the subjects were young people. Ocular disorders were encountered in computer users. The ocular health status of computer users can be improved through periodic ocular examination and ...

  10. Risk factors for low vision related functioning in the Mycotic Ulcer Treatment Trial: a randomised trial comparing natamycin with voriconazole.

    Science.gov (United States)

    Rose-Nussbaumer, Jennifer; Prajna, N Venkatesh; Krishnan, Tiruvengada; Mascarenhas, Jeena; Rajaraman, Revathi; Srinivasan, Muthiah; Raghavan, Anita; Oldenburg, Catherine E; O'Brien, Kieran S; Ray, Kathryn J; Porco, Travis C; McLeod, Stephen D; Acharya, Nisha R; Keenan, Jeremy D; Lietman, Thomas M

    2016-07-01

    The Mycotic Ulcer Treatment Trial I (MUTT I) was a double-masked, multicentre, randomised controlled trial which found that topical natamycin is superior to voriconazole for the treatment of filamentous fungal corneal ulcers. In this study, we determine risk factors for low vision-related quality of life in patients with fungal keratitis. The Indian visual function questionnaire (IND-VFQ) was administered to MUTT I study participants at 3 months. Associations between patient and ulcer characteristics and IND-VFQ subscale score were assessed using generalised estimating equations. 323 patients were enrolled in the trial, and 292 (90.4%) completed the IND-VFQ at 3 months. Out of a total possible score of 100, the average VFQ score for all participants was 81.3 (range 0-100, SD 23.6). After correcting for treatment arm, each logMAR line of worse baseline visual acuity in the affected eye resulted in an average 1.2-point decrease on VFQ at 3 months (95% CI -1.8 to 0.6, p<0.001). Those who required therapeutic penetrating keratoplasty had an average 25.2-point decrease on VFQ after correcting for treatment arm (95% CI -31.8 to -18.5, p<0.001). Study participants who were unemployed had on average a 28.5-point decrease on VFQ (95% CI -46.9 to -10.2, p=0.002) after correcting for treatment arm. Monocular vision loss from corneal opacity due to fungal keratitis reduced vision-related quality of life. Given the relatively high worldwide burden of corneal opacity, improving treatment outcomes of corneal infections should be a public health priority. Clinicaltrials.gov identifier: NCT00996736. Published by the BMJ Publishing Group Limited.

  11. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    Science.gov (United States)

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.

  12. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    Science.gov (United States)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    For the problem of real-time precise localization in urban streets, a monocular visual odometry based on extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which forms the Kalman filter along with the state transition equation. An extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu’s two-step EKF method, the algorithm is more accurate and meets the needs of real-time accurate localization in cities.
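    The predict/correct cycle described above can be sketched generically. The following is a minimal extended Kalman filter step in which the trifocal-tensor observation model is abstracted into a user-supplied nonlinear measurement function `h` with Jacobian `H`; the function names and structure are illustrative, not the authors' code:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One extended Kalman filter cycle: predict with the state-transition
    model f, then correct with the nonlinear measurement model h.
    F and H return the Jacobians of f and h at the current estimate."""
    # Predict
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # Update
    H_k = H(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H_k @ P_pred @ H_k.T + R           # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```

    In the paper's setting, `h` would encode the trifocal-tensor constraint over three views and `f` the vehicle motion model.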

  13. Predicting Vision-Related Disability in Glaucoma.

    Science.gov (United States)

    Abe, Ricardo Y; Diniz-Filho, Alberto; Costa, Vital P; Wu, Zhichao; Medeiros, Felipe A

    2018-01-01

    To present a new methodology for investigating predictive factors associated with development of vision-related disability in glaucoma. Prospective, observational cohort study. Two hundred thirty-six patients with glaucoma followed up for an average of 4.3±1.5 years. Vision-related disability was assessed by the 25-item National Eye Institute Visual Function Questionnaire (NEI VFQ-25) at baseline and at the end of follow-up. A latent transition analysis model was used to categorize NEI VFQ-25 results and to estimate the probability of developing vision-related disability during follow-up. Patients were tested with standard automated perimetry (SAP) at 6-month intervals, and evaluation of rates of visual field change was performed using mean sensitivity (MS) of the integrated binocular visual field. Baseline disease severity, rate of visual field loss, and duration of follow-up were investigated as predictive factors for development of disability during follow-up. The relationship between baseline and rates of visual field deterioration and the probability of vision-related disability developing during follow-up. At baseline, 67 of 236 (28%) glaucoma patients were classified as disabled based on NEI VFQ-25 results, whereas 169 (72%) were classified as nondisabled. Patients classified as nondisabled at baseline had 14.2% probability of disability developing during follow-up. Rates of visual field loss as estimated by integrated binocular MS were almost 4 times faster for those in whom disability developed versus those in whom it did not (-0.78±1.00 dB/year vs. -0.20±0.47 dB/year, respectively; P disability developing over time (odds ratio [OR], 1.34; 95% confidence interval [CI], 1.06-1.70; P = 0.013). In addition, each 0.5-dB/year faster rate of loss of binocular MS during follow-up was associated with a more than 3.5 times increase in the risk of disability developing (OR, 3.58; 95% CI, 1.56-8.23; P = 0.003). A new methodology for classification and analysis

  14. Flight-appropriate 3D Terrain-rendering Toolkit for Synthetic Vision, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The TerraBlocksTM 3D terrain data format and terrain-block-rendering methodology provides an enabling basis for successful commercial deployment of...

  15. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study

    International Nuclear Information System (INIS)

    Suenaga, Hideyuki; Tran, Huy Hoang; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Takato, Tsuyoshi

    2015-01-01

    This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer were used to create an integral videography (IV) image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject’s anatomic site and its 3D-IV image was displayed in real space using a 3D augmented reality (AR) display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Accurate registration of the volunteer’s anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications. The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users

  16. SLAMM: Visual monocular SLAM with continuous mapping using multiple maps.

    Directory of Open Access Journals (Sweden)

    Hayyan Afeef Daoud

    Full Text Available This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM). It is a system that ensures continuous mapping and information preservation despite failures in tracking due to corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In a single-robot scenario the algorithm generates a new map at the time of tracking failure, and later it merges maps at the event of loop closure. Similarly, maps generated from multiple robots are merged without prior knowledge of their relative poses, which makes this algorithm flexible. The system works in real time at frame-rate speed. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and it showed superior results compared to the state of the art in calibrated visual monocular keyframe-based SLAM. The mean tracking time is around 22 milliseconds. The initialization is twice as fast as it is in ORB-SLAM, and the retrieved map can reach up to 90 percent more in terms of information preservation, depending on tracking loss and loop closure events. For the benefit of the community, the source code along with a framework to be run with the Bebop drone are made available at https://github.com/hdaoud/ORBSLAMM.
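    The map-spawning and map-merging bookkeeping described above can be illustrated with a toy sketch. Class and method names here are hypothetical, and the geometric alignment of merged maps (which the real system derives from the loop-closure transform) is deliberately omitted:

```python
class MultiMapSLAM:
    """Toy bookkeeping for the multi-map idea: spawn a fresh map when
    tracking fails, merge maps when a loop closure links them.
    Maps are reduced to sets of landmark ids for illustration."""

    def __init__(self):
        self.maps = [set()]        # each map is a set of landmark ids
        self.active = 0            # index of the map currently being tracked

    def add_landmark(self, lm_id):
        self.maps[self.active].add(lm_id)

    def tracking_lost(self):
        # Preserve the old map and start a new one instead of resetting.
        self.maps.append(set())
        self.active = len(self.maps) - 1

    def loop_closure(self, i, j):
        # Merge map j into map i (assumes i < j); the relative pose
        # alignment between the two maps is omitted in this sketch.
        self.maps[i] |= self.maps[j]
        del self.maps[j]
        self.active = i
        return self.maps[i]
```

    The key property preserved is that landmarks observed before a tracking failure survive in an inactive map until a loop closure stitches them back in.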

  17. Bilateral implantation of +2.5 D multifocal intraocular lens and contralateral implantation of +2.5 D and +3.0 D multifocal intraocular lenses: Clinical outcomes.

    Science.gov (United States)

    Nuijts, Rudy M M A; Jonker, Soraya M R; Kaufer, Robert A; Lapid-Gortzak, Ruth; Mendicute, Javier; Martinez, Cristina Peris; Schmickler, Stefanie; Kohnen, Thomas

    2016-02-01

    To assess the clinical visual outcomes of bilateral implantation of Restor +2.5 diopter (D) multifocal intraocular lenses (IOLs) and contralateral implantation of a Restor +2.5 D multifocal IOL in the dominant eye and a Restor +3.0 D multifocal IOL in the fellow eye. Multicenter study at 8 investigative sites. Prospective randomized parallel-group patient-masked 2-arm study. This study comprised adults requiring bilateral cataract extraction followed by multifocal IOL implantation. The primary endpoint was corrected intermediate visual acuity (CIVA) at 60 cm, and the secondary endpoint was corrected near visual acuity (CNVA) at 40 cm. Both endpoints were measured 3 months after implantation with a noninferiority margin of Δ = 0.1 logMAR. In total, 103 patients completed the study (53 bilateral, 50 contralateral). At 3 months, the mean CIVA at 60 cm was 0.13 logMAR and 0.10 logMAR in the bilateral group and contralateral group, respectively (difference 0.04 logMAR), achieving noninferiority. Noninferiority was not attained for CNVA at 40 cm; mean values at 3 months for bilateral and contralateral implantation were 0.26 logMAR and 0.11 logMAR, respectively (difference 0.15 logMAR). Binocular defocus curves suggested similar performance in distance vision between the 2 groups. Treatment-emergent ocular adverse event rates were similar between the groups. Bilateral implantation of the +2.5 D multifocal IOL provided intermediate vision (60 cm) similar to that of contralateral implantation of the +2.5 D and +3.0 D multifocal IOLs, while noninferiority was not achieved for near vision (40 cm). Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  18. Vision change in a governmental R&D organization : The pioneering legacy as an enduring element

    NARCIS (Netherlands)

    Landau, Dana; Drori, Israel; Porras, Jerry

    The present research demonstrates how a defense R&D organization wishing to deal effectively with a changing reality developed a vision that accommodated somewhat contradictory sets of aspirations and goals: the old and nationalistic together with the new, economically motivated, and market

  19. 2D/3D Visual Tracker for Rover Mast

    Science.gov (United States)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match.
The program could be a core for building application programs for systems
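    The stereo triangulation step that anchors the target in 3D can be sketched with the standard linear (DLT) method, assuming known 3x4 projection matrices for the two mast cameras. This is a generic textbook formulation, not the program's actual code:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two cameras.
    P1, P2 are 3x4 projection matrices; x1, x2 are image coordinates.
    The least-squares solution is the last right singular vector of A."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize to a 3D point
```

    With the target's 3D position known, the mast pan and tilt follow from the rover-pose change and the mast kinematics, as described in the abstract.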

  20. Synthetic vision and memory model for virtual human - biomed 2010.

    Science.gov (United States)

    Zhao, Yue; Kang, Jinsheng; Wright, David

    2010-01-01

    This paper describes the methods and case studies of a novel synthetic vision and memory model for a virtual human. The synthetic vision module simulates the biological and optical abilities and limitations of human vision. The module is based on a series of collision detections between the boundary of the virtual human's field-of-vision (FOV) volume and the surfaces of objects in a recreated 3D environment. The memory module simulates a short-term memory capability by employing a simplified memory structure (a first-in-first-out stack). The synthetic vision and memory model has been integrated into a virtual human modelling project, Intelligent Virtual Modelling. The project aimed to improve the realism and autonomy of virtual humans.
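    A short-term memory built on a bounded first-in-first-out container might look like the following sketch; the capacity and the refresh-on-reobservation behavior are assumptions for illustration, not details from the paper:

```python
from collections import deque

class ShortTermMemory:
    """First-in-first-out memory of the most recently seen objects,
    in the spirit of the simplified stack described above.
    The default capacity of 7 is an arbitrary illustrative choice."""

    def __init__(self, capacity=7):
        self.items = deque(maxlen=capacity)  # oldest entries fall out first

    def observe(self, obj):
        # Re-observing a known object refreshes it (moves it to the back);
        # this refresh rule is an assumption, not from the paper.
        if obj in self.items:
            self.items.remove(obj)
        self.items.append(obj)

    def recalls(self, obj):
        return obj in self.items
```

    Only objects whose surfaces intersect the FOV volume would be passed to `observe`, so the memory naturally forgets things the virtual human has not seen recently.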

  1. Transient monocular blindness and the risk of vascular complications according to subtype : a prospective cohort study

    NARCIS (Netherlands)

    Volkers, Eline J; Donders, Richard C J M; Koudstaal, Peter J; van Gijn, Jan; Algra, Ale; Jaap Kappelle, L

    Patients with transient monocular blindness (TMB) can present with many different symptoms, and diagnosis is usually based on the history alone. In this study, we assessed the risk of vascular complications according to different characteristics of TMB. We prospectively studied 341 consecutive

  2. Transient monocular blindness and the risk of vascular complications according to subtype: a prospective cohort study

    NARCIS (Netherlands)

    Volkers, E.J. (Eline J.); R. Donders (Rogier); P.J. Koudstaal (Peter Jan); van Gijn, J. (Jan); A. Algra (Ale); L. Jaap Kappelle

    2016-01-01

    Patients with transient monocular blindness (TMB) can present with many different symptoms, and diagnosis is usually based on the history alone. In this study, we assessed the risk of vascular complications according to different characteristics of TMB. We prospectively studied 341

  3. A Variant of LSD-SLAM Capable of Processing High-Speed Low-Framerate Monocular Datasets

    Science.gov (United States)

    Schmid, S.; Fritsch, D.

    2017-11-01

    We develop a new variant of LSD-SLAM, called C-LSD-SLAM, which is capable of performing monocular tracking and mapping in high-speed low-framerate situations such as those of the KITTI datasets. The methods used here are robust against the influence of erroneously triangulated points near the epipolar direction, which otherwise cause tracking divergence.

  4. 3D DIGITAL CADASTRE JOURNEY IN VICTORIA, AUSTRALIA

    Directory of Open Access Journals (Sweden)

    D. Shojaei

    2017-10-01

    Full Text Available Land development processes today have an increasing demand for access to three-dimensional (3D) spatial information. Complex land development may need a 3D model and require functions that are only possible using 3D data. Accordingly, the Intergovernmental Committee on Surveying and Mapping (ICSM), the national body in Australia that provides leadership, coordination and standards for surveying, mapping and national datasets, developed the Cadastre 2034 strategy in 2014. This strategy has a vision to develop a cadastral system that enables people to readily and confidently identify the location and extent of all rights, restrictions and responsibilities related to land and real property. In 2014, the land authority in the state of Victoria, Australia, namely Land Use Victoria (LUV), entered the challenging area of designing and implementing a 3D digital cadastre focused on providing more efficient and effective services to the land and property industry. LUV has been following the ICSM 2034 strategy, which requires developing various policies, standards, infrastructures, and tools. Over the past three years, LUV has mainly focused on investigating the technical aspects of a 3D digital cadastre. This paper provides an overview of the progress of the 3D digital cadastre investigation in Victoria and discusses the challenges the team faced during this journey. It also addresses the future path towards an integrated 3D digital cadastre in Victoria.

  5. A semi-interactive panorama based 3D reconstruction framework for indoor scenes

    NARCIS (Netherlands)

    Dang, T.K.; Worring, M.; Bui, T.D.

    2011-01-01

    We present a semi-interactive method for 3D reconstruction specialized for indoor scenes which combines computer vision techniques with efficient interaction. We use panoramas, popularly used for visualization of indoor scenes, but clearly not able to show depth, for their great field of view, as

  6. Novel Mobile Robot Simultaneous Localization and Mapping Using Rao-Blackwellised Particle Filter

    Directory of Open Access Journals (Sweden)

    Hong Bingrong

    2008-11-01

    Full Text Available This paper presents a novel method for mobile robot simultaneous localization and mapping (SLAM), implemented by using the Rao-Blackwellised particle filter (RBPF) for a monocular vision-based autonomous robot in an unknown indoor environment. The particle filter is combined with an unscented Kalman filter (UKF) to extend the path posterior by sampling new poses that integrate the current observation. Landmark position estimation and update are implemented through the unscented transform (UT). Furthermore, the number of resampling steps is determined adaptively, which greatly reduces the particle depletion problem. A monocular CCD camera mounted on the robot tracks the 3D natural point landmarks, which are structured with matching image feature pairs extracted through the Scale Invariant Feature Transform (SIFT). The matching for multi-dimensional SIFT features, which are highly distinctive due to a special descriptor, is implemented with a KD-tree with a time cost of O(log2N). Experiments on the robot Pioneer3 in our real indoor environment show that our method is of high precision and stability.

  7. Novel Mobile Robot Simultaneous Localization and Mapping Using Rao-Blackwellised Particle Filter

    Directory of Open Access Journals (Sweden)

    Li Maohai

    2006-09-01

    Full Text Available This paper presents a novel method for mobile robot simultaneous localization and mapping (SLAM), implemented by using the Rao-Blackwellised particle filter (RBPF) for a monocular vision-based autonomous robot in an unknown indoor environment. The particle filter is combined with an unscented Kalman filter (UKF) to extend the path posterior by sampling new poses that integrate the current observation. Landmark position estimation and update are implemented through the unscented transform (UT). Furthermore, the number of resampling steps is determined adaptively, which greatly reduces the particle depletion problem. A monocular CCD camera mounted on the robot tracks the 3D natural point landmarks, which are structured with matching image feature pairs extracted through the Scale Invariant Feature Transform (SIFT). The matching for multi-dimensional SIFT features, which are highly distinctive due to a special descriptor, is implemented with a KD-tree with a time cost of O(log2N). Experiments on the robot Pioneer3 in our real indoor environment show that our method is of high precision and stability.
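    The adaptive resampling idea mentioned in both abstracts, resampling only when the effective sample size of the particle weights falls too low, can be sketched as follows; the 0.5N threshold is a common convention, not necessarily the authors' choice:

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized weights; equals N for uniform
    weights and approaches 1 as the weight mass concentrates."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def maybe_resample(particles, weights, threshold=0.5, rng=None):
    """Resample only when N_eff < threshold * N, which limits the particle
    depletion problem by leaving well-spread particle sets untouched."""
    rng = rng or np.random.default_rng(0)
    n = len(particles)
    if effective_sample_size(weights) >= threshold * n:
        return particles, weights
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)          # multinomial resampling
    return [particles[i] for i in idx], [1.0 / n] * n
```

    In an RBPF-SLAM loop this check would run once per observation, after the weights are updated from the landmark likelihoods.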

  8. Making Things See 3D vision with Kinect, Processing, Arduino, and MakerBot

    CERN Document Server

    Borenstein, Greg

    2012-01-01

    This detailed, hands-on guide provides the technical and conceptual information you need to build cool applications with Microsoft's Kinect, the amazing motion-sensing device that enables computers to see. Through half a dozen meaty projects, you'll learn how to create gestural interfaces for software, use motion capture for easy 3D character animation, 3D scanning for custom fabrication, and many other applications. Perfect for hobbyists, makers, artists, and gamers, Making Things See shows you how to build every project with inexpensive off-the-shelf components, including the open source P

  9. 3D asthenopia in horizontal deviation.

    Science.gov (United States)

    Kim, Seung-Hyun; Suh, Young-Woo; Yun, Cheol-Min; Yoo, Eun-Joo; Yeom, Ji-Hyun; Cho, Yoonae A

    2013-05-01

    This study was conducted to investigate the asthenopic symptoms in patients with exotropia and esotropia while watching stereoscopic 3D (S3D) television (TV). A total of 77 subjects more than 9 years of age were enrolled in this study. We divided them into three groups: 34 patients with exodeviation (Exo group), 11 patients with esodeviation (Eso group) and 32 volunteers with normal binocular vision (control group). The S3D images were shown to all patients on an S3D high-definition TV for a period of 20 min. Best corrected visual acuity, refractive errors, angle of strabismus, stereopsis test and history of strabismus surgery were evaluated. After watching S3D TV for 20 min, a survey of subjective symptoms was conducted with a questionnaire to evaluate the degree of S3D perception and asthenopic symptoms such as headache, dizziness and ocular fatigue while watching 3D TV. The mean amounts of deviation in the Exo group and Eso group were 11.2 PD and 7.73 PD, respectively. Mean stereoacuity was 102.7 arc sec in the Exo group and 1389.1 arc sec in the Eso group. In the control group, it was 41.9 arc sec. Twenty-nine patients in the Exo group showed excellent stereopsis (≤60 arc sec at near), but all 11 subjects of the Eso group showed 140 arc sec or worse and showed poorer 3D perception than the Exo and the control group (p Kruskal-Wallis test). The Exo group reported more eye fatigue (p Kruskal-Wallis test) than the Eso and the control group. However, the scores of ocular fatigue in the patients who had undergone corrective surgery were lower than in the patients who had not in the Exo group (p Kruskal-Wallis test), and the amount of exodeviation was not correlated with the asthenopic symptoms (dizziness, r = 0.034, p = 0.33; headache, r = 0.320, p = 0.119; eye fatigue, r = 0.135, p = 0.519, Spearman rank correlation test, respectively). Symptoms of 3D asthenopia were related to the presence of exodeviation but not to esodeviation. This may

  10. Normative monocular visual acuity for early treatment diabetic retinopathy study charts in emmetropic children 5 to 12 years of age.

    Science.gov (United States)

    Dobson, Velma; Clifford-Donaldson, Candice E; Green, Tina K; Miller, Joseph M; Harvey, Erin M

    2009-07-01

    To provide normative data for children tested with Early Treatment Diabetic Retinopathy Study (ETDRS) charts. Cross-sectional study. A total of 252 Native American (Tohono O'odham) children aged 5 to 12 years. On the basis of cycloplegic refraction conducted on the day of testing, all were emmetropic (myopia ≤0.25 diopter [D] spherical equivalent, hyperopia ≤1.00 D spherical equivalent, and astigmatism ≤0.50 D in both eyes). Monocular visual acuity was tested at 4 m, using 1 ETDRS chart for the right eye (RE) and another for the left eye (LE). Visual acuity was scored as the total number of letters correctly identified, by naming or matching to letters on a lap card, and as the smallest letter size for which the child identified 3 of 5 letters correctly. Visual acuity results did not differ for the RE versus the LE, so data are reported for the RE only. Mean visual acuity for 5-year-olds (0.16 logarithm of the minimum angle of resolution [logMAR] [20/29]) was significantly worse than for 8-, 9-, 10-, 11-, and 12-year-olds (0.05 logMAR [20/22] or better at each age). The lower 95% prediction limit for determining whether a child has visual acuity within the normal range was 0.38 (20/48) for 5-year-olds and 0.30 (20/40) for 6- to 12-year-olds, which was reduced to 0.32 (20/42) for 5-year-olds and 0.21 (20/32) for 6- to 12-year-olds when recalculated with outlying data points removed. Mean interocular acuity difference did not vary by age, averaging less than 1 logMAR line at each age, with a lower 95% prediction limit of 0.17 log unit (1.7 logMAR lines) across all ages. For monocular visual acuity based on ETDRS charts to be in the normal range, it must be better than 20/50 for 5-year-olds and better than 20/40 for 6- to 12-year-olds. Normal interocular acuity difference includes values of less than 2 logMAR lines. 
Normative ETDRS visual acuity values are not as good as norms reported for adults, suggesting that a child's visual acuity results should

  11. Audible vision for the blind and visually impaired in indoor open spaces.

    Science.gov (United States)

    Yu, Xunyi; Ganz, Aura

    2012-01-01

    In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her position relative to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in a real-life, ever-changing environment crowded with people.
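    Rendering a landmark's relative position through 3D audio requires at least the bearing of the landmark in the user's own reference frame. A minimal sketch of that computation follows; the function name and the 2D (floor-plane) simplification are assumptions for illustration:

```python
import math

def landmark_azimuth(user_xy, user_heading_rad, landmark_xy):
    """Bearing of a landmark relative to the user's facing direction,
    in radians in (-pi, pi], positive counter-clockwise. This angle is
    the directional input a 3D-audio renderer would need."""
    dx = landmark_xy[0] - user_xy[0]
    dy = landmark_xy[1] - user_xy[1]
    bearing = math.atan2(dy, dx)          # world-frame direction to landmark
    rel = bearing - user_heading_rad      # rotate into the user's frame
    return math.atan2(math.sin(rel), math.cos(rel))  # wrap to (-pi, pi]
```

    The distance to the landmark (for loudness attenuation) would come from the same pose estimate, but is omitted here.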

  12. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  13. Measuring the flow of the Indus: a new forecasting system ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    15 July 2011 ... A research partnership between Pakistan and Canada has led to the launch of a highly sophisticated forecasting system that promises to help Pakistani authorities accurately measure the flow of the Indus, the main artery of one of the world's largest irrigation networks.

  14. Disambiguation of Necker cube rotation by monocular and binocular depth cues: relative effectiveness for establishing long-term bias.

    Science.gov (United States)

    Harrison, Sarah J; Backus, Benjamin T; Jain, Anshul

    2011-05-11

    The apparent direction of rotation of perceptually bistable wire-frame (Necker) cubes can be conditioned to depend on retinal location by interleaving their presentation with cubes that are disambiguated by depth cues (Haijiang, Saunders, Stone, & Backus, 2006; Harrison & Backus, 2010a). The long-term nature of the learned bias is demonstrated by resistance to counter-conditioning on a consecutive day. In previous work, either binocular disparity and occlusion, or a combination of monocular depth cues that included occlusion, internal occlusion, haze, and depth-from-shading, were used to control the rotation direction of disambiguated cubes. Here, we test the relative effectiveness of these two sets of depth cues in establishing the retinal location bias. Both cue sets were highly effective in establishing a perceptual bias on Day 1 as measured by the perceived rotation direction of ambiguous cubes. The effect of counter-conditioning on Day 2, on perceptual outcome for ambiguous cubes, was independent of whether the cue set was the same or different as Day 1. This invariance suggests that a common neural population instantiates the bias for rotation direction, regardless of the cue set used. However, in a further experiment where only disambiguated cubes were presented on Day 1, perceptual outcome of ambiguous cubes during Day 2 counter-conditioning showed that the monocular-only cue set was in fact more effective than disparity-plus-occlusion for causing long-term learning of the bias. These results can be reconciled if the conditioning effect of Day 1 ambiguous trials in the first experiment is taken into account (Harrison & Backus, 2010b). 
We suggest that monocular disambiguation leads to stronger bias either because it more strongly activates a single neural population that is necessary for perceiving rotation, or because ambiguous stimuli engage cortical areas that are also engaged by monocularly disambiguated stimuli but not by disparity-disambiguated stimuli

  15. Novel Low Cost 3D Surface Model Reconstruction System for Plant Phenotyping

    Directory of Open Access Journals (Sweden)

    Suxing Liu

    2017-09-01

    Full Text Available Accurate high-resolution three-dimensional (3D) models are essential for a non-invasive analysis of phenotypic characteristics of plants. Previous limitations in 3D computer vision algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present an image-based 3D plant reconstruction system that requires only a single camera and a rotation stand. Our method is based on the structure-from-motion method, with a SIFT image feature descriptor. In order to improve the quality of the 3D models, we segmented the plant objects based on the PlantCV platform. We also deduced the optimal number of images needed for reconstructing a high-quality model. Experiments showed that an accurate 3D model of the plant could be successfully reconstructed by our approach. This 3D surface model reconstruction system provides a simple and accurate computational platform for non-destructive plant phenotyping.
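    The SIFT feature matching underlying structure from motion is commonly done with Lowe's ratio test, accepting a match only when the best candidate is clearly better than the second best. The brute-force sketch below is a generic illustration, and the 0.75 ratio is a conventional default rather than this paper's parameter:

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.75):
    """Match SIFT-like descriptor arrays (rows are descriptors) with
    Lowe's ratio test. Brute-force nearest neighbors for clarity;
    a KD-tree or FLANN index would replace this in practice."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # distance to every candidate
        j, k = np.argsort(dists)[:2]                # best and second best
        if dists[j] < ratio * dists[k]:             # accept only unambiguous matches
            matches.append((i, j))
    return matches
```

    The surviving matches feed pairwise geometry estimation and, ultimately, the sparse point cloud that the surface model is built on.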

  16. 3D exploitation of large urban photo archives

    Science.gov (United States)

    Cho, Peter; Snavely, Noah; Anderson, Ross

    2010-04-01

    Recent work in computer vision has demonstrated the potential to automatically recover camera and scene geometry from large collections of uncooperatively-collected photos. At the same time, aerial ladar and Geographic Information System (GIS) data are becoming more readily accessible. In this paper, we present a system for fusing these data sources in order to transfer 3D and GIS information into outdoor urban imagery. Applying this system to 1000+ pictures shot of the lower Manhattan skyline and the Statue of Liberty, we present two proof-of-concept examples of geometry-based photo enhancement which are difficult to perform via conventional image processing: feature annotation and image-based querying. In these examples, high-level knowledge projects from 3D world-space into georegistered 2D image planes and/or propagates between different photos. Such automatic capabilities lay the groundwork for future real-time labeling of imagery shot in complex city environments by mobile smart phones.
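    Transferring 3D world-space knowledge into a georegistered 2D image plane reduces to pinhole projection once a photo's camera is calibrated. A minimal sketch follows; the symbols K, R, t are the usual intrinsics and pose, with example values, not parameters from the paper:

```python
import numpy as np

def project(K, R, t, X_world):
    """Project a 3D world point into pixel coordinates with a pinhole
    camera model: x ~ K (R X + t). Returns None if the point lies
    behind the camera and therefore cannot be annotated in this photo."""
    X_cam = R @ np.asarray(X_world, dtype=float) + t
    if X_cam[2] <= 0:
        return None
    x = K @ X_cam
    return x[:2] / x[2]   # perspective divide to pixel coordinates
```

    Feature annotation then amounts to projecting each labeled ladar/GIS point into every georegistered photo and drawing the label where the projection lands.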

  17. Vision-based Vehicle Detection Survey

    Directory of Open Access Journals (Sweden)

    Alex David S

    2016-03-01

    Full Text Available Thousands of drivers and passengers lose their lives every year in road accidents caused by crashes between vehicles. Over the past decade, many research efforts have been dedicated to the development of intelligent driver-assistance systems and autonomous vehicles, which reduce this danger by monitoring the on-road environment. In particular, on-road vehicle detection has attracted researchers in recent years. This paper analyzes several aspects of the problem, including camera placement, the various applications of monocular vehicle detection, common features and common classification methods, motion-based approaches, nighttime vehicle detection, and monocular pose estimation. Previous works on vehicle detection are categorized by camera position, feature-based detection, motion-based detection, and nighttime detection.

  18. Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles.

    Science.gov (United States)

    Zhang, Linhuan; Ahamed, Tofael; Zhang, Yan; Gao, Pengbo; Takigawa, Tomohiro

    2016-04-22

    The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle that automatically tracks the leader. With such a system, a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and the follower vehicle, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced by using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle, and a proportional-integral-derivative (PID) controller was introduced to maintain the required distance between the leader and the follower. Field experiments were conducted to evaluate the sensing and tracking performances of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. In the case of linear trajectory tracking, the root mean square (RMS) errors were 6.5 cm, 8.9 cm and 16.4 cm for straight, turning and zigzag paths, respectively. For parallel trajectory tracking, the RMS errors were 7.1 cm, 14.6 cm and 14.0 cm for straight, turning and zigzag paths, respectively. The navigation performance indicated that the autonomous follower vehicle was able to follow the leader vehicle with satisfactory tracking accuracy. Therefore, the developed leader-follower system can be implemented for the harvesting of grains, using a combine as the leader and an unloader as the autonomous follower vehicle.
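The distance-keeping PID loop described above can be sketched as follows. The gains, time step, and toy follower dynamics are illustrative assumptions, not the values used in the field experiments:

```python
class PID:
    """Minimal PID controller; the gains below are illustrative only."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy follower that closes an initial 5 m gap toward the required 2 m gap.
target_gap, gap = 2.0, 5.0
pid = PID(kp=0.8, ki=0.2, kd=0.1)
for _ in range(300):                      # 30 s of simulation at dt = 0.1 s
    speed_cmd = pid.update(gap - target_gap, dt=0.1)
    gap -= speed_cmd * 0.1                # follower closes the gap at speed_cmd
print(round(gap, 3))
```

In the real system the gap error would come from the monocular marker-tracking measurement rather than a simulated state.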

  19. Cortical Dynamics of Figure-Ground Separation in Response to 2D Pictures and 3D Scenes: How V2 Combines Border Ownership, Stereoscopic Cues, and Gestalt Grouping Rules

    OpenAIRE

    Grossberg, Stephen

    2016-01-01

    The FACADE model, and its laminar cortical realization and extension in the 3D LAMINART model, have explained, simulated, and predicted many perceptual and neurobiological data about how the visual cortex carries out 3D vision and figure-ground perception, and how these cortical mechanisms enable 2D pictures to generate 3D percepts of occluding and occluded objects. In particular, these models have proposed how border ownership occurs, but have not yet explicitly explained the correlation bet...

  20. Color vision deficiencies and the child's willingness for visual activity: preliminary research

    Science.gov (United States)

    Geniusz, Malwina; Szmigiel, Marta; Geniusz, Maciej

    2017-09-01

    After a few weeks a newborn baby can recognize high contrasts in colors like black and white. They reach full color vision at the age of circa six months. Matching colors is the next milestone. Most children can do it at the age of two. Good color vision is one of the factors which indicate proper development of a child. Presented research shows the correlation between color vision and visual activity. The color vision of a group of children aged 3-8 was examined with saturated Farnsworth D-15. Fransworth test was performed twice - in a standard version and in a magnetic version. The time of completing standard and magnetic tests was measured. Furthermore, parents of subjects answered questions checking the children's visual activity in 1 - 10 scale. Parents stated whether the child willingly watched books, colored coloring books, put puzzles or liked to play with blocks etc. The Fransworth D-15 test designed for color vision testing can be used to test younger children from the age of 3 years. These are preliminary studies which may be a useful tool for further, more accurate examination on a larger group of subjects.

  1. Performance of human observers and an automatic 3-dimensional computer-vision-based locomotion scoring method to detect lameness and hoof lesions in dairy cows

    NARCIS (Netherlands)

    Schlageter-Tello, Andrés; Hertem, Van Tom; Bokkers, Eddie A.M.; Viazzi, Stefano; Bahr, Claudia; Lokhorst, Kees

    2018-01-01

    The objective of this study was to determine if a 3-dimensional computer vision automatic locomotion scoring (3D-ALS) method was able to outperform human observers for classifying cows as lame or nonlame and for detecting cows affected and nonaffected by specific type(s) of hoof lesion. Data

  2. Photogrammetric computer vision statistics, geometry, orientation and reconstruction

    CERN Document Server

    Förstner, Wolfgang

    2016-01-01

    This textbook offers a statistical view on the geometry of multiple view analysis, required for camera calibration and orientation and for geometric scene reconstruction based on geometric image features. The authors have backgrounds in geodesy and also long experience with development and research in computer vision, and this is the first book to present a joint approach from the converging fields of photogrammetry and computer vision. Part I of the book provides an introduction to estimation theory, covering aspects such as Bayesian estimation, variance components, and sequential estimation, with a focus on the statistically sound diagnostics of estimation results essential in vision metrology. Part II provides tools for 2D and 3D geometric reasoning using projective geometry. This includes oriented projective geometry and tools for statistically optimal estimation and test of geometric entities and transformations and their relations, tools that are useful also in the context of uncertain reasoning in po...

  3. Effects of visual skills training, vision coaching and sports vision ...

    African Journals Online (AJOL)

    The purpose of this study was to determine the effectiveness of three different approaches to improving sports performance through improvements in “sports vision:” (1) a visual skills training programme, (2) traditional vision coaching sessions, and (3) a multi-disciplinary approach identified as sports vision dynamics.

  4. Low computation vision-based navigation for a Martian rover

    Science.gov (United States)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  5. Development of a Vision-Based Robotic Follower Vehicle

    Science.gov (United States)

    2009-02-01

    [Extraction fragment: list-of-figures entries and references from the report, including "A sample image and the recognized keypoints found using the SIFT algorithm", "An example of a spherical target and the resultant blob", and a reference to Zhou and Clark (2006) on autonomous fish tracking by ROV using monocular vision.]

  6. A cognitive approach to vision for a mobile robot

    Science.gov (United States)

    Benjamin, D. Paul; Funk, Christopher; Lyons, Damian

    2013-05-01

    We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The input from the real camera and the input from the virtual camera are then compared using local Gaussians, creating an error mask that indicates the main differences between them. This mask is used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It is also task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both
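The comparison step, in which the real and virtual camera views are smoothed with local Gaussians and differenced to produce an error mask, might look roughly like the following simplified numpy sketch (kernel size, sigma, and threshold are assumptions, not the system's actual parameters):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """1D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    k = np.exp(-ax**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, k):
    """Separable Gaussian blur: convolve rows, then columns ('same' mode)."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def error_mask(real, virtual, thresh=0.2):
    """Mask of locations where the smoothed images disagree."""
    k = gaussian_kernel()
    return np.abs(blur(real, k) - blur(virtual, k)) > thresh

real = np.zeros((32, 32)); real[10:20, 10:20] = 1.0   # object present in reality
virtual = np.zeros((32, 32))                           # ...but missing in the model
mask = error_mask(real, virtual)
print(mask.sum() > 0, mask[0, 0])
```

Pixels flagged in the mask would then become candidate fixation points for the next saccade.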

  7. Special effects used in creating 3D animated scenes-part 1

    Science.gov (United States)

    Avramescu, A. M.

    2015-11-01

    At present, with the help of computers, we can create special effects that look so real that we almost do not perceive them as different: such effects are hard to distinguish from the real elements on the screen. With the increasingly accessible 3D field, which has more and more areas of application, 3D technology moves easily from architecture to product design. Lifelike 3D animations are used as a means of learning, for multimedia presentations of large global corporations, for special effects, and even for virtual actors in movies. Technology, as part of the art of film, is considered a prerequisite, but cinematography is the first art that had to wait for the correct intersection of technological development, innovation and human vision in order to attain full achievement. Increasingly often, most industries use 3D (three-dimensional) sequences; graphics, commercials and special effects for movies are all designed in 3D. The key to attaining realistic visual effects is to successfully combine various distinct elements, such as characters, objects, images and video scenes, so that all these elements form a whole that works in perfect harmony. This article aims to exhibit a present-day game design. Considering the advanced technology and futuristic vision of designers, we now have many different game models. Special effects contribute decisively to the creation of a realistic three-dimensional scene; they are essential for transmitting the emotional state of the scene, and creating them is a work of finesse required to achieve high-quality scenes. Special effects can also be used to draw the onlooker's attention to an object in a scene. According to the conducted study, the best-selling game of 2010 was Call of Duty: Modern Warfare 2. The article therefore aims for the presented scene to be similar to many locations from this type of game, more

  8. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia

    OpenAIRE

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-01-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. Anisometropic amblyopes (n?=?13) were asked to complete two psychophysical supra-threshold binoc...

  9. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database, and the flight approach area can be displayed dynamically according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, pilots and aircrew can get a vivid 3D view of the approach area of the flight destination. Using this system in the pilots' preflight preparation, the aircrew can obtain more vivid information about the approach area. The system can improve the aviator's self-confidence before carrying out the flight mission, and accordingly flight safety is improved. It is also useful in validating visual flight procedure designs, and it assists in flight procedure design.

  10. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement.

    Science.gov (United States)

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-04-28

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are therefore difficult to compensate for directly. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for absolute phase map retrieval of spatially isolated objects. The results demonstrate the validity and accuracy of the proposed flexible fringe projection vision system for 3D measurement of spatially continuous and discontinuous objects.
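The least-squares parameter estimation step for a phase-to-coordinate model can be illustrated with numpy's `lstsq`. The quadratic phase-to-height model below is a hypothetical stand-in for the paper's extended mathematical model:

```python
import numpy as np

# Hypothetical calibration data: height z modeled as a quadratic in phase.
rng = np.random.default_rng(1)
phase = rng.uniform(0, 2 * np.pi, 100)
z_true = 0.5 + 2.0 * phase - 0.1 * phase**2          # ground-truth heights

# Design matrix [1, phase, phase^2]; solve for the model coefficients.
A = np.column_stack([np.ones_like(phase), phase, phase**2])
coeffs, *_ = np.linalg.lstsq(A, z_true, rcond=None)
print(np.round(coeffs, 3))   # recovers the generating coefficients
```

In practice the design matrix would be built from many calibration poses, and the fitted parameters would absorb systematic errors without modeling each source explicitly.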

  11. A novel binary shape context for 3D local surface description

    Science.gov (United States)

    Dong, Zhen; Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Li, Bijun; Zang, Yufu

    2017-08-01

    3D local surface description is now at the core of many computer vision technologies, such as 3D object recognition, intelligent driving, and 3D model reconstruction. However, most existing 3D feature descriptors still suffer from low descriptiveness, weak robustness, and inefficiency in both time and memory. To overcome these challenges, this paper presents a robust and descriptive 3D Binary Shape Context (BSC) descriptor with high efficiency in both time and memory. First, a novel BSC descriptor is generated for 3D local surface description, and the performance of the BSC descriptor under different settings of its parameters is analyzed. Next, the descriptiveness, robustness, and efficiency in both time and memory of the BSC descriptor are evaluated and compared to those of several state-of-the-art 3D feature descriptors. Finally, the performance of the BSC descriptor for 3D object recognition is evaluated on a number of popular benchmark datasets and on an urban-scene dataset collected by a terrestrial laser scanner system. Comprehensive experiments demonstrate that the proposed BSC descriptor obtains high descriptiveness, strong robustness, and high efficiency in both time and memory, achieving recognition rates of 94.8%, 94.1% and 82.1% on the UWA, Queen, and WHU datasets, respectively.
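A key efficiency argument for binary descriptors such as the BSC is that matching reduces to nearest-neighbor search under Hamming distance. A small illustrative sketch (synthetic 64-bit descriptors, not the authors' code):

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors (0/1 arrays)."""
    return int(np.count_nonzero(a != b))

def nearest(query, db):
    """Index of the database descriptor closest to `query` in Hamming distance."""
    return min(range(len(db)), key=lambda i: hamming(query, db[i]))

rng = np.random.default_rng(2)
db = rng.integers(0, 2, size=(4, 64))   # four random 64-bit descriptors
q = db[2].copy()
q[:3] ^= 1                              # query = descriptor 2 with 3 bits flipped
print(nearest(q, db), hamming(q, db[2]))
```

On real hardware the same comparison is a single XOR plus popcount per descriptor, which is where the time and memory savings over float descriptors come from.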

  12. Individualization of 2D color maps for people with color vision deficiencies

    KAUST Repository

    Waldin, Nicholas; Bernhard, Matthias; Rautek, Peter; Viola, Ivan

    2016-01-01

    2D color maps are often used to visually encode complex data characteristics such as heat or height. The comprehension of color maps in visualization is affected by the display (e.g., a monitor) and the perceptual abilities of the viewer. People with color vision deficiencies, such as red-green blindness, face difficulties when using conventional color maps. We propose a novel method for adapting a color map to an individual person, by having the user sort lines extracted from a given color map.

  14. A framework for breast cancer visualization using augmented reality x-ray vision technique in mobile technology

    Science.gov (United States)

    Rahman, Hameedur; Arshad, Haslina; Mahmud, Rozi; Mahayuddin, Zainal Rasyid

    2017-10-01

    The number of breast cancer patients who require breast biopsy has increased over the past years, and augmented-reality-guided core biopsy of the breast has become the method of choice for researchers. However, existing cancer visualization is limited to superimposing the 3D imaging data only. In this paper, we introduce an augmented reality visualization framework that enables breast cancer biopsy image guidance by using an X-ray vision technique on a mobile display. The framework consists of four phases: it initially acquires CT/MRI images and processes the medical images into 3D slices; secondly, it converts these 3D grayscale slices into a 3D breast tumor model using a 3D model reconstruction technique. Further, in visualization processing, this virtual 3D breast tumor model is enhanced using the X-ray vision technique to see through the skin of the phantom, and the final composition is displayed on a handheld device to optimize the accuracy of the visualization in six degrees of freedom. The framework is perceived as an improved visualization experience because augmented reality X-ray vision allows direct understanding of the breast tumor beyond the visible surface and direct guidance toward accurate biopsy targets.

  15. Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs)

    Directory of Open Access Journals (Sweden)

    Carlos Jaramillo

    2016-02-01

    Full Text Available We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision systems under different circumstances.
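Triangulating 3D points from stereo correspondences amounts to intersecting two back-projected rays. Since real rays rarely intersect exactly, a common choice (assumed here, not necessarily the paper's) is the midpoint of the closest points between the two rays:

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Midpoint of the closest points between two 3D rays (origin + direction).

    Solves for scalars s, t minimizing |(o1 + s*d1) - (o2 + t*d2)|,
    assuming the rays are not parallel.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Two rays that intersect exactly at (1, 1, 5).
p = triangulate_rays(np.array([0., 0., 0.]), np.array([1., 1., 5.]),
                     np.array([2., 0., 0.]), np.array([-1., 1., 5.]))
print(np.round(p, 6))
```

The residual distance between the two closest points also gives a natural input to the kind of uncertainty model the abstract mentions.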

  16. Image-Based 3D Face Modeling System

    Directory of Open Access Journals (Sweden)

    Vladimir Vezhnevets

    2005-08-01

    Full Text Available This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2∼3 minutes.

  17. Virtual reality and 3D animation in forensic visualization.

    Science.gov (United States)

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes for viewers and in the courtroom. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization, creating 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and the level of detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.

  18. Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking

    Science.gov (United States)

    Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas

    2018-01-01

    The University of Florida is taking a multidisciplinary approach to fusing data between 3D vision sensors and radiological sensors, in hopes of creating a system capable of not only detecting the presence of a radiological threat but also tracking it. The key to developing such a vision-aided radiological detection system lies in the count rate being inversely dependent on the square of the distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors, based on the 3D distance from the source to each detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
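The inverse-square dependence of count rate on distance, which the calibration exploits, can be written down directly. The isotropic point-source model and the numbers below are illustrative, not from the paper:

```python
import math

def expected_rate(activity, distance):
    """Count rate of an isotropic point source falls off as 1/distance^2."""
    return activity / (4 * math.pi * distance ** 2)

def distance_from_rate(activity, rate):
    """Invert the inverse-square relation to recover distance from a rate."""
    return math.sqrt(activity / (4 * math.pi * rate))

r = expected_rate(activity=1e6, distance=2.0)
print(round(distance_from_rate(1e6, r), 6))   # recovers the 2.0 m distance
```

Combining such rate-derived ranges from several detectors with the 3D positions supplied by the vision sensors is what makes source localization, and ultimately tracking, possible.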

  19. Fast and flexible 3D object recognition solutions for machine vision applications

    Science.gov (United States)

    Effenberger, Ira; Kühnle, Jens; Verl, Alexander

    2013-03-01

    In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications are in need of reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within the objects. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information in order to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
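Fitting a geometric primitive such as a plane to 3D points is commonly done with an SVD of the centered point cloud. This is a minimal sketch of that standard technique, not the authors' 3D best-fitting code:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point cloud via SVD.

    Returns (centroid, unit normal); the normal is the right singular
    vector associated with the smallest singular value.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Noisy samples of the plane z = 0.
rng = np.random.default_rng(3)
pts = rng.uniform(-1, 1, (200, 3))
pts[:, 2] = 0.001 * rng.standard_normal(200)
c, n = fit_plane(pts)
print(np.round(np.abs(n), 3))   # normal is close to (0, 0, 1)
```

Cylinders and cones need nonlinear fitting, but the same idea applies: locate the primitive whose parameters best explain a local patch of scan data, then infer the pose of the whole work piece from it.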

  20. Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles

    Directory of Open Access Journals (Sweden)

    Miguel Angel Olivares-Mendez

    2016-03-01

    Full Text Available Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption.
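Turning a detected painted line into a steering error can be sketched as an intensity-weighted centroid per image row. This toy example on a synthetic image only illustrates the idea, not the paper's detector:

```python
import numpy as np

def line_offset(img):
    """Lateral offset (in pixels) of a bright line from the image centerline.

    For each row, take the intensity-weighted centroid of the pixels, then
    average over rows that contain signal. Positive = line right of center.
    """
    cols = np.arange(img.shape[1])
    weights = img.sum(axis=1)
    centroids = (img * cols).sum(axis=1) / np.maximum(weights, 1e-9)
    return centroids[weights > 0].mean() - (img.shape[1] - 1) / 2

# Synthetic 100x100 "road" image with a vertical line painted at column 70.
img = np.zeros((100, 100))
img[:, 70] = 1.0
print(line_offset(img))   # 20.5 pixels right of the 49.5 centerline
```

A steering controller would then drive this offset toward zero, while dedicated markers along the line carry the localization and speed information.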

  3. Identification of geometric faces in hand-sketched 3D objects containing curved lines

    Science.gov (United States)

    El-Sayed, Ahmed M.; Wahdan, A. A.; Youssif, Aliaa A. A.

    2017-07-01

    The reconstruction of 3D objects from 2D line drawings is regarded as one of the key topics in the field of computer vision. The ongoing research mainly focuses on the reconstruction of 3D objects that are mapped only from 2D straight lines and that are symmetric in nature. Commonly, this approach only produces basic and simple shapes that are mostly flat or rather polygonized in nature, which is normally attributed to the inability to handle curves. To overcome the above-mentioned limitations, a technique capable of handling non-symmetric drawings that encompass curves is considered. This paper discusses a novel technique that can be used to reconstruct 3D objects containing curved lines. In addition, it highlights an application that has been developed in accordance with the suggested technique that can convert a freehand sketch to a 3D shape using a mobile phone.

  4. Reduced vision in highly myopic eyes without ocular pathology: the ZOC-BHVI high myopia study.

    Science.gov (United States)

    Jong, Monica; Sankaridurg, Padmaja; Li, Wayne; Resnikoff, Serge; Naidoo, Kovin; He, Mingguang

    2018-01-01

    The aim was to investigate the relationship of the magnitude of myopia with visual acuity in highly myopic eyes without ocular pathology. Twelve hundred and ninety-two highly myopic eyes (up to -6.00 DS both eyes, no astigmatic cut-off) with no ocular pathology from the ZOC-BHVI high myopia study in China underwent cycloplegic refraction, followed by subjective refraction, visual acuity testing and axial length measurement. Two logistic regression models were undertaken to test the association of age, gender, refractive error, axial length and parental myopia with reduced vision. Mean group age was 19.0 ± 8.6 years; subjective spherical equivalent refractive error was -9.03 ± 2.73 D; objective spherical equivalent refractive error was -8.90 ± 2.60 D and axial length was 27.0 ± 1.3 mm. Using visual acuity, 82.4 per cent had normal vision, 16.0 per cent had mildly reduced vision, 1.2 per cent had moderately reduced vision, 0.3 per cent had severely reduced vision and no subjects were blind. The percentage with reduced vision increased with spherical equivalent to 74.5 per cent from -15.00 to -39.99 D, with axial length to 67.7 per cent of eyes from 30.01 to 32.00 mm, and with age to 22.9 per cent of those 41 years and over. Spherical equivalent and axial length were significantly associated with reduced vision (p vision. Gender was significant for one model (p = 0.04). Mildly reduced vision is common in high myopia without ocular pathology and is strongly correlated with greater magnitudes of refractive error and axial length. Better understanding is required to minimise reduced vision in high myopes. © 2017 Optometry Australia.

  5. A new form of rapid binocular plasticity in adult with amblyopia

    OpenAIRE

    Zhou, Jiawei; Thompson, Benjamin; Hess, Robert F.

    2013-01-01

    Amblyopia is a neurological disorder of binocular vision affecting up to 3% of the population resulting from a disrupted period of early visual development. Recently, it has been shown that vision can be partially restored by intensive monocular or dichoptic training (4-6 weeks). This can occur even in adults owing to a residual degree of brain plasticity initiated by repetitive and successive sensory stimulation. Here we show that the binocular imbalance that characterizes amblyopia can be r...

  6. Bio-inspired vision

    International Nuclear Information System (INIS)

    Posch, C

    2012-01-01

    Nature still outperforms the most powerful computers in routine functions involving perception, sensing and actuation like vision, audition, and motion control, and is, most strikingly, orders of magnitude more energy-efficient than its artificial competitors. The reasons for the superior performance of biological systems are subject to diverse investigations, but it is clear that the form of hardware and the style of computation in nervous systems are fundamentally different from what is used in artificial synchronous information processing systems. Very generally speaking, biological neural systems rely on a large number of relatively simple, slow and unreliable processing elements and obtain performance and robustness from a massively parallel principle of operation and a high level of redundancy where the failure of single elements usually does not induce any observable system performance degradation. In the late 1980s, Carver Mead demonstrated that silicon VLSI technology can be employed in implementing "neuromorphic" circuits that mimic neural functions and in fabricating building blocks that work like their biological role models. Neuromorphic systems, like the biological systems they model, are adaptive, fault-tolerant and scalable, and process information using energy-efficient, asynchronous, event-driven methods. In this paper, some basics of neuromorphic electronic engineering and its impact on recent developments in optical sensing and artificial vision are presented. It is demonstrated that bio-inspired vision systems have the potential to outperform conventional, frame-based vision acquisition and processing systems in many application fields and to establish new benchmarks in terms of redundancy suppression/data compression, dynamic range, temporal resolution and power efficiency in order to realize advanced functionality like 3D vision, object tracking, motor control and visual feedback loops in real time. It is argued that future artificial vision systems

  7. Project Photofly: New 3d Modeling Online Web Service (case Studies and Assessments)

    Science.gov (United States)

    Abate, D.; Furini, G.; Migliori, S.; Pierattini, S.

    2011-09-01

    During the summer of 2010, Autodesk released a still ongoing project called Project Photofly, freely downloadable from the Autodesk Labs web site until August 1, 2011. Project Photofly, based on computer vision and photogrammetric principles and exploiting the power of cloud computing, is a web service able to convert collections of photographs into 3D models. The aim of our research was to evaluate Project Photofly, through different case studies, for the 3D modeling of cultural heritage monuments and objects, mostly to identify the goals and objects for which it is suitable. The automatic approach will be the main focus of the analysis.

  8. Gamma-Ray imaging for nuclear security and safety: Towards 3-D gamma-ray vision

    Science.gov (United States)

    Vetter, Kai; Barnowksi, Ross; Haefner, Andrew; Joshi, Tenzing H. Y.; Pavlovsky, Ryan; Quiter, Brian J.

    2018-01-01

    The development of portable gamma-ray imaging instruments in combination with the recent advances in sensor and related computer vision technologies enable unprecedented capabilities in the detection, localization, and mapping of radiological and nuclear materials in complex environments relevant for nuclear security and safety. Though multi-modal imaging has been established in medicine and biomedical imaging for some time, the potential of multi-modal data fusion for radiological localization and mapping problems in complex indoor and outdoor environments remains to be explored in detail. In contrast to the well-defined settings in medical or biological imaging associated with small field-of-view and well-constrained extension of the radiation field, in many radiological search and mapping scenarios, the radiation fields are not constrained and objects and sources are not necessarily known prior to the measurement. The ability to fuse radiological with contextual or scene data in three dimensions, in analog to radiological and functional imaging with anatomical fusion in medicine, provides new capabilities enhancing image clarity, context, quantitative estimates, and visualization of the data products. We have developed new means to register and fuse gamma-ray imaging with contextual data from portable or moving platforms. These developments enhance detection and mapping capabilities as well as provide unprecedented visualization of complex radiation fields, moving us one step closer to the realization of gamma-ray vision in three dimensions.

  9. Rethinking GIS Towards The Vision Of Smart Cities Through CityGML

    Science.gov (United States)

    Guney, C.

    2016-10-01

    Smart cities present a substantial growth opportunity in the coming years. The role of GIS in the smart city ecosystem is to integrate different data acquired by sensors in real time and to provide better decisions, more efficiency and improved collaboration. A semantically enriched vision of GIS will help evolve smart cities into tomorrow's much smarter cities, since geospatial/location data and applications may be recognized as a key ingredient of the smart city vision. However, there is a need for the Geospatial Information communities to debate the question "Is 3D Web and mobile GIS technology ready for smart cities?" This research places an emphasis on the challenges of virtual 3D city models on the road to smarter cities.

  10. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Science.gov (United States)

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution through minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against the state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109

  11. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Yanhua Jiang

    2014-09-01

    Full Text Available This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution through minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against the state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments.
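In the algorithm above, the closed-form bicycle-model solution acts as the hypothesis generator inside a RANSAC loop, and the winning hypothesis is then refined on all of its inliers. That hypothesise-verify-refine structure is generic; here is a sketch with a simple line model standing in for the vehicle model (the threshold and iteration count are illustrative, not the authors' values):

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_line(pts: np.ndarray, n_iter: int = 200, tol: float = 0.1):
    """Robustly fit y = a*x + b. Each iteration draws a minimal sample
    (2 points), generates a hypothesis, and counts inliers; the winner
    is refined by least squares on all of its inliers, mirroring the
    hypothesise-and-verify scheme used in the paper."""
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue                      # degenerate sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refinement step: least-squares fit on the winner's inliers
    a, b = np.polyfit(pts[best_inliers, 0], pts[best_inliers, 1], 1)
    return a, b, best_inliers

# 80 near-perfect inliers on y = 2x + 1 plus 20 gross outliers.
x = rng.uniform(0, 10, 80)
inl = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.02, 80)])
out = rng.uniform(0, 10, (20, 2))
a, b, mask = ransac_line(np.vstack([inl, out]))
```

The recovered `a` and `b` land close to 2 and 1 despite 20% gross outliers; the paper's contribution is in replacing the generic 2-point line solver with a vehicle-model-aware minimal solver.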

  13. Overview of fast algorithm in 3D dynamic holographic display

    Science.gov (United States)

    Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-08-01

    3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information must be processed and computed in real time to generate the hologram in a 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed to speed up the calculation and reduce memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT) and novel look-up table (N-LUT) based on the point-based method, as well as the full analytical polygon-based methods and the one-step polygon-based method based on the polygon-based method. In this presentation, we overview various fast algorithms based on the point-based method and the polygon-based method, focusing on the fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method derived from the 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient at saving calculation time. It is believed that these methods could be used in real-time 3D holographic displays in the future.
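The look-up-table family of algorithms mentioned above trades memory for speed: the wavefront contribution of a point source at a given depth is precomputed once, then reused (shifted laterally) for every object point at that depth, instead of re-evaluating the Fresnel term for every hologram pixel and every point. A toy point-based sketch of that idea (parameters are illustrative; this is the plain LUT concept, not the compressed C-LUT, and the wrap-around of `np.roll` is ignored for simplicity):

```python
import numpy as np

# Hologram plane and optical parameters (illustrative values).
N, pitch, wl = 256, 8e-6, 633e-9          # pixels, pixel pitch (m), wavelength (m)
ys, xs = np.mgrid[:N, :N] * pitch
depths = [0.10, 0.12, 0.15]               # object depth layers (m)

# Look-up table: one Fresnel zone pattern per depth layer. A point at
# depth z centred on the hologram contributes exp(i*pi*r^2 / (wl*z)).
centre = N // 2 * pitch
lut = {z: np.exp(1j * np.pi * ((xs - centre) ** 2 + (ys - centre) ** 2) / (wl * z))
       for z in depths}

def hologram(points):
    """Sum precomputed LUT patterns, shifted to each point's position.
    points: iterable of (row, col, z) with z drawn from `depths`."""
    H = np.zeros((N, N), dtype=complex)
    for r, c, z in points:
        # Lateral shift realised by rolling the precomputed pattern,
        # instead of re-evaluating the exponential per object point.
        H += np.roll(np.roll(lut[z], r - N // 2, axis=0), c - N // 2, axis=1)
    return H

H = hologram([(100, 80, 0.10), (150, 200, 0.12)])
```

The memory cost is one complex N×N pattern per depth layer, which is exactly what the S-LUT and C-LUT variants then work to reduce further.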

  14. A flexible 3D laser scanning system using a robotic arm

    Science.gov (United States)

    Fei, Zixuan; Zhou, Xiang; Gao, Xiaofei; Zhang, Guanliang

    2017-06-01

    In this paper, we present a flexible 3D scanning system based on a MEMS scanner mounted on an industrial arm with a turntable. This system has 7 degrees of freedom and is able to conduct a full-field scan from any angle, making it suitable for scanning objects with complex shapes. Existing non-contact 3D scanning systems usually use a laser scanner that projects a fixed stripe, mounted on a Coordinate Measuring Machine (CMM) or an industrial robot, and cannot perform path planning without CAD models. The 3D scanning system presented in this paper can scan objects without CAD models, and the corresponding path planning method is introduced in the paper. We also propose a practical approach to calibrating the hand-in-eye system based on binocular stereo vision, and analyze the errors of the hand-eye calibration.
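The paper's hand-in-eye calibration procedure is not reproduced here, but at the core of both that calibration and multi-view scan registration lies a standard subproblem: estimating the rigid transform that best aligns two corresponding 3D point sets. It has a closed-form solution via the Kabsch algorithm, sketched below:

```python
import numpy as np

def kabsch(P: np.ndarray, Q: np.ndarray):
    """Least-squares rigid transform (R, t) with R @ P_i + t ~= Q_i,
    for corresponding Nx3 point sets P and Q (Kabsch algorithm)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Verify on a known rotation (30 degrees about z) plus a translation.
th = np.radians(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -1.0, 2.0])
P = np.random.default_rng(1).uniform(-1, 1, (50, 3))
Q = P @ R_true.T + t_true
R, t = kabsch(P, Q)
```

With noise-free correspondences the true rotation and translation are recovered to machine precision; in a scanning pipeline the correspondences would come from calibration targets or matched features across views.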

  15. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    Science.gov (United States)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregular shape objects with different 3-dimensional (3D) appearances are difficult to be shaped into customized uniform pattern by current laser machining approaches. A laser galvanometric scanning system (LGS) could be a potential candidate since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool that is capable of generating 3D-adjusted laser processing path by measuring the 3D geometry of those irregular shape objects. This paper proposed the stereo vision laser galvanometric scanning system (SLGS), which takes the advantages of both the stereo vision solution and conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, these two independent components are integrated through a system calibration method using plastic thin film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  16. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement

    Directory of Open Access Journals (Sweden)

    Suzhi Xiao

    2016-04-01

    Full Text Available In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval of spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.
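The record above estimates the parameters of an extended 'phase to 3D coordinates' model by least squares. As a greatly simplified stand-in, the same estimation step can be illustrated with a per-pixel polynomial phase-to-height model fitted to reference-plane calibration data (the quadratic model and all numeric values here are hypothetical, not the paper's):

```python
import numpy as np

# Calibration data: unwrapped phases measured on reference planes
# at known heights (synthetic, generated from a known polynomial).
phi = np.linspace(0.5, 6.0, 40)                 # unwrapped phase samples
h_true = 0.2 + 1.3 * phi - 0.05 * phi ** 2      # known heights (mm)

# Design matrix for h ~= c0 + c1*phi + c2*phi^2, solved by linear least
# squares - mirroring the paper's parameter-estimation step.
A = np.column_stack([np.ones_like(phi), phi, phi ** 2])
coeffs, *_ = np.linalg.lstsq(A, h_true, rcond=None)

def phase_to_height(p: float) -> float:
    """Apply the calibrated phase-to-height mapping to a new phase value."""
    return coeffs[0] + coeffs[1] * p + coeffs[2] * p ** 2
```

Because the synthetic data are noise-free, `lstsq` recovers the generating coefficients (0.2, 1.3, -0.05) essentially exactly; with real measurements the residuals would quantify the model's remaining error.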

  17. Human body motion tracking based on quantum-inspired immune cloning algorithm

    Science.gov (United States)

    Han, Hong; Yue, Lichuan; Jiao, Licheng; Wu, Xing

    2009-10-01

    In a static monocular camera system, recovering a perfect 3D human body posture remains a great challenge for computer vision. This paper presents human posture recognition from video sequences using the Quantum-Inspired Immune Cloning Algorithm (QICA). The algorithm comprises three parts. Firstly, prior knowledge of the human body is used: the key joint points are detected automatically from the human contours and from the skeletons thinned from those contours. Secondly, owing to the complexity of human movement, a forecasting mechanism for occluded joint points is introduced to obtain optimal 2D key joint points of the human body. Finally, the pose estimate is recovered by using QICA to optimize the match between the 2D projection of the 3D human key joint points and the 2D detected key joint points. This recovers the movement of the human body well, because the algorithm can acquire not only the global optimal solution but also local optimal solutions.
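The optimization described above scores candidate postures by the discrepancy between the 2D projection of the 3D key joint points and the detected 2D joints. That fitness function, with a basic pinhole projection and an illustrative focal length (QICA itself, the search procedure, is not sketched here), can be written as:

```python
import numpy as np

def project(joints_3d: np.ndarray, f: float = 800.0) -> np.ndarray:
    """Pinhole projection of Nx3 camera-frame joints to Nx2 pixel coords."""
    return f * joints_3d[:, :2] / joints_3d[:, 2:3]

def fitness(candidate_3d: np.ndarray, detected_2d: np.ndarray) -> float:
    """Sum of squared reprojection errors - the quantity a search such as
    QICA would minimise over candidate 3D postures."""
    return float(np.sum((project(candidate_3d) - detected_2d) ** 2))

# Two 3D joints and their exact 2D detections: the true posture scores 0.
truth = np.array([[0.0, 0.5, 3.0],
                  [0.2, -0.1, 3.5]])
detected = project(truth)
```

Any optimizer (genetic, immune-clonal, gradient-based) can then be plugged in to minimise `fitness` over the space of candidate joint configurations.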

  18. Error Evaluation in a Stereovision-Based 3D Reconstruction System

    Directory of Open Access Journals (Sweden)

    Kohler Sophie

    2010-01-01

    Full Text Available The work presented in this paper deals with the performance analysis of the whole 3D reconstruction process of imaged objects, specifically of the set of geometric primitives describing their outline, extracted from a pair of images with known camera models. The proposed analysis focuses on error estimation for the edge detection process, the starting step of the whole reconstruction procedure. The fitting parameters describing the geometric features of the workpiece to be evaluated are used as quality measures to determine error bounds and finally to estimate the edge detection errors. These error estimates are then propagated up to the final 3D reconstruction step. The suggested error analysis procedure for stereovision-based reconstruction tasks further allows evaluating the quality of the 3D reconstruction. The resulting final error estimates make it possible to state whether the reconstruction results fulfill a priori defined criteria, for example dimensional constraints including tolerance information, in vision-based quality control applications.
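The error-propagation idea above can be illustrated numerically: perturb an image measurement by its estimated edge-detection error and observe the induced error in the 3D reconstruction. A sketch for a rectified stereo pair (the geometry and the half-pixel error bound are illustrative, not taken from the paper):

```python
import numpy as np

f, B = 700.0, 0.12   # focal length (px) and stereo baseline (m), illustrative

def triangulate(xl: float, xr: float, y: float) -> np.ndarray:
    """Point from a rectified stereo pair: Z = f*B/d with disparity
    d = xl - xr, then X = xl*Z/f and Y = y*Z/f."""
    d = xl - xr
    Z = f * B / d
    return np.array([xl * Z / f, y * Z / f, Z])

P = triangulate(30.0, 10.0, 5.0)   # nominal point, disparity 20 px

# Propagate a +/-0.5 px edge-localisation error on xl to a 3D error bound
# by evaluating the reconstruction at both extremes of the interval.
eps = 0.5
dP = max(np.linalg.norm(triangulate(30.0 + s * eps, 10.0, 5.0) - P)
         for s in (-1.0, 1.0))
```

Here the nominal depth is 4.2 m and the half-pixel disparity error induces roughly a 10 cm 3D error bound, dominated by the depth component; the paper's procedure carries such bounds analytically from edge fitting through to the reconstructed primitives.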

  19. The 3. industrial revolution according to Jeremy Rifkin: vision or utopia?; La 3. revolution industrielle selon Jeremy Rifkin: vision ou utopie?

    Energy Technology Data Exchange (ETDEWEB)

    Bacher, P. [Academie des Technologies, 75 - Paris (France)

    2008-11-15

    Is the civilization of hydrogen on its way? This is what Jeremy Rifkin claims, who is announcing the 3. industrial revolution, based on electricity produced in an entirely decentralized manner from renewable energy and stored in the form of hydrogen produced by water electrolysis. This article analyses the three main 'pillars' of this industrial revolution and concludes that it is much more a matter of utopia than a 'vision'. (author)

  20. Joint optic disc and cup boundary extraction from monocular fundus images.

    Science.gov (United States)

    Chakravarty, Arunava; Sivaswamy, Jayanthi

    2017-08-01

    Accurate segmentation of optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel kink based cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map (derived from Optical Coherence Tomography) pairs. The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved a good glaucoma classification performance with an average AUC of 0.85 for five fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.
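The Dice coefficients reported above follow the standard overlap definition between a predicted and a ground-truth binary mask; for reference:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient 2|A intersect B| / (|A| + |B|) between two binary
    masks; 1.0 means perfect overlap, 0.0 means none."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Two offset 6x6 square masks on a 10x10 grid (stand-ins for a predicted
# and a ground-truth optic cup region).
pred = np.zeros((10, 10), int); pred[2:8, 2:8] = 1
gt = np.zeros((10, 10), int);   gt[3:9, 3:9] = 1
score = dice(pred, gt)   # overlap of 25 pixels between two 36-pixel regions
```

The 25-pixel overlap between two 36-pixel regions gives a Dice score of 50/72, about 0.69; the 0.83-0.97 range reported in the record corresponds to much tighter agreement.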

  1. A calibration system for measuring 3D ground truth for validation and error analysis of robot vision algorithms

    Science.gov (United States)

    Stolkin, R.; Greig, A.; Gilby, J.

    2006-10-01

    An important task in robot vision is that of determining the position, orientation and trajectory of a moving camera relative to an observed object or scene. Many such visual tracking algorithms have been proposed in the computer vision, artificial intelligence and robotics literature over the past 30 years. However, it is seldom possible to explicitly measure the accuracy of these algorithms, since the ground-truth camera positions and orientations at each frame in a video sequence are not available for comparison with the outputs of the proposed vision systems. A method is presented for generating real visual test data with complete underlying ground truth. The method enables the production of long video sequences, filmed along complicated six-degree-of-freedom trajectories, featuring a variety of objects and scenes, for which complete ground-truth data are known including the camera position and orientation at every image frame, intrinsic camera calibration data, a lens distortion model and models of the viewed objects. This work encounters a fundamental measurement problem—how to evaluate the accuracy of measured ground truth data, which is itself intended for validation of other estimated data. Several approaches for reasoning about these accuracies are described.
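Given ground-truth camera positions of the kind this calibration system produces, a visual tracking algorithm's output can be scored, for example, as the root-mean-square position error per frame. This is one common metric choice, not one mandated by the paper:

```python
import numpy as np

def position_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """Root-mean-square error between estimated and ground-truth camera
    positions (both Nx3, one row per video frame)."""
    return float(np.sqrt(np.mean(np.sum((est - gt) ** 2, axis=1))))

# Synthetic ground-truth trajectory and a noisy "estimated" trajectory.
gt = np.cumsum(np.full((100, 3), 0.01), axis=0)   # smooth straight path
est = gt + np.random.default_rng(2).normal(0.0, 0.005, gt.shape)
err = position_rmse(est, gt)
```

A fuller evaluation would also compare orientations (e.g. rotation-angle error per frame), which the ground-truth data described in the record equally supports.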

  2. Structured Light-Based 3D Reconstruction System for Plants

    Directory of Open Access Journals (Sweden)

    Thuy Tuong Nguyen

    2015-07-01

    Full Text Available Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.

  3. Structured Light-Based 3D Reconstruction System for Plants.

    Science.gov (United States)

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-07-29

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
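The leaf-detection figures quoted above (recall 0.97, precision 0.89) follow the standard detection-metric definitions; for reference, with illustrative counts chosen to land near those values:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Standard detection metrics: precision = TP / (TP + FP),
    recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Illustrative counts (not the paper's raw data): out of 100 true leaves,
# 97 are found (3 missed), alongside 12 spurious detections.
p, r = precision_recall(tp=97, fp=12, fn=3)
```

These counts give a recall of exactly 0.97 and a precision of about 0.89, matching the reported operating point.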

  4. Comparative Study of 2D and 3D Optical Imaging Systems: Laparoendoscopic Single-Site Surgery in an Ex Vivo Model.

    Science.gov (United States)

    Vilaça, Jaime; Pinto, José Pedro; Fernandes, Sandra; Costa, Patrício; Pinto, Jorge Correia; Leão, Pedro

    2017-12-01

    Usually laparoscopy is performed by means of a 2-dimensional (2D) image system and multiport approach. To overcome the lack of depth perception, new 3-dimensional (3D) systems are arising with the added advantage of providing stereoscopic vision. To further reduce surgery-related trauma, there are new minimally invasive surgical techniques being developed, such as LESS (laparoendoscopic single-site) surgery. The aim of this study was to compare 2D and 3D laparoscopic systems in LESS surgical procedures. All participants were selected from different levels of experience in laparoscopic surgery-10 novices, 7 intermediates, and 10 experts were included. None of the participants had had previous experience in LESS surgery. Participants were chosen randomly to begin their experience with either the 2D or 3D laparoscopic system. The exercise consisted of performing an ex vivo pork cholecystectomy through a SILS port with the assistance of a fixed distance laparoscope. Errors, time, and participants' preference were recorded. Statistical analysis of time and errors between groups was conducted with a Student's t test (using independent samples) and the Mann-Whitney test. In all 3 groups, the average time with the 2D system was significantly reduced after having used the 3D system ( P 3D system. This study suggests that the 3D system may improve the learning curve and that learning from the 3D system is transferable to the 2D environment. Additionally, the majority of participants prefer 3D equipment.

  5. ERROR DETECTION BY ANTICIPATION FOR VISION-BASED CONTROL

    Directory of Open Access Journals (Sweden)

    A ZAATRI

    2001-06-01

    Full Text Available A vision-based control system has been developed. It enables a human operator to remotely direct a robot, equipped with a camera, towards targets in 3D space by simply pointing at their images with a pointing device. This paper presents an anticipatory system, which has been designed to improve the safety and the effectiveness of the vision-based commands. It simulates these commands in a virtual environment and attempts to detect hard contacts that may occur between the robot and its environment, which can be caused by machine errors or operator errors as well.
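The anticipation step, simulating a commanded motion in a virtual model and refusing it if a contact is predicted, can be sketched with axis-aligned bounding boxes standing in for the robot tool and its environment (a toy model with illustrative geometry, not the paper's simulator):

```python
import numpy as np

def aabb_overlap(a_min, a_max, b_min, b_max) -> bool:
    """True if two axis-aligned bounding boxes intersect."""
    return bool(np.all(np.asarray(a_max) >= np.asarray(b_min)) and
                np.all(np.asarray(b_max) >= np.asarray(a_min)))

def command_is_safe(tool_min, tool_max, motion, obstacles, steps=20):
    """Simulate the tool box moving in a straight line in small steps and
    reject the command if any intermediate pose collides with an obstacle."""
    tool_min = np.asarray(tool_min, float)
    tool_max = np.asarray(tool_max, float)
    step = np.asarray(motion, float) / steps
    for _ in range(steps):
        tool_min, tool_max = tool_min + step, tool_max + step
        for o_min, o_max in obstacles:
            if aabb_overlap(tool_min, tool_max, o_min, o_max):
                return False      # anticipated contact: refuse the command
    return True

obstacles = [((0.4, 0.0, 0.0), (0.6, 1.0, 1.0))]   # a wall slab across x
safe = command_is_safe((0.0, 0.2, 0.2), (0.1, 0.4, 0.4),
                       motion=(1.0, 0.0, 0.0), obstacles=obstacles)
```

Driving the tool 1 m along x runs it through the wall, so `safe` is `False` and the command would be blocked before reaching the real robot; a motion parallel to the wall passes the check.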

  6. When the display matters: A multifaceted perspective on 3D geovisualizations

    Directory of Open Access Journals (Sweden)

    Juřík Vojtěch

    2017-04-01

    Full Text Available This study explores the influence of stereoscopic (real 3D) and monoscopic (pseudo 3D) visualization on the human ability to reckon altitude information in non-interactive and interactive 3D geovisualizations. A two-phased experiment was carried out to compare the performance of two groups of participants, one of them using the real 3D and the other one the pseudo 3D visualization of geographical data. A homogeneous group of 61 psychology students, inexperienced in the processing of geographical data, were tested with respect to their efficiency at identifying altitudes of the displayed landscape. The first phase of the experiment was designed as non-interactive, where static 3D visual displays were presented; the second phase was designed as interactive, and the participants were allowed to explore the scene by adjusting the position of the virtual camera. The investigated variables included accuracy at altitude identification, time demands and the amount of the participant's motor activity performed during interaction with the geovisualization. The interface was created using a Motion Capture system, a Wii Remote Controller, widescreen projection and the passive Dolby 3D technology (for real 3D vision). The real 3D visual display was shown to significantly increase the accuracy of landscape altitude identification in non-interactive tasks. As expected, in the interactive phase the differences in accuracy between groups flattened out due to the possibility of interaction, with no other statistically significant differences in completion times or motor activity. The increased number of omitted objects in the real 3D condition was further subjected to an exploratory analysis.

  7. Global Value Chains from a 3D Printing Perspective

    DEFF Research Database (Denmark)

    Laplume, André O; Petersen, Bent; Pearce, Joshua M.

    2016-01-01

    This article outlines the evolution of additive manufacturing technology, culminating in 3D printing, and presents a vision of how this evolution is affecting existing global value chains (GVCs) in production. In particular, we bring up questions about how this new technology can affect the geographic span and density of GVCs. Wider adoption of this technology has the potential to partially reverse the trend towards global specialization of production systems into elements that may be geographically dispersed and closer to the end users (localization). This leaves the question...

  8. Neuropharmacology of vision in goldfish: a review.

    Science.gov (United States)

    Mora-Ferrer, Carlos; Neumeyer, Christa

    2009-05-01

    The goldfish is one of the few animals exceptionally well analyzed in behavioral experiments and also in electrophysiological and neuroanatomical investigations of the retina. To get insight into the functional organization of the retina we studied color vision, motion detection and temporal resolution before and after intra-ocular injection of neuropharmaca with known effects on retinal neurons. Bicuculline, strychnine, curare, atropine, and dopamine D1- and D2-receptor antagonists were used. The results reviewed here indicate separate and parallel processing of L-cone contribution to different visual functions, and the influence of several neurotransmitters (dopamine, acetylcholine, glycine, and GABA) on motion vision, color vision, and temporal resolution.

  9. Computer Vision Using Local Binary Patterns

    CERN Document Server

    Pietikainen, Matti; Zhao, Guoying; Ahonen, Timo

    2011-01-01

    The recent emergence of Local Binary Patterns (LBP) has led to significant progress in applying texture methods to various computer vision problems and applications. The focus of this research has broadened from 2D textures to 3D textures and spatiotemporal (dynamic) textures. Also, where texture was once utilized for applications such as remote sensing, industrial inspection and biomedical image analysis, the introduction of LBP-based approaches has provided outstanding results in problems relating to face and activity analysis, with future scope for face and facial expression recognition, b

  10. 3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging

    Science.gov (United States)

    Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak

    2017-10-01

    Three-dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security uses, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video as conventionally performed (such as background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. The object detection is performed by analyzing static 3D image data obtained through computational integral imaging. With regard to previous works that used integral imaging data in such a scenario, the proposed method performs the 3D tracking of objects without prior information about the objects in the scene, and it is found to be efficient under severe noise conditions.

  11. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    Full Text Available A 3D city model is a digital representation of the Earth’s surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: the first is sketch-based modeling, the second is procedural grammar-based modeling, the third is close-range photogrammetry-based modeling, and the fourth is based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively, and each has different methods suitable for image-based 3D city modeling. A literature study shows that, to date, no complete comparative study of creating a full 3D city model from images is available. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and the output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and comments on what can and cannot be done with each software package. The study concludes that every package has advantages and limitations, and the choice of software depends on the user's requirements for the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good

  12. Tenth International RETRAN Conference Overview: RETRAN's Role in Supporting the Nuclear Industry's Vision

    International Nuclear Information System (INIS)

    Agee, Lance J.

    2003-01-01

    The nuclear industry's current 'vision' for 2020 is for U.S. nuclear power to grow to a 23% share of generation by 2020. To support this vision, the Electric Power Research Institute's Nuclear Power Division has developed a strategic bridge plan; the major objectives of the plan are discussed. Of key importance is the U.S. Nuclear Regulatory Commission (NRC) staff's proposed framework for risk-informed regulations. RETRAN-3D will undoubtedly be used by the industry to support Risk-Informed Regulation, specifically option 3. The reason that RETRAN-3D is the most logical tool to support Risk-Informed Regulation is that in January 2001 the NRC issued a safety evaluation report (SER) on RETRAN-3D. The significance of the SER to the RETRAN community is described, and a list of the most important SER conditions is provided. Next, the new and unique applications of RETRAN-3D are referenced. Finally, a discussion of the future direction of safety software indicates what the author feels is needed to adequately support both existing plant upgrades and future plant designs.

  13. [Quality system Vision 2000].

    Science.gov (United States)

    Pasini, Evasio; Pitocchi, Oreste; de Luca, Italo; Ferrari, Roberto

    2002-12-01

    A recent document of the Italian Ministry of Health points out that all structures which provide services to the National Health System should implement a Quality System according to the ISO 9000 standards. Vision 2000 is the new version of the ISO standard and is less bureaucratic than the old version. The specific requirements of Vision 2000 are: a) to identify, monitor and analyze the processes of the structure; b) to measure the results of the processes so as to ensure that they are effective; c) to implement the actions necessary to achieve the planned results and the continual improvement of these processes; d) to identify customer requests and to measure customer satisfaction. Specific attention should also be dedicated to the competence and training of the personnel involved in the processes. The principles of Vision 2000 agree with the principles of total quality management. The present article illustrates the Vision 2000 standard and provides practical examples of its implementation in cardiology departments.

  14. Visions and visioning in foresight activities

    DEFF Research Database (Denmark)

    Jørgensen, Michael Søgaard; Grosu, Dan

    2007-01-01

    The paper discusses the roles of visioning processes and visions in foresight activities and in societal discourses and changes parallel to or following foresight activities. The overall topic can be characterised as the dynamics and mechanisms that make visions and visioning processes work...... or not work. The theoretical part of the paper presents an actor-network theory approach to the analyses of visions and visioning processes, where the shaping of the visions and the visioning and what has made them work or not work is analysed. The empirical part is based on analyses of the roles of visions...... and visioning processes in a number of foresight processes from different societal contexts. The analyses have been carried out as part of the work in the COST A22 network on foresight. A vision is here understood as a description of a desirable or preferable future, compared to a scenario which is understood...

  15. Understanding Your Vision: The "Imperfect Eye"

    Science.gov (United States)

    ... eye," amblyopia is the most common cause of visual impairment among children. The condition affects about two-to-three out ... the most common cause of monocular (one-eye) visual impairment among children and young and middle-aged adults. Helping kids ...

  16. Semi-automatic registration of 3D orthodontics models from photographs

    Science.gov (United States)

    Destrez, Raphaël.; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-03-01

    In orthodontics, a common practice used to diagnose and plan treatment is the dental cast. After digitization by a CT scanner or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method that automatically computes the registration from photographs of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation that brings the mandible into contact with the maxilla can be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion realized by a specialist.
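The registration step described above, minimizing reprojection error over a rigid transform, can be sketched generically. This is an illustration, not the authors' implementation: the intrinsics, the synthetic "mandible landmark" points, and the `rodrigues`/`project` helpers are all hypothetical, and a simple pinhole camera model is assumed.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(r):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(pts3d, K, R, t):
    """Pinhole projection of Nx3 points to Nx2 pixel coordinates."""
    cam = pts3d @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

# Hypothetical intrinsics and synthetic 3D landmark points in front of the camera.
K_cam = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1.0, 1.0, (12, 3)) + np.array([0.0, 0.0, 6.0])

r_true = np.array([0.10, -0.05, 0.20])   # ground-truth rotation (axis-angle)
t_true = np.array([0.30, -0.10, 0.50])   # ground-truth translation
obs2d = project(pts3d, K_cam, rodrigues(r_true), t_true)  # 'observed' image points

def residuals(x):
    # 6-DoF rigid transform: 3 rotation + 3 translation parameters.
    return (project(pts3d, K_cam, rodrigues(x[:3]), x[3:]) - obs2d).ravel()

# Minimize the reprojection error over the rigid transform.
sol = least_squares(residuals, x0=np.zeros(6))
rmse = np.sqrt(np.mean(sol.fun ** 2))
```

On this noiseless synthetic data the solver recovers the ground-truth transform; with real photographs the residual would reflect detection and calibration error.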

  17. The prevalence and causes of decreased visual acuity – a study based on vision screening conducted at Enukweni and Mzuzu Foundation Primary Schools, Malawi

    Directory of Open Access Journals (Sweden)

    Thom L

    2016-12-01

    Full Text Available Leaveson Thom,1 Sanchia Jogessar,1,2 Sara L McGowan,1 Fiona Lawless,1,2 1Department of Optometry, Mzuzu University, Mzuzu, Malawi; 2Brienholden Vision Institute, Durban, South Africa Aim: To determine the prevalence and causes of decreased visual acuity (VA) among pupils recruited in two primary schools in Mzimba district, northern region of Malawi. Materials and methods: The study was based on vision screening conducted by optometrists at Enukweni and Mzuzu Foundation Primary Schools. The measurements during the screening included unaided distance monocular VA using a Low Vision Resource Center and Snellen chart, pinhole VA on any subject with VA of less than 6/6, refraction, pupil evaluations, ocular movements, ocular health, and the shadow test. Results: The prevalence of decreased VA was found to be low in the school-going population (4%, n=594). Even though Enukweni Primary School had fewer participants than Mzuzu Foundation Primary School, it had a higher prevalence of decreased VA (5.8%, n=275) than Mzuzu Foundation Primary School (1.8%, n=319). The principal causes of decreased VA in this study were found to be amblyopia and uncorrected refractive errors, with myopia being a more common cause than hyperopia. Conclusion: Based on the low prevalence of decreased VA due to myopia or hyperopia, it should not be concluded that refractive errors are an insignificant contributor to visual disability in Malawi. More vision screenings are required at a large scale on the school-aged population to reflect the real situation on the ground. Cost-effective strategies are needed to address this easily treatable cause of vision impairment. Keywords: vision screening, refractive errors, visual acuity, Enukweni, Mzuzu foundation

  18. Rapidly 3D Texture Reconstruction Based on Oblique Photography

    Directory of Open Access Journals (Sweden)

    ZHANG Chunsen

    2015-07-01

    Full Text Available This paper proposes a fast city-texture reconstruction method based on oblique aerial images for building three-dimensional city models. Based on photogrammetry and computer vision theory, and using a digital surface model of the city buildings obtained in a prior processing step, the collinearity equations are used to compute the geometric projection between object space and image space and thus obtain the three-dimensional structure and texture information. An optimization algorithm then selects the best texture for each object surface, enabling automatic extraction of building facade textures and occlusion handling in densely built-up areas. Real-image texture reconstruction results show that the method offers a high degree of automation, vivid results and low cost, and provides an effective means for the rapid, wide-area reconstruction of real textures for 3D city models.
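The collinearity equations mentioned in the abstract above relate an object-space point to its image coordinates through the camera position and orientation. A minimal sketch of the textbook form follows; the omega-phi-kappa parameterization and the sign convention for the image axes are assumptions (conventions vary between photogrammetric systems), and the numbers are illustrative only.

```python
import numpy as np

def rotation_opk(omega, phi, kappa):
    """Rotation matrix from omega-phi-kappa angles (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def collinearity(X, X0, R, f):
    """Project object-space point X into image coordinates (x, y)
    for a camera at X0 with orientation R and focal length f."""
    d = R.T @ (X - X0)              # point expressed in camera axes
    return -f * d[0] / d[2], -f * d[1] / d[2]

# Nadir-looking camera 100 m above the ground, f = 50 mm; the imaged
# ground point lies 10 m from the point directly below the camera.
x_img, y_img = collinearity(np.array([10.0, 0.0, 0.0]),
                            np.array([0.0, 0.0, 100.0]),
                            rotation_opk(0.0, 0.0, 0.0), 0.05)
```

With this geometry the point maps to 5 mm off the principal point along the x axis, as the 100:1 scale ratio suggests.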

  19. V-Man Generation for 3-D Real Time Animation. Chapter 5

    Science.gov (United States)

    Nebel, Jean-Christophe; Sibiryakov, Alexander; Ju, Xiangyang

    2007-01-01

    The V-Man project has developed an intuitive authoring and intelligent system to create, animate, control and interact in real-time with a new generation of 3D virtual characters: The V-Men. It combines several innovative algorithms coming from Virtual Reality, Physical Simulation, Computer Vision, Robotics and Artificial Intelligence. Given a high-level task like "walk to that spot" or "get that object", a V-Man generates the complete animation required to accomplish the task. V-Men synthesise motion at runtime according to their environment, their task and their physical parameters, drawing upon a unique set of skills created during character creation. The key to the system is the automated creation of realistic V-Men, not requiring the expertise of an animator. It is based on real human data captured by 3D static and dynamic body scanners, which is then processed to generate firstly animatable body meshes, secondly 3D garments and finally skinned body meshes.

  20. Contribution to the tracking and the 3D reconstruction of scenes composed of torus from image sequences a acquired by a moving camera; Contribution au suivi et a la reconstruction de scenes constituees d`objet toriques a partir de sequences d`images acquises par une camera mobile

    Energy Technology Data Exchange (ETDEWEB)

    Naudet, S

    1997-01-31

    The three-dimensional perception of the environment is often necessary for a robot to correctly perform its tasks. One solution, based on dynamic vision, consists of analysing time-varying monocular images to estimate the spatial geometry of the scene. This thesis deals with the reconstruction of tori by dynamic vision. Though this object class is restrictive, it makes it possible to tackle the problem of reconstructing the bent pipes usually encountered in industrial environments. The proposed method is based on the evolution of the apparent contours of objects in the sequence. Using the expression of the torus limb boundaries, it is possible to recursively estimate the object's three-dimensional parameters by minimising the error between the predicted projected contours and the image contours. This process, which is performed by a Kalman filter, needs neither precise knowledge of the camera displacement nor any matching of the two limbs belonging to the same object. To complete this work, a temporal tracking of objects which deals with occlusion situations is proposed. The approach consists of modeling and interpreting the apparent motion of objects in the successive images. The motion interpretation, based on a simplified representation of the scene, allows pertinent three-dimensional information to be recovered and used to manage occlusion situations. Experiments on synthetic and real images prove the validity of the tracking and reconstruction processes. (author) 127 refs.

  1. Computer vision for shoe upper profile measurement via upper and sole conformal matching

    Science.gov (United States)

    Hu, Zhongxu; Bicker, Robert; Taylor, Paul; Marshall, Chris

    2007-01-01

    This paper describes a structured light computer vision system applied to the measurement of the 3D profile of shoe uppers. The trajectory obtained is used to guide an industrial robot for automatic edge roughing around the contour of the shoe upper so that the bonding strength can be improved. Due to the specific contour and unevenness of the shoe upper, even if the 3D profile is obtained using computer vision, it is still difficult to reliably define the roughing path around the shape. However, the shape of the corresponding shoe sole is better defined, and it is much easier to measure the edge using computer vision. Therefore, a feasible strategy is to measure both the upper and sole profiles, and then align and fit the sole contour to the upper, in order to obtain the best fit. The trajectory of the edge of the desired roughing path is calculated and is then smoothed and interpolated using NURBS curves to guide an industrial robot for shoe upper surface removal; experiments show robust and consistent results. An outline description of the structured light vision system is given here, along with the calibration techniques used.

  2. A wearable mobility device for the blind using retina-inspired dynamic vision sensors.

    Science.gov (United States)

    Ghaderi, Viviane S; Mulas, Marcello; Pereira, Vinicius Felisberto Santos; Everding, Lukas; Weikersdorfer, David; Conradt, Jorg

    2015-01-01

    Proposed is a prototype of a wearable mobility device which aims to assist the blind with navigation and object avoidance via auditory-vision-substitution. The described system uses two dynamic vision sensors and event-based information processing techniques to extract depth information. The 3D visual input is then processed using three different strategies, and converted to a 3D output sound using an individualized head-related transfer function. The performance of the device with different processing strategies is evaluated via initial tests with ten subjects. The outcome of these tests demonstrates promising performance of the system after only very short training times of a few minutes, owing to the minimal encoding of outputs from the vision sensors, which are translated into simple sound patterns easily interpretable by the user. The envisioned system will allow for efficient real-time algorithms on a hands-free and lightweight device with exceptional battery lifetime.

  3. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    Science.gov (United States)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The transition of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done on the technical side of this transition, but how the third dimension interacts with human viewers is not yet clear. Previously, it was found that any increased load on the visual system can create visual fatigue, as in prolonged TV watching, computer work or video gaming. Watching S3D, however, can cause a different kind of visual fatigue, since all S3D technologies create the illusion of the third dimension by exploiting characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue from watching 2D and S3D content. This work shows the difference in the accumulation of visual fatigue and its assessment for the two types of content. To perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and gave subjective evaluations. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

  4. Amblyopia and binocular vision.

    Science.gov (United States)

    Birch, Eileen E

    2013-03-01

    Amblyopia is the most common cause of monocular visual loss in children, affecting 1.3%-3.6% of children. Current treatments are effective in reducing the visual acuity deficit but many amblyopic individuals are left with residual visual acuity deficits, ocular motor abnormalities, deficient fine motor skills, and risk for recurrent amblyopia. Using a combination of psychophysical, electrophysiological, imaging, risk factor analysis, and fine motor skill assessment, the primary role of binocular dysfunction in the genesis of amblyopia and the constellation of visual and motor deficits that accompany the visual acuity deficit has been identified. These findings motivated us to evaluate a new, binocular approach to amblyopia treatment with the goals of reducing or eliminating residual and recurrent amblyopia and of improving the deficient ocular motor function and fine motor skills that accompany amblyopia. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Chronic intraventricular administration of lysergic acid diethylamide (LSD) affects the sensitivity of cortical cells to monocular deprivation.

    Science.gov (United States)

    McCall, M A; Tieman, D G; Hirsch, H V

    1982-11-04

    In kittens, but not in adult cats, depriving one eye of pattern vision by suturing the lids shut (monocular deprivation or MD) for one week reduces the proportion of binocular units in the visual cortex. A sensitivity of cortical units in adult cats to MD can be produced by infusing exogenous monoamines into the visual cortex. Since LSD interacts with monoamines, we have examined the effects of chronic administration of LSD on the sensitivity to MD for cortical cells in adult cats. Cats were assigned randomly to one of four conditions: MD/LSD, MD/No-LSD, No-MD/LSD, No-MD/No-LSD. An osmotic minipump delivered either LSD or the vehicle solution alone during a one-week period of MD. The animals showed no obvious anomalies during the administration of the drug. After one week the response properties of single units in area 17 of the visual cortex were studied without knowledge of the contents of the individual minipumps. With the exception of ocular dominance, the response properties of units recorded in all animals did not differ from normal. In the control animals (MD/No-LSD, No-MD/LSD, No-MD/No-LSD) the average proportion of binocular cells was 78%; similar to that observed for normal adult cats. However, in the experimental animals, which received LSD during the period of MD, only 52% of the cells were binocular. Our results suggest that chronic intraventricular administration of LSD affects either directly or indirectly the sensitivity of cortical neurons to MD.

  6. Efficient Measurement of Shape Dissimilarity between 3D Models Using Z-Buffer and Surface Roving Method

    Directory of Open Access Journals (Sweden)

    In Kyu Park

    2002-10-01

    Full Text Available Estimation of the shape dissimilarity between 3D models is a very important problem in both computer vision and graphics for 3D surface reconstruction, modeling, matching, and compression. In this paper, we propose a novel method called surface roving technique to estimate the shape dissimilarity between 3D models. Unlike conventional methods, our surface roving approach exploits a virtual camera and Z-buffer, which is commonly used in 3D graphics. The corresponding points on different 3D models can be easily identified, and also the distance between them is determined efficiently, regardless of the representation types of the 3D models. Moreover, by employing the viewpoint sampling technique, the overall computation can be greatly reduced so that the dissimilarity is obtained rapidly without loss of accuracy. Experimental results show that the proposed algorithm achieves fast and accurate measurement of shape dissimilarity for different types of 3D object models.
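The core comparison in the abstract above, rendering both models into Z-buffers from a shared virtual camera and measuring per-pixel depth differences, can be illustrated with a toy example. The depth maps here are synthetic stand-ins for rendered buffers (the actual method samples many viewpoints and renders real 3D models); the function and its parameters are assumptions for illustration.

```python
import numpy as np

def depth_dissimilarity(z1, z2):
    """Mean absolute depth difference over pixels covered in both buffers.
    Background pixels are np.inf, as in a freshly initialized Z-buffer."""
    valid = np.isfinite(z1) & np.isfinite(z2)
    if not valid.any():
        return np.inf
    return float(np.mean(np.abs(z1[valid] - z2[valid])))

# Toy depth buffers for two 'models' seen from the same virtual camera:
# a flat patch at depth 5 vs the same patch tilted slightly.
h, w = 64, 64
z_a = np.full((h, w), np.inf)
z_b = np.full((h, w), np.inf)
z_a[16:48, 16:48] = 5.0
ramp = np.linspace(5.0, 5.5, 32)           # tilted surface: depth 5.0 -> 5.5
z_b[16:48, 16:48] = np.tile(ramp, (32, 1))

score = depth_dissimilarity(z_a, z_b)       # dissimilarity for this viewpoint
```

In the full method this per-viewpoint score would be averaged over the sampled viewpoints; the representation-independence comes from the fact that any renderable model produces a depth buffer.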

  7. Drogue tracking using 3D flash lidar for autonomous aerial refueling

    Science.gov (United States)

    Chen, Chao-I.; Stettner, Roger

    2011-06-01

    Autonomous aerial refueling (AAR) is an important capability for an unmanned aerial vehicle (UAV) to increase its flying range and endurance without increasing its size. This paper presents a novel tracking method that utilizes both 2D intensity and 3D point-cloud data acquired with a 3D Flash LIDAR sensor to establish relative position and orientation between the receiver vehicle and drogue during an aerial refueling process. Unlike classic vision-based sensors, a 3D Flash LIDAR sensor can provide 3D point-cloud data in real time without motion blur, in the day or night, and is capable of imaging through fog and clouds. The proposed method segments out the drogue through 2D analysis and estimates the center of the drogue from 3D point-cloud data for flight trajectory determination. A level-set front propagation routine is first employed to identify the target of interest and establish its silhouette information. Sufficient domain knowledge, such as the size of the drogue and the expected operable distance, is integrated into our approach to quickly eliminate unlikely target candidates. A statistical analysis along with a random sample consensus (RANSAC) is performed on the target to reduce noise and estimate the center of the drogue after all 3D points on the drogue are identified. The estimated center and drogue silhouette serve as the seed points to efficiently locate the target in the next frame.
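The final estimation step above, a RANSAC-style robust estimate of the drogue center from noisy 3D points, can be sketched generically. This robust-centroid variant and its parameters (`thresh`, sample size, iteration count) are illustrative assumptions, not the authors' exact RANSAC formulation.

```python
import numpy as np

def ransac_centroid(pts, n_iter=200, sample=5, thresh=0.2, rng=None):
    """Robust 3D centroid: repeatedly average a small random sample,
    score the candidate by inlier count, then refit on the best inliers."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(pts), size=sample, replace=False)
        c = pts[idx].mean(axis=0)
        inliers = np.linalg.norm(pts - c, axis=1) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set to reduce noise.
    return pts[best_inliers].mean(axis=0)

# Synthetic 'drogue' points clustered around (1, 2, 10) plus gross outliers.
rng = np.random.default_rng(1)
good = rng.normal([1.0, 2.0, 10.0], 0.05, (100, 3))
bad = rng.uniform(-20, 20, (20, 3))
pts = np.vstack([good, bad])
center = ransac_centroid(pts)
```

A plain mean of `pts` would be pulled far off by the uniform outliers; the consensus step rejects them before the final average.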

  8. Streaming video-based 3D reconstruction method compatible with existing monoscopic and stereoscopic endoscopy systems

    Science.gov (United States)

    Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul

    2012-06-01

    Compared to open surgery, minimal invasive surgery offers reduced trauma and faster recovery. However, lack of direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D-panoramas from endoscopic video streams providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D-environments from mono- and stereoscopic endoscopy. The resulting 3D-reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.

  9. Trapezius muscle activity increases during near work activity regardless of accommodation/vergence demand level.

    Science.gov (United States)

    Richter, H O; Zetterberg, C; Forsman, M

    2015-07-01

    To investigate if trapezius muscle activity increases over time during visually demanding near work. The vision task consisted of sustained focusing on a contrast-varying black and white Gabor grating. Sixty-six participants with a median age of 38 (range 19-47) fixated the grating from a distance of 65 cm (1.5 D) during four counterbalanced 7-min periods: binocularly through -3.5 D lenses, and monocularly through -3.5 D, 0 D and +3.5 D. Accommodation, heart rate variability and trapezius muscle activity were recorded in parallel. General estimating equation analyses showed that trapezius muscle activity increased significantly over time in all four lens conditions. A concurrent effect of accommodation response on trapezius muscle activity was observed with the minus lenses irrespective of whether incongruence between accommodation and convergence was present or not. Trapezius muscle activity increased significantly over time during the near work task. The increase in muscle activity over time may be caused by an increased need of mental effort and visual attention to maintain performance during the visual tasks to counteract mental fatigue.

  10. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    M. Rafiei

    2013-09-01

    Full Text Available Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close-range photogrammetry is widely utilized in many fields such as structural measurement, topographic surveying, and architectural and archaeological surveying. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both 3D geometry (structure) and camera pose (motion); it is commonly known as structure from motion (SfM). In this research a step-by-step approach to generating the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views; here the SIFT method is used for efficient image matching across large baselines. Next, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene, parallel lines do not remain parallel in the reconstruction; the results of the SfM computation are much more useful if a metric reconstruction is obtained, so multiple-view Euclidean reconstruction is applied and discussed. To refine the 3D points, a more general approach, namely bundle adjustment, is used. Finally, two real cases (an excavation and a tower) are reconstructed.
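Once camera matrices are known, an SfM pipeline recovers the 3D positions of matched points by triangulation. A minimal linear (DLT) triangulation for two views, on synthetic data with hypothetical intrinsics and poses, might look like this (an illustration of the standard technique, not the paper's full pipeline):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: 2D image points."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                      # null vector of A (homogeneous point)
    return X[:3] / X[3]

def proj(P, X):
    """Project a 3D point with a 3x4 projection matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic cameras: identity pose, and a 1-unit baseline along x.
K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])

X_true = np.array([0.4, -0.2, 5.0])
X_hat = triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true))
```

With noiseless correspondences the DLT solution is exact up to numerical precision; with real matches, points triangulated this way are the natural starting values for the bundle adjustment the abstract mentions.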

  11. A near-vision chart for children aged 3-5 years old:new designs and clinical applications

    Directory of Open Access Journals (Sweden)

    Yang-Qing Huang

    2014-06-01

    Full Text Available AIM: To introduce a new near-vision chart for children aged 3-5 years old and its clinical applications. METHODS: The new near-vision chart, which combines the Bailey-Lovie layout with a newly devised set of symmetry symbols, was designed based on the Weber-Fechner law. It consists of 15 rows of symmetry symbols, corresponding to a visual acuity range from 1.3 to 0.1 logMAR. The optotypes were red against a white background and were four specially shaped basic geometric symbols: circle, square, triangle, and cross, which matched preschool children's cognitive level. A regular geometric progression of optotype sizes and spacing was employed to arrange the 15 lines. The progression rate of the optotype size between two lines was 1.2589, and two smaller groups of optotypes ranging from 0.7 to -0.1 logMAR were included for repetitive testing. Near visual acuity was recorded in logMAR or decimal notation, and the testing distance was 25 cm. RESULTS: This new near-vision chart, with pediatric acuity test optotypes consisting of 4 different symbols (triangle, square, cross, and circle), met national and international eye chart design guidelines. When performing near visual acuity assessment in preschoolers (3-5 years old), it overcame their inability to recognize the letters of the alphabet and the difficulty of designating the direction of black abstract symbols such as the tumbling 'E' or Landolt 'C', in which subjects are prone to lose interest. Near vision may be recorded in different notations, decimal acuity and logMAR, and these two notations can easily be converted into each other in the new near-vision chart. The measurements of this new chart not only showed a significant correlation and good consistency with the Chinese national standard logarithmic near-vision chart (r=0.932, P<0.01), but also indicated good test-retest reliability (89% of retest scores were within 0.1 logMAR units of the initial test score) and a high response rate
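The chart arithmetic in the abstract above is easy to verify: the 1.2589 progression rate is 10^0.1 (one 0.1 logMAR step per line), and decimal acuity is the reciprocal of the minimum angle of resolution, so decimal = 10^(-logMAR). A small sketch:

```python
import math

def logmar_to_decimal(logmar):
    """Decimal acuity is the reciprocal of the minimum angle of
    resolution, so decimal = 10 ** (-logMAR)."""
    return 10 ** (-logmar)

def decimal_to_logmar(decimal):
    return -math.log10(decimal)

step = 10 ** 0.1    # optotype size ratio between adjacent chart lines
```

For example, 0.0 logMAR corresponds to decimal acuity 1.0, and the 15 lines from 1.3 down to -0.1 logMAR each shrink by the factor `step`.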

  12. Reconfigurable On-Board Vision Processing for Small Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    James K. Archibald

    2006-12-01

    Full Text Available This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D) map of the environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver high levels of performance, especially in embedded settings where space and power are significant concerns.

  13. Reconfigurable On-Board Vision Processing for Small Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Fife WadeS

    2007-01-01

    Full Text Available This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D) map of the environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver high levels of performance, especially in embedded settings where space and power are significant concerns.

  14. Vision based monitoring and characterisation of combustion flames

    International Nuclear Information System (INIS)

    Lu, G; Gilabert, G; Yan, Y

    2005-01-01

    With the advent of digital imaging and image processing techniques, vision-based monitoring and characterisation of combustion flames have developed rapidly in recent years. This paper presents a short review of the latest developments in this area. The techniques covered in this review are classified into two main categories: two-dimensional (2D) and 3D imaging techniques. Experimental results obtained on both laboratory- and industrial-scale combustion rigs are presented. Future developments in this area are also discussed.

  15. The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models

    Science.gov (United States)

    Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain

    2014-05-01

    . The 3D model integrated into a GIS is now a precious means of communication for promoting the site. Accessible to all, including remote visitors, it allows one to discover the castle and its history in an educational and relevant way. From an archaeological point of view, the 3D model brings an overall view of, and perspective on, the constitution of the site that a 2D document cannot easily offer. 3D navigation and the integration of 2D data into the model allow the vestiges to be analysed in another way, contributing to the faster formulation of new hypotheses. Complementary to other methods already exploited in archaeology, analysis through 3D visualisation offers scientists a significant saving of time, which they can then dedicate to the more thorough study of hypotheses that had been set aside. In parallel, we created several panoramas and set up a virtual, interactive visit of the site. In order to perpetuate this project and to offer future users the means to continue and update this study, we tested and established processing methodologies. We were thus able to derive clear, orderly procedures applicable both to the case of Engelbourg and to other similar studies. Finally, some hypotheses permit a virtual reconstruction of first versions of the original state of the castle.

  16. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    Science.gov (United States)

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

    This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors with one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.

  17. Three-dimensional (3-D) video systems: bi-channel or single-channel optics?

    Science.gov (United States)

    van Bergen, P; Kunert, W; Buess, G F

    1999-11-01

    This paper presents the results of a comparison between two different three-dimensional (3-D) video systems, one with single-channel optics, the other with bi-channel optics. The latter integrates two lens systems, each transferring one half of the stereoscopic image; the former uses only one lens system, similar to a two-dimensional (2-D) endoscope, which transfers the complete stereoscopic picture. In our training centre for minimally invasive surgery, surgeons were involved in basic and advanced laparoscopic courses using both a 2-D system and the two 3-D video systems. They completed analog scale questionnaires in order to record a subjective impression of the relative convenience of operating in 2-D and 3-D vision, and to identify perceived deficiencies in the 3-D system. As an objective test, different experimental tasks were developed, in order to measure performance times and to count pre-defined errors made while using the two 3-D video systems and the 2-D system. Using the bi-channel optical system, the surgeon has a heightened spatial perception, and can work faster and more safely than with a single-channel system. However, single-channel optics allow the use of an angulated endoscope, and the free rotation of the optics relative to the camera, which is necessary for some operative applications.

  18. A 3-D mixed-reality system for stereoscopic visualization of medical dataset.

    Science.gov (United States)

    Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco

    2009-11-01

    We developed a simple, light, and cheap 3-D visualization device based on mixed reality that can be used by physicians to see preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," which are created by mixing 3-D virtual models of anatomies, obtained by processing preoperative volumetric radiological images (computed tomography or MRI), with real patient live images grabbed by means of cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. The cameras are mounted at the positions of the user's eyes and grab live images of the patient from the same point of view as the user. The system does not use any external tracker to detect movements of the user or the patient. Tracking of the user's head movements and alignment of the virtual patient with the real one are done using machine vision methods applied to pairs of live images. Experimental results, concerning frame rate and alignment precision between the virtual and real patient, demonstrate that the machine vision methods used for localization are appropriate for the specific application and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.

  19. Contribution to the tracking and the 3D reconstruction of scenes composed of tori from image sequences acquired by a moving camera

    International Nuclear Information System (INIS)

    Naudet, S.

    1997-01-01

    The three-dimensional perception of the environment is often necessary for a robot to correctly perform its tasks. One solution, based on dynamic vision, consists in analysing time-varying monocular images to estimate the spatial geometry of the scene. This thesis deals with the reconstruction of tori by dynamic vision. Though this object class is restrictive, it enables us to tackle the problem of reconstructing the bent pipes usually encountered in industrial environments. The proposed method is based on the evolution of the apparent contours of objects in the sequence. Using the expression of torus limb boundaries, it is possible to recursively estimate the object's three-dimensional parameters by minimising the error between the predicted projected contours and the image contours. This process, which is performed by a Kalman filter, does not need precise knowledge of the camera displacement or any matching of the two limbs belonging to the same object. To complete this work, temporal tracking of objects which deals with occlusion situations is proposed. The approach consists in modeling and interpreting the apparent motion of objects in the successive images. The motion interpretation, based on a simplified representation of the scene, allows pertinent three-dimensional information to be recovered, which is used to manage occlusion situations. Experiments on synthetic and real images prove the validity of the tracking and reconstruction processes. (author)
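The recursive estimation the abstract describes, minimising the error between predicted and observed contours with a Kalman filter, can be illustrated with a generic linear measurement update. This is a textbook sketch with a toy two-parameter state and a hypothetical measurement model, not the paper's actual torus parameterisation:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One measurement update: blend the predicted state x (covariance P)
    with an observation z under a linear measurement model H, noise R."""
    y = z - H @ x                    # innovation: observed minus predicted
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: refine a 2-parameter shape descriptor from one noisy
# 1-D image measurement of its first component.
x = np.array([1.0, 0.5])
P = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
x, P = kalman_update(x, P, z=np.array([1.2]), H=H, R=R)
print(x)   # first component pulled toward the measurement, second untouched
```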

  20. Contextual Multi-Scale Region Convolutional 3D Network for Activity Detection

    KAUST Repository

    Bai, Yancheng

    2018-01-28

    Activity detection is a fundamental problem in computer vision. Detecting activities of different temporal scales is particularly challenging. In this paper, we propose the contextual multi-scale region convolutional 3D network (CMS-RC3D) for activity detection. To deal with the inherent temporal scale variability of activity instances, the temporal feature pyramid is used to represent activities of different temporal scales. On each level of the temporal feature pyramid, an activity proposal detector and an activity classifier are learned to detect activities of specific temporal scales. Temporal contextual information is fused into activity classifiers for better recognition. More importantly, the entire model at all levels can be trained end-to-end. Our CMS-RC3D detector can deal with activities at all temporal scale ranges with only a single pass through the backbone network. We test our detector on two public activity detection benchmarks, THUMOS14 and ActivityNet. Extensive experiments show that the proposed CMS-RC3D detector outperforms state-of-the-art methods on THUMOS14 by a substantial margin and achieves comparable results on ActivityNet despite using a shallow feature extractor.

  1. Contextual Multi-Scale Region Convolutional 3D Network for Activity Detection

    KAUST Repository

    Bai, Yancheng; Xu, Huijuan; Saenko, Kate; Ghanem, Bernard

    2018-01-01

    Activity detection is a fundamental problem in computer vision. Detecting activities of different temporal scales is particularly challenging. In this paper, we propose the contextual multi-scale region convolutional 3D network (CMS-RC3D) for activity detection. To deal with the inherent temporal scale variability of activity instances, the temporal feature pyramid is used to represent activities of different temporal scales. On each level of the temporal feature pyramid, an activity proposal detector and an activity classifier are learned to detect activities of specific temporal scales. Temporal contextual information is fused into activity classifiers for better recognition. More importantly, the entire model at all levels can be trained end-to-end. Our CMS-RC3D detector can deal with activities at all temporal scale ranges with only a single pass through the backbone network. We test our detector on two public activity detection benchmarks, THUMOS14 and ActivityNet. Extensive experiments show that the proposed CMS-RC3D detector outperforms state-of-the-art methods on THUMOS14 by a substantial margin and achieves comparable results on ActivityNet despite using a shallow feature extractor.

  2. 2D and 3D object measurement for control and quality assurance in the industry

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    The subject of this dissertation is object measurement in the industry by use of computer vision. In the first part of the dissertation, the project is defined in an industrial frame. The reader is introduced to Odense Steel Shipyard and its current level of automation. The presentation gives an impression of the potential of vision technology in shipbuilding. The next chapter describes different important properties of industrial vision cameras. The presentation is based on practical experience obtained during the Ph.D. project. The geometry that defines the link between the observed world and the projected image is the subject of the two next chapters. The first chapter gives a short introduction to projective algebra, which is extremely useful for modelling the image projection and the relation between more images of the same object viewed from different positions. It provides a basis...

  3. A Robust Vision Module for Humanoid Robotic Ping-Pong Game

    Directory of Open Access Journals (Sweden)

    Xiaopeng Chen

    2015-04-01

    Full Text Available Developing a vision module for a humanoid ping-pong game is challenging due to the spin and the non-linear rebound of the ping-pong ball. In this paper, we present a robust predictive vision module to overcome these problems. The hardware of the vision module is composed of two stereo camera pairs, with each pair detecting the 3D positions of the ball on one half of the ping-pong table. The software of the vision module divides the trajectory of the ball into four parts and uses the perceived trajectory in the first part to predict the other parts. In particular, it uses an aerodynamic model to predict the trajectories of the ball in the air and a novel non-linear rebound model to predict the change in the ball's motion during rebound. The average prediction error of our vision module at the ball returning point is less than 50 mm, a value small enough for standard sized ping-pong rackets. Its average processing speed is 120 fps. The precision and efficiency of our vision module enable two humanoid robots to play ping-pong continuously for more than 200 rounds.
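The flight-phase prediction can be sketched with a simple drag-affected ballistic integrator. The lumped drag constant and time step below are assumptions for illustration; the paper's aerodynamic model is more elaborate and also handles spin, which is ignored here:

```python
import math

G = 9.81    # gravity, m/s^2
KD = 0.14   # lumped quadratic-drag coefficient (1/m), assumed for illustration

def predict(pos, vel, dt=0.001, t_end=0.5):
    """Euler-integrate a drag-affected ballistic trajectory (spin ignored)."""
    x, y, z = pos
    vx, vy, vz = vel
    t = 0.0
    while t < t_end:
        speed = math.sqrt(vx * vx + vy * vy + vz * vz)
        # Quadratic air drag opposing the velocity, plus gravity on z.
        ax = -KD * speed * vx
        ay = -KD * speed * vy
        az = -KD * speed * vz - G
        vx += ax * dt; vy += ay * dt; vz += az * dt
        x += vx * dt; y += vy * dt; z += vz * dt
        t += dt
    return (x, y, z)

# Ball launched from 0.3 m height at 4 m/s forward, 1 m/s upward.
print(predict((0.0, 0.0, 0.3), (4.0, 0.0, 1.0)))
```

With drag, the ball covers noticeably less than the drag-free 2 m in the first half second, which is exactly the effect the rebound/flight models must capture.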

  4. Study on portable optical 3D coordinate measuring system

    Science.gov (United States)

    Ren, Tongqun; Zhu, Jigui; Guo, Yinbiao

    2009-05-01

    A portable optical 3D coordinate measuring system based on digital Close Range Photogrammetry (CRP) technology and binocular stereo vision theory is researched. Three infrared LEDs with high stability are set on a handheld target to provide measuring features and establish the target coordinate system. Ray-intersection-based field directional calibration is performed for the intersecting binocular measurement system, composed of two cameras, using a reference ruler. The handheld target, controlled by Bluetooth wireless communication, is moved freely to implement contact measurement. The position of the ceramic contact ball is pre-calibrated accurately. The coordinates of the target feature points are obtained by the binocular stereo vision model from the stereo image pairs taken by the cameras. Combining radius compensation for the contact ball with residual error correction, an object point can be resolved by transfer of axes using the target coordinate system as an intermediary. This system is suitable for on-field large-scale measurement because of its excellent portability, high precision, wide measuring volume, great adaptability and satisfying degree of automation. Tests show that the measuring precision is close to ±0.1 mm/m.
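The binocular stereo step, recovering a 3D feature point from two calibrated cameras, can be illustrated with midpoint triangulation of two viewing rays. The camera geometry below is hypothetical:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: the 3D point closest to two viewing rays
    (camera centres c1, c2 with direction vectors d1, d2)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for ray parameters s, t minimising |(c1 + s*d1) - (c2 + t*d2)|.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    s, t = np.linalg.solve(A, b)
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# Two cameras 0.2 m apart, both sighting a feature point 1 m in front.
p = triangulate_midpoint(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                         np.array([0.2, 0.0, 0.0]), np.array([-0.2, 0.0, 1.0]))
print(p)   # ~[0, 0, 1]
```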

  5. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    Science.gov (United States)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor: an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: one in which the passive stereo vision helps the active vision, and one in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  6. Photogrammetric 3D reconstruction using mobile imaging

    Science.gov (United States)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM to estimate the pose of all photos by Structure-from-Motion and then uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  7. Manufacturing Vision Development – Process and Dialogue

    DEFF Research Database (Denmark)

    Dukovska-Popovska, Iskra

    This Ph.D. project has been conducted in the context of the PRODUCTION+5 methodology for developing manufacturing visions for companies, and is related to the Experimental Laboratory for Production. Both have been established in the Center for Industrial Production. The empirical parts of the research involve case studies of three companies that are part of the MCD-process. The cases primarily focus on the process and the dialogue during the manufacturing vision development.

  8. OpenCV 3.0 computer vision with Java

    CERN Document Server

    Baggio, Daniel Lélis

    2015-01-01

    If you are a Java developer, student, researcher, or hobbyist wanting to create computer vision applications in Java then this book is for you. If you are an experienced C/C++ developer who is used to working with OpenCV, you will also find this book very useful for migrating your applications to Java. All you need is basic knowledge of Java, with no prior understanding of computer vision required, as this book will give you clear explanations and examples of the basics.

  9. Design of and normative data for a new computer based test of ocular torsion.

    Science.gov (United States)

    Vaswani, Reena S; Mudgil, Ananth V

    2004-01-01

    To evaluate a new clinically practical and dynamic test for quantifying torsional binocular eye alignment changes that may occur in the change from monocular to binocular viewing conditions. The test was developed using a computer with Lotus Freelance software, and binoculars with prisms and colored filters. The subject looks through the binoculars at a computer screen two meters away. For monocular vision, six concentric blue circles, a blue horizontal line, and a tilted red line were displayed on the screen. For binocular vision, white circles replaced the blue circles. The subject was asked to orient the lines parallel to each other. The difference in tilt (in degrees) between the subjective parallel and the fixed horizontal position is the torsional alignment of the eye. The time to administer the test was approximately two minutes. In 70 normal subjects, average age 16 years, the mean cyclodeviation tilt in the right eye was 0.6 degrees for monocular viewing conditions and 0.7 degrees for binocular viewing conditions, with a standard deviation of approximately one degree. There was no statistically significant difference between monocular and binocular viewing. This is a simple, non-invasive, computer-based test with potential for use in the diagnosis of cyclovertical strabismus. Currently, there is no commercially available test for this purpose.

  10. Boosting Economic Growth Through Advanced Machine Vision

    OpenAIRE

    MAAD, Soha; GARBAYA, Samir; AYADI, Nizar; BOUAKAZ, Saida

    2012-01-01

    In this chapter, we overview the potential of machine vision and related technologies in various application domains of critical importance for economic growth and prospect. Considered domains include healthcare, energy and environment, finance, and industrial innovation. Visibility technologies considered encompass augmented and virtual reality, 3D technologies, and media content authoring tools and technologies. We overview the main challenges facing the application domains and discuss the ...

  11. Data fusion for a vision-aided radiological detection system: Calibration algorithm performance

    Science.gov (United States)

    Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas

    2018-05-01

    In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments was devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor, and a Velodyne HDL-32E High Definition LiDAR sensor, a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube, with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is utilized to calibrate the relative 3D locations of the two different types of sensors without the need to measure them by hand, thus preventing operator manipulation and human error. The algorithm can also account for facility-dependent deviation from ideal data fusion correlation. Use of the vision sensor to determine the location of a sensor would also limit the possible locations, and it does not allow for room dependence (facility-dependent deviation) to generate a detector pseudo-location to be used for later data analysis. Using manually measured source location data, our algorithm predicted the detector location to within an average calibration-difference of 20 cm of its actual location. Calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average
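The calibration-difference metric defined above is a plain Euclidean distance between the algorithm-predicted and hand-measured detector locations. A minimal sketch with hypothetical coordinates chosen to reproduce the 20 cm average:

```python
import math

def calibration_difference(predicted, measured):
    """Euclidean distance between the algorithm-predicted and the
    hand-measured detector locations (the abstract's metric)."""
    return math.dist(predicted, measured)

# Hypothetical coordinates in metres, offset by 20 cm as reported on average.
pred = (1.00, 2.00, 0.50)
meas = (1.12, 2.16, 0.50)
print(round(calibration_difference(pred, meas), 3))   # 0.2
```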

  12. First Experiences with Kinect v2 Sensor for Close Range 3d Modelling

    Science.gov (United States)

    Lachat, E.; Macher, H.; Mittet, M.-A.; Landes, T.; Grussenmeyer, P.

    2015-02-01

    RGB-D cameras, also known as range imaging cameras, are a recent generation of sensors. As they are suitable for measuring distances to objects at high frame rates, such sensors are increasingly used for 3D acquisition, and more generally for applications in robotics and computer vision. This kind of sensor became popular especially after the Kinect v1 (Microsoft) arrived on the market in November 2010. In July 2014, Microsoft released a new sensor, the Kinect for Windows v2, based on a different technology from its first device. However, due to its initial development for video games, the quality assessment of this new device for 3D modelling represents a major investigation axis. In this paper, first experiences with the Kinect v2 sensor are related, and its suitability for close range 3D modelling is investigated. For this purpose, error sources in the output data as well as a calibration approach are presented.

  13. FIRST EXPERIENCES WITH KINECT V2 SENSOR FOR CLOSE RANGE 3D MODELLING

    Directory of Open Access Journals (Sweden)

    E. Lachat

    2015-02-01

    Full Text Available RGB-D cameras, also known as range imaging cameras, are a recent generation of sensors. As they are suitable for measuring distances to objects at high frame rates, such sensors are increasingly used for 3D acquisition, and more generally for applications in robotics and computer vision. This kind of sensor became popular especially after the Kinect v1 (Microsoft) arrived on the market in November 2010. In July 2014, Microsoft released a new sensor, the Kinect for Windows v2, based on a different technology from its first device. However, due to its initial development for video games, the quality assessment of this new device for 3D modelling represents a major investigation axis. In this paper, first experiences with the Kinect v2 sensor are related, and its suitability for close range 3D modelling is investigated. For this purpose, error sources in the output data as well as a calibration approach are presented.

  14. Visual performance after the implantation of a new trifocal intraocular lens

    Directory of Open Access Journals (Sweden)

    Vryghem JC

    2013-10-01

    Full Text Available Jérôme C Vryghem, Steven Heireman (Brussels Eye Doctors, Brussels, Belgium; Clinique Saint-Jean, Brussels, Belgium). Purpose: To evaluate the subjective and objective visual results after the implantation of a new trifocal diffractive intraocular lens. Methods: A new trifocal diffractive intraocular lens was designed combining two superimposed diffractive profiles: one with a +1.75 diopter (D) addition for intermediate vision and the other with a +3.50 D addition for near vision. Fifty eyes of 25 patients, operated on by one surgeon, are included in this study. The uncorrected and best distance-corrected monocular and binocular near, intermediate, and distance visual acuities, contrast sensitivity, and defocus curves were measured 6 months postoperatively. In addition to the standard clinical follow-up, a questionnaire evaluating individual satisfaction and quality of life was submitted to the patients. Results: The mean age of patients at the time of surgery was 70 ± 10 years. The mean uncorrected and corrected monocular distance visual acuities (VA) were LogMAR 0.06 ± 0.10 and LogMAR 0.00 ± 0.08, respectively. The outcome for the binocular uncorrected distance visual acuity was almost the same (LogMAR −0.04 ± 0.09). LogMAR −0.10 ± 0.15 and 0.02 ± 0.06 were measured for the binocular uncorrected intermediate and near VA, respectively. The distance-corrected visual acuity was maintained in mesopic conditions. The contrast sensitivity was similar to that obtained after implantation of a bifocal intraocular lens and did not decrease in mesopic conditions. The binocular defocus curve confirms good VA even in the intermediate distance range, with a moderate decrease of less than LogMAR 0.2 at −1.5 D with respect to the best distance VA at 0 D defocus. Patient satisfaction was high. No discrepancy between the objective and subjective outcomes was evidenced. Conclusion: The introduction of a third focus in diffractive multifocal
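The two addition powers of the trifocal design map to focusing distances through the basic vergence relation, distance in metres = 1/diopters; this is standard optics applied to the abstract's figures, not a computation from the paper:

```python
def focal_distance_m(addition_diopters: float) -> float:
    """Focusing distance (metres) for a given addition power (diopters)."""
    return 1.0 / addition_diopters

print(round(focal_distance_m(1.75), 3))  # 0.571 m: the intermediate focus
print(round(focal_distance_m(3.50), 3))  # 0.286 m: the near focus
```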

  15. Handheld pose tracking using vision-inertial sensors with occlusion handling

    Science.gov (United States)

    Li, Juan; Slembrouck, Maarten; Deboeverie, Francis; Bernardos, Ana M.; Besada, Juan A.; Veelaert, Peter; Aghajan, Hamid; Casar, José R.; Philips, Wilfried

    2016-07-01

    Tracking of a handheld device's three-dimensional (3-D) position and orientation is fundamental to various application domains, including augmented reality (AR), virtual reality, and interaction in smart spaces. Existing systems still offer limited performance in terms of accuracy, robustness, computational cost, and ease of deployment. We present a low-cost, accurate, and robust system for handheld pose tracking using fused vision and inertial data. The integration of measurements from embedded accelerometers reduces the number of unknown parameters in the six-degree-of-freedom pose calculation. The proposed system requires two light-emitting diode (LED) markers to be attached to the device, which are tracked by external cameras through an algorithm robust against illumination changes. Three data fusion methods have been proposed: a triangulation-based stereo-vision system, a constraint-based stereo-vision system with occlusion handling, and a triangulation-based multivision system. Real-time demonstrations of the proposed system applied to AR and 3-D gaming are also included. The accuracy assessment of the proposed system is carried out by comparison with data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system achieves high accuracy of a few centimeters in position estimation and a few degrees in orientation estimation.

  16. Vision Based Navigation for Autonomous Cooperative Docking of CubeSats

    Science.gov (United States)

    Pirat, Camille; Ankersen, Finn; Walker, Roger; Gass, Volker

    2018-05-01

    A realistic rendezvous and docking navigation solution applicable to CubeSats is investigated. A scalability analysis of the ESA Autonomous Transfer Vehicle Guidance, Navigation & Control (GNC) performances and the Russian docking system shows that the docking of two CubeSats would require a lateral control performance of the order of 1 cm. Line-of-sight constraints and multipath effects affecting Global Navigation Satellite System (GNSS) measurements in close proximity prevent the use of this sensor for the final approach. This consideration and the high control accuracy requirement led to the use of vision sensors for the final 10 m of the rendezvous and docking sequence. A single monocular camera on the chaser satellite and various sets of Light-Emitting Diodes (LEDs) on the target vehicle ensure the observability of the system throughout the approach trajectory. The simple and novel formulation of the measurement equations allows rotations to be unambiguously differentiated from translations between the target and chaser docking ports and allows a navigation performance better than 1 mm at docking. Furthermore, the non-linear measurement equations can be solved to provide an analytic navigation solution. This solution can be used to monitor the navigation filter solution and ensure its stability, adding an extra layer of robustness for autonomous rendezvous and docking. The navigation filter initialization is addressed in detail. The proposed method is able to differentiate LED signals from Sun reflections, as demonstrated by experimental data. The navigation filter uses comprehensive linearised coupled rotation/translation dynamics describing the chaser-to-target docking port motion. The handover between GNSS and vision sensor measurements is assessed. The performance of the navigation function along the approach trajectory is discussed.

  17. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    demonstrated 3D imaging based on range-gated imaging. Robot vision is a key technology for remotely monitoring structural safety in radiation areas of the nuclear industry. In particular, visualization in low-visibility areas, such as smoke- and fog-filled areas, is essential for monitoring structural safety in smoke-filled emergency areas. In this paper, a range acquisition technique to discriminate objects is developed. The developed technique for acquiring object range images is applied to a range-gated vision system. Visualization experiments were carried out to detect objects in a low-visibility fog environment. The experimental results of this new vision system are described in this paper.

  18. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    International Nuclear Information System (INIS)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    demonstrated 3D imaging based on range-gated imaging. Robot vision is a key technology for remotely monitoring structural safety in radiation areas of the nuclear industry. In particular, visualization in low-visibility areas, such as smoke- and fog-filled areas, is essential for monitoring structural safety in smoke-filled emergency areas. In this paper, a range acquisition technique to discriminate objects is developed. The developed technique for acquiring object range images is applied to a range-gated vision system. Visualization experiments were carried out to detect objects in a low-visibility fog environment. The experimental results of this new vision system are described in this paper.

  19. Static and dynamic postural control in low-vision and normal-vision adults.

    Science.gov (United States)

    Tomomitsu, Mônica S V; Alonso, Angelica Castilho; Morimoto, Eurica; Bobbio, Tatiana G; Greve, Julia M D

    2013-04-01

    This study aimed to evaluate the influence of reduced visual information on postural control by comparing low-vision and normal-vision adults in static and dynamic conditions. Twenty-five low-vision subjects and twenty-five normal-sighted adults were evaluated for static and dynamic balance using four protocols: 1) the Modified Clinical Test of Sensory Interaction on Balance on firm and foam surfaces with eyes opened and closed; 2) Unilateral Stance with eyes opened and closed; 3) Tandem Walk; and 4) Step Up/Over. The results showed that the low-vision group presented greater body sway than the normal-vision group during balance on a foam surface (p≤0.001), in the Unilateral Stance test for both limbs (p≤0.001), and in the Tandem Walk test, in which the low-vision group showed greater step width (p≤0.001) and slower gait speed (p≤0.004). In the Step Up/Over task, low-vision participants were more cautious in stepping up (right p≤0.005 and left p≤0.009) and in executing the movement (p≤0.001). These findings suggest that visual feedback is crucial for determining balance, especially for dynamic tasks and on foam surfaces. Low-vision individuals had worse postural stability than normal-vision adults in terms of dynamic tests and balance on foam surfaces.

  20. Magnitude, precision, and realism of depth perception in stereoscopic vision.

    Science.gov (United States)

    Hibbard, Paul B; Haines, Alice E; Hornsey, Rebecca L

    2017-01-01

    Our perception of depth is substantially enhanced by the fact that we have binocular vision. This provides us with more precise and accurate estimates of depth and an improved qualitative appreciation of the three-dimensional (3D) shapes and positions of objects. We assessed the link between these quantitative and qualitative aspects of 3D vision. Specifically, we wished to determine whether the realism of apparent depth from binocular cues is associated with the magnitude or precision of perceived depth and the degree of binocular fusion. We presented participants with stereograms containing randomly positioned circles and measured how the magnitude, realism, and precision of depth perception varied with the size of the disparities presented. We found that as the size of the disparity increased, the magnitude of perceived depth increased, while the precision with which observers could make depth discrimination judgments decreased. Beyond an initial increase, depth realism decreased with increasing disparity magnitude. This decrease occurred well below the disparity limit required to ensure comfortable viewing.
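The geometric link between disparity magnitude and depth in the study above can be sketched with the standard small-angle approximation. This is generic viewing geometry with an assumed 6.5 cm interocular distance, not parameters from the experiment:

```python
import math

def depth_interval(disparity_rad, distance_m, iod_m=0.065):
    """Depth interval dz signalled by a retinal disparity (in radians) at
    viewing distance D with interocular distance I, via the small-angle
    approximation dz ≈ disparity * D**2 / I. The 6.5 cm interocular
    distance is an assumed typical value."""
    return disparity_rad * distance_m ** 2 / iod_m

# One arcminute of disparity viewed at 1 m corresponds to a depth
# interval of roughly 4.5 mm.
one_arcmin = (1.0 / 60.0) * math.pi / 180.0
dz = depth_interval(one_arcmin, 1.0)
```

The quadratic dependence on viewing distance is why the same disparity signals much larger depth intervals at far distances, and why large disparities quickly exceed the fusion range discussed in the abstract.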

  1. Computer and machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2012-01-01

    Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition has brought in more of the concepts and applications of computer vision, making it a very comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: Practical examples and case studies give the 'ins and outs' of developing real-world vision systems, giving engineers the realities of implementing the principles in practice New chapters containing case studies on surveillance and driver assistance systems give practical methods on these cutting-edge applications in computer vision Necessary mathematics and essential theory are made approachable by careful explanations and well-il...

  2. Design and Assessment of a Machine Vision System for Automatic Vehicle Wheel Alignment

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2013-05-01

    Full Text Available Abstract Wheel alignment, consisting of properly checking the wheel characteristic angles against vehicle manufacturers' specifications, is a crucial task in the automotive field since it prevents irregular tyre wear and affects vehicle handling and safety. In recent years, systems based on Machine Vision have been widely studied in order to automatically detect wheels' characteristic angles. In order to overcome the limitations of existing methodologies, due to measurement equipment being mounted onto the wheels, the present work deals with the design and assessment of a 3D machine vision-based system for the contactless reconstruction of vehicle wheel geometry, with particular reference to characteristic planes. Such planes, properly referenced to a global coordinate system, are used for determining wheel angles. The effectiveness of the proposed method was tested against a set of measurements carried out using a commercial 3D scanner; the absolute average error in measuring toe and camber angles with the machine vision system was fully compatible with the expected accuracy of wheel alignment systems.
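Once the wheel plane has been reconstructed and referenced to a global frame, the characteristic angles follow from its unit normal. The sketch below uses an assumed vehicle frame (x forward, y lateral, z up) and sign convention for illustration, not the paper's exact formulation:

```python
import math

def wheel_angles_deg(normal):
    """Toe and camber (degrees) from the unit normal of a reconstructed
    wheel plane, in an assumed vehicle frame: x forward, y lateral, z up.
    Toe is the rotation of the wheel axis about the vertical axis;
    camber is its tilt out of the horizontal plane."""
    nx, ny, nz = normal
    toe = math.degrees(math.atan2(nx, ny))
    camber = math.degrees(math.atan2(nz, math.hypot(nx, ny)))
    return toe, camber

# A wheel plane whose normal is rotated 1 degree about z: pure toe, no camber.
toe, camber = wheel_angles_deg(
    (math.sin(math.radians(1.0)), math.cos(math.radians(1.0)), 0.0))
```

Expressing the angles this way makes the accuracy requirement concrete: a small error in the fitted plane normal translates directly into fractions of a degree of toe or camber.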

  3. Creating a vision for the future. Long-term R and D on a short sighted electricity market

    International Nuclear Information System (INIS)

    Markussen, P.; Darsoe, L.

    2005-01-01

    Historically, well-established networks among politicians, power companies, industry, and research institutions have successfully dominated innovation in the energy sector. As a consequence of the liberalization in the late 90s, long-term investments have been reduced, and because of increased competition, R and D in power companies around Europe is dominated by short-term projects focusing on the technological performance and efficiency of the power stations. The energy sector is, however, still faced with long-term problems such as security of supply, environmental responsibilities and economic performance, and these challenges demand new solutions, which, in our opinion, should be obtained through collaboration and co-creation. This calls for social innovation based on new types of relations between research institutions, politicians and the energy supply sector. Thus the goals of this paper are to: 1) suggest a preject phase (Darsoe, 2001), where it is possible and legitimate for the stakeholders to discuss long-term visions that encompass a diversity of technologies, and 2) use scenario techniques as tools for conceptualizing and prototyping this vision. The main question is: How can we create a long-term vision for the Danish energy system that is meaningful to multiple stakeholders? (au)

  4. Creating a vision for the future. Long-term R and D on a short sighted electricity market

    Energy Technology Data Exchange (ETDEWEB)

    Markussen, P. [Elsam (Denmark); Darsoe, L. [Learning Lab. Denmark (Denmark)

    2005-06-01

    Historically, well-established networks among politicians, power companies, industry, and research institutions have successfully dominated innovation in the energy sector. As a consequence of the liberalization in the late 90s, long-term investments have been reduced, and because of increased competition, R and D in power companies around Europe is dominated by short-term projects focusing on the technological performance and efficiency of the power stations. The energy sector is, however, still faced with long-term problems such as security of supply, environmental responsibilities and economic performance, and these challenges demand new solutions, which, in our opinion, should be obtained through collaboration and co-creation. This calls for social innovation based on new types of relations between research institutions, politicians and the energy supply sector. Thus the goals of this paper are to: 1) suggest a preject phase (Darsoe, 2001), where it is possible and legitimate for the stakeholders to discuss long-term visions that encompass a diversity of technologies, and 2) use scenario techniques as tools for conceptualizing and prototyping this vision. The main question is: How can we create a long-term vision for the Danish energy system that is meaningful to multiple stakeholders? (au)

  5. Modelado de sistemas de visión en 2D y 3D: un enfoque hacia el control de robots manipuladores

    Directory of Open Access Journals (Sweden)

    Maximiliano Bueno López

    2013-09-01

    Full Text Available Visual servoing of robot manipulators has been an evolving issue in recent years, especially in applications where the environment is not structured or where access is difficult for operators. To design these controllers, previous simulations are important to adjust parameters or implement a behavioral approach. In this paper we present two different models of vision systems. The models focus on applications in the field of manipulator-robot control. The modeling of video cameras is obtained by using perspective projections. To validate the models, two servo visual controllers in 2D and 3D are simulated.
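The camera modelling via perspective projection described above reduces to the pinhole model: a world point is transformed into the camera frame and projected through the intrinsic matrix. A minimal sketch with illustrative intrinsics and pose, not the calibration from the paper:

```python
import numpy as np

def project_points(K, R, t, pts_world):
    """Pinhole perspective projection: pixel = dehomogenize(K (R X + t)).
    K is the 3x3 intrinsic matrix; (R, t) is the world-to-camera pose."""
    cam = pts_world @ R.T + t        # world -> camera frame
    pix = cam @ K.T                  # apply intrinsics
    return pix[:, :2] / pix[:, 2:3]  # perspective divide

# Illustrative camera: 500 px focal length, principal point (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)

# A point on the optical axis projects to the principal point.
uv = project_points(K, R, t, np.array([[0.0, 0.0, 2.0]]))
```

In a visual-servoing simulation this projection is evaluated at every step to generate the synthetic image features the 2D or 3D controller acts on.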

  6. Automated Vision Test Development and Validation

    Science.gov (United States)

    2016-11-01

    crystal display monitor (NEC Multisync, P232W) at 1920x1080 resolution. Proper calibration was confirmed using a spot photometer/colorimeter (X-Rite i1...visual input to the right and left eye was achieved using liquid crystal display shuttered glasses (NVIDIA 3D Vision 2). The stereo target (Figure 4...threshold on the automated tasks. • Subjects had a lower (better) threshold on color testing for all cone types using the OCCT due to a ceiling

  7. ShipMo3D Version 3.0 User Manual for Computing Ship Motions in the Time and Frequency Domains

    Science.gov (United States)

    2012-01-01

    allow a ship to be modelled manoeuvring freely in calm water or in waves. SM3DBuildSeaway builds seaway models with a track…manoeuvring freely in calm water or in a modelled seaway. Several applications of the ShipMo3D software give predictions of the motions of…files have been removed from this document; however, full sample output files are available for the software. DRDC Atlantic TM 2011-308

  8. Monocular Visual Deprivation Suppresses Excitability in Adult Human Visual Cortex

    DEFF Research Database (Denmark)

    Lou, Astrid Rosenstand; Madsen, Kristoffer Hougaard; Paulson, Olaf Bjarne

    2011-01-01

    The adult visual cortex maintains a substantial potential for plasticity in response to a change in visual input. For instance, transcranial magnetic stimulation (TMS) studies have shown that binocular deprivation (BD) increases the cortical excitability for inducing phosphenes with TMS. Here, we employed TMS to trace plastic changes in adult visual cortex before, during, and after 48 h of monocular deprivation (MD) of the right dominant eye. In healthy adult volunteers, MD-induced changes in visual cortex excitability were probed with paired-pulse TMS applied to the left and right occipital cortex… …of visual deprivation has a substantial impact on experience-dependent plasticity of the human visual cortex.

  9. New approach for measuring 3D space by using Advanced SURF Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Youm, Minkyo; Min, Byungil; Suh, Kyungsuk [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, Backgeun [Sungkyunkwan Univ., Suwon (Korea, Republic of)

    2013-05-15

    Nuclear disasters, compared to natural disasters, create more extreme conditions for analysis and evaluation. In this paper, measuring and modelling 3D space from simple pictures was studied for the case of a small sand dune. The suggested method can be used for the acquisition of spatial information by a robot at a disaster area. As a result, these data are helpful for identifying the damaged parts, the degree of damage, and the determination of recovery sequences. In this study we improve a computer vision algorithm for 3-D geospatial information measurement and confirm it by test. First, we obtain a noticeable improvement in the 3-D geospatial information result using the SURF algorithm and photogrammetric surveying. Second, we confirm not only a decrease in algorithm running time but also an increase in matching points through epipolar line filtering. In this study, we extract a 3-D model with an open-source algorithm and delete mismatched points by filtering. However, owing to a characteristic of the SURF algorithm, it cannot find matching points if the structure does not have strong features. Further study is therefore needed on finding feature points in structures without strong features.
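The epipolar line filtering of putative SURF matches can be sketched with the algebraic epipolar constraint x2ᵀ F x1 ≈ 0. The fundamental matrix, point sets, and tolerance below are illustrative values, not data from the study:

```python
import numpy as np

def epipolar_filter(F, pts1, pts2, tol=1.0):
    """Keep putative matches whose algebraic epipolar error |x2^T F x1|
    is below tol (pixel coordinates, homogeneous form)."""
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    err = np.abs(np.sum(x2 * (x1 @ F.T), axis=1))
    return err < tol

# Fundamental matrix of an ideal rectified stereo pair (horizontal
# baseline): the constraint reduces to "matches share the same row".
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
pts1 = np.array([[100.0, 50.0], [200.0, 80.0]])
pts2 = np.array([[90.0, 50.0], [180.0, 120.0]])
keep = epipolar_filter(F, pts1, pts2)   # first match kept, second rejected
```

Rejecting matches that violate the epipolar constraint is what removes the mismatched points before 3-D model extraction, and also shrinks the candidate set the matcher must search.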

  10. Vision Assessment and Prescription of Low Vision Devices

    OpenAIRE

    Keeffe, Jill

    2004-01-01

    Assessment of vision and prescription of low vision devices are part of a comprehensive low vision service. Other components of the service include training the person affected by low vision in use of vision and other senses, mobility, activities of daily living, and support for education, employment or leisure activities. Specialist vision rehabilitation agencies have services to provide access to information (libraries) and activity centres for groups of people with impaired vision.

  11. [Comparison of the Pressure on the Larynx and Tongue Using McGRATH® MAC Video Laryngoscope--Direct Vision versus Indirect Vision].

    Science.gov (United States)

    Tanaka, Yasutomo; Miyazaki, Yukiko; Kitakata, Hidenori; Shibuya, Hiromi; Okada, Toshiki

    2015-12-01

    Studies show that McGRATH® MAC (McG) is useful during direct laryngoscopy. However, no study has examined whether McG reduces pressure on the upper airway tract. We compared direct vision with indirect vision concerning pressure on the larynx and tongue. Twenty-two anesthesiologists and 16 junior residents attempted direct laryngoscopy of an airway management simulator using McG with direct vision and indirect vision. Pressure was measured using pressure measurement film. In the anesthesiologists group, pressure on the larynx was 14.8 ± 2.7 kgf · cm(-2) with direct vision and 12.7 ± 2.7 kgf · cm(-2) with indirect vision (P vision and 7.6 ± 2.8 kgf · cm(-2) with indirect vision (P = 0.18). In the junior residents group, pressure on the larynx was 19.0 ± 1.3 kgf · cm(-2) with direct vision and 14.1 ± 3.1 kgf · cm(-2) with indirect vision (P vision and 11.2 ± 4.7 kgf · cm(-2) with indirect vision (P vision can reduce pressure on the upper airway tract.

  12. Progress in computer vision.

    Science.gov (United States)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate the advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.

  13. Prism therapy and visual rehabilitation in homonymous visual field loss.

    LENUS (Irish Health Repository)

    O'Neill, Evelyn C

    2011-02-01

    Homonymous visual field defects (HVFD) are common and frequently occur after cerebrovascular accidents. They significantly impair visual function and cause disability particularly with regard to visual exploration. The purpose of this study was to assess a novel interventional treatment of monocular prism therapy on visual functioning in patients with HVFD of varied etiology using vision targeted, health-related quality of life (QOL) questionnaires. Our secondary aim was to confirm monocular and binocular visual field expansion pre- and posttreatment.

  14. Charles Miller Fisher: the 65th anniversary of the publication of his groundbreaking study "Transient Monocular Blindness Associated with Hemiplegia".

    Science.gov (United States)

    Araújo, Tiago Fernando Souza de; Lange, Marcos; Zétola, Viviane H; Massaro, Ayrton; Teive, Hélio A G

    2017-10-01

    Charles Miller Fisher is considered the father of modern vascular neurology and one of the giants of neurology in the 20th century. This historical review emphasizes Prof. Fisher's magnificent contribution to vascular neurology and celebrates the 65th anniversary of the publication of his groundbreaking study, "Transient Monocular Blindness Associated with Hemiplegia."

  15. The role of vision processing in prosthetic vision.

    Science.gov (United States)

    Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette

    2012-01-01

    Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both due to residual damage to the visual system from the condition that caused vision loss, and due to limitations of current technology. However, even with limitations, prosthetic vision may still be able to support functional performance which is sufficient for tasks which are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that information which is critical to the performance of key tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.

  16. Gesture Recognition by Computer Vision : An Integral Approach

    NARCIS (Netherlands)

    Lichtenauer, J.F.

    2009-01-01

    The fundamental objective of this Ph.D. thesis is to gain more insight into what is involved in the practical application of a computer vision system, when the conditions of use cannot be controlled completely. The basic assumption is that research on isolated aspects of computer vision often leads

  17. d-Vision: Seeking Excellence through a Hands on Engineering Multi Discipline Global Internship Program

    Science.gov (United States)

    Suss, Gavin

    2010-01-01

    The question is, "What can vision do?" (Fritz, 1989) rather than "What is vision?" Keter's Chairman, Mr. Sami Sagol's vision is to establish an internship program that will strengthen the competitive edge of the Israeli industry, within the international arena. The program will set new standards of excellence for product…

  18. Refractive lens exchange with a multifocal diffractive aspheric intraocular lens

    Directory of Open Access Journals (Sweden)

    Teresa Ferrer-Blasco

    2012-06-01

    Full Text Available PURPOSE: To evaluate the safety, efficacy and predictability after refractive lens exchange with multifocal diffractive aspheric intraocular lens implantation. METHODS: Sixty eyes of 30 patients underwent bilateral implantation with AcrySof® ReSTOR® SN6AD3 intraocular lens with +4.00 D near addition. Patients were divided into myopic and hyperopic groups. Monocular best corrected visual acuity at distance and near and monocular uncorrected visual acuity at distance and near were measured before and 6 months postoperatively. RESULTS: After surgery, uncorrected visual acuity was 0.08 ± 0.15 and 0.11 ± 0.14 logMAR for the myopic and hyperopic groups, respectively (50% and 46.67% of patients had an uncorrected visual acuity of 20/20 or better in the myopic and hyperopic groups, respectively). The safety and efficacy indexes were 1.05 and 0.88 for the myopic and 1.01 and 0.86 for the hyperopic groups at distance vision. Within the myopic group, 20 eyes remained unchanged after the surgery, and 3 gained >2 lines of best corrected visual acuity. For the hyperopic group, 2 eyes lost 2 lines of best corrected visual acuity, 21 did not change, and 3 eyes gained 2 lines. At near vision, the safety and efficacy indexes were 1.23 and 1.17 for the myopic and 1.16 and 1.13 for the hyperopic groups. Best corrected near visual acuity improved after surgery in both groups (from 0.10 logMAR to 0.01 logMAR in the myopic group, and from 0.10 logMAR to 0.04 logMAR in the hyperopic group). CONCLUSIONS: The ReSTOR® SN6AD3 intraocular lens in refractive lens exchange demonstrated good safety, efficacy, and predictability in correcting high ametropia and presbyopia.
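The logMAR values quoted throughout these records relate to Snellen fractions by a simple logarithm of the minimum angle of resolution. A minimal conversion sketch (the generic definition, not code from any of the studies):

```python
import math

def snellen_to_logmar(numerator, denominator):
    """logMAR = log10(MAR) = log10(denominator / numerator) for a Snellen
    fraction: 20/20 gives 0.00, 20/40 gives about 0.30, and negative
    values indicate better-than-standard acuity."""
    return math.log10(denominator / numerator)

v2020 = snellen_to_logmar(20, 20)   # standard acuity -> 0.0
v2040 = snellen_to_logmar(20, 40)   # two lines worse -> ~0.301
```

Each 0.1 step of logMAR corresponds to one line on a standard ETDRS chart, which is why outcomes above are reported as lines gained or lost.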

  19. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    Science.gov (United States)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing was evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  20. Nonconformity problem in 3D Grid decomposition

    Czech Academy of Sciences Publication Activity Database

    Kolcun, Alexej

    2002-01-01

    Roč. 10, č. 1 (2002), s. 249-253 ISSN 1213-6972. [International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2002/10./. Plzeň, 04.02.2002-08.02.2002] R&D Projects: GA ČR GA105/99/1229; GA ČR GA105/01/1242 Institutional research plan: CEZ:AV0Z3086906 Keywords : structured mesh * decomposition * nonconformity Subject RIV: BA - General Mathematics