WorldWideScience

Sample records for monocular 3d vision

  1. Building a 3D scanner system based on monocular vision.

    Science.gov (United States)

    Zhang, Zhiyi; Yuan, Lin

    2012-04-10

    This paper proposes a three-dimensional scanner system built with an ingenious geometric construction method based on monocular vision. The system is simple, low cost and easy to use, and the measurement results are very precise. To build it, one web camera, one handheld linear laser and one background calibration board are required. The experimental results show that the system is robust and effective, and that the scanning precision is sufficient for ordinary users.
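
    The core geometric step of such a scanner, intersecting the camera ray through a laser-lit pixel with the calibrated laser plane, can be sketched as follows; the intrinsic matrix and plane parameters are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal lengths and principal point in pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def pixel_to_3d(u, v, plane_n, plane_d):
    """Back-project pixel (u, v) and intersect the ray with the plane n.X = d."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in camera frame
    t = plane_d / (plane_n @ ray)                   # scale putting the point on the plane
    return t * ray

# Assumed laser plane, calibrated beforehand via the background board: n.X = d.
n = np.array([0.0, 0.0, 1.0])
d = 0.5  # metres
point = pixel_to_3d(320, 240, n, d)
print(point)  # the centre pixel maps to a point ~0.5 m straight ahead
```

    Sweeping the handheld laser over the object and repeating this intersection for every lit pixel yields the scan.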

  2. 3D Reconstruction from a Single Still Image Based on Monocular Vision of an Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available We propose a framework combining machine learning with dynamic optimization to reconstruct a 3D scene automatically from a single still image of an unstructured outdoor environment, based on the monocular vision of an uncalibrated camera. After a first image segmentation, a searching-tree strategy based on Bayes' rule identifies the occlusion hierarchy of all regions. After a second, superpixel segmentation, the AdaBoost algorithm integrates the detection of depth cues from lighting, texture and material. Finally, all of these factors are optimized under constrained conditions to obtain the full depth map of the image. The source image is then integrated with its depth map, in point-cloud or bilinear-interpolation style, to realize the 3D reconstruction. Experiments comparing against typical methods on the associated database demonstrate that our method improves, to a certain extent, the reasonability of the estimated overall 3D architecture of the image's scene. It requires no manual assistance and no camera model information.
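
    The final fusion step, combining the source image with its depth map into a point cloud, amounts to back-projecting every pixel through a pinhole model. A minimal sketch under assumed intrinsics (not values from the paper):

```python
import numpy as np

def depthmap_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into 3-D points (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# Toy 4x4 depth map of a flat wall 2 m away, with assumed intrinsics.
depth = np.full((4, 4), 2.0)
cloud = depthmap_to_pointcloud(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
print(cloud[2, 2])  # the pixel at the principal point back-projects to [0, 0, 2]
```

    Attaching each pixel's colour to its back-projected point gives the textured reconstruction the abstract describes.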

  3. Real-time drogue recognition and 3D locating for UAV autonomous aerial refueling based on monocular machine vision

    Institute of Scientific and Technical Information of China (English)

    Wang Xufeng; Kong Xingwei; Zhi Jianhui; Chen Yong; Dong Xinmin

    2015-01-01

    Drogue recognition and 3D locating is a key problem during the docking phase of autonomous aerial refueling (AAR). To solve this problem, a novel and effective method based on monocular vision is presented in this paper. Firstly, by employing computer vision with a red-ring-shape feature, a drogue detection and recognition algorithm is proposed to guarantee safety and to ensure robustness to drogue diversity and to changes in environmental conditions, without using a set of infrared light-emitting diodes (LEDs) on the parachute part of the drogue. Secondly, considering camera lens distortion, a monocular vision measurement algorithm for drogue 3D locating is designed to ensure the accuracy and real-time performance of the system, with the drogue attitude provided. Finally, experiments are conducted to demonstrate the effectiveness of the proposed method. Experimental results show the performance of the entire system in contrast with other methods, validating that the proposed method can recognize and locate the drogue three-dimensionally, rapidly and precisely.
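
    For intuition, the range of a circular drogue of known physical diameter follows from the pinhole model alone; the focal length and diameters below are illustrative assumptions, and the paper's actual algorithm additionally models lens distortion and recovers attitude:

```python
# Pinhole relation: an object of physical diameter D metres that appears
# d pixels wide under focal length f pixels lies at range Z = f * D / d.
def drogue_range(focal_px, diameter_m, apparent_px):
    return focal_px * diameter_m / apparent_px

print(round(drogue_range(800.0, 0.61, 40.0), 2))  # -> 12.2 (metres)
```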

  4. Toward 3D Reconstruction of Outdoor Scenes Using an MMW Radar and a Monocular Vision Sensor

    Directory of Open Access Journals (Sweden)

    Ghina El Natour

    2015-10-01

    Full Text Available In this paper, we introduce a geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera. We rely on the complementarity of these two sensors considering the robustness to the environmental conditions and depth detection ability of the radar, on the one hand, and the high spatial resolution of a vision sensor, on the other. Firstly, geometric modeling of each sensor and of the entire system is presented. Secondly, we address the global calibration problem, which consists of finding the exact transformation between the sensors’ coordinate systems. Two implementation methods are proposed and compared, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Unlike existing methods, no special configuration of the 3D points is required for calibration. This makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple, yet robust 3D reconstruction method based on the sensors’ geometry. This method enables one to reconstruct observed features in 3D using one acquisition (static sensor, which is not always met in the state of the art for outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data.
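
    The global calibration step, finding the rigid transformation between the radar and camera frames from target correspondences, can be illustrated with a simplified closed-form (Kabsch) variant; the paper instead optimizes a non-linear criterion, and all values here are synthetic assumptions:

```python
import numpy as np

def align_frames(P, Q):
    """Kabsch/Procrustes: rigid transform (R, t) such that Q = R @ P + t."""
    cp, cq = P.mean(1, keepdims=True), Q.mean(1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = U @ S @ Vt
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(0)
P = rng.random((3, 6))                          # six targets in the radar frame
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
t_true = np.array([[0.2], [0.1], [1.5]])
Q = R_true @ P + t_true                         # the same targets in the camera frame
R, t = align_frames(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

    With noisy radar-to-image correspondences, this closed-form estimate would typically serve as the initial guess for the non-linear refinement the paper describes.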

  5. Monocular 3D display system for presenting correct depth

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-10-01

    The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  6. Monocular 3D display unit using soft actuator for parallax image shift

    Science.gov (United States)

    Sakamoto, Kunio; Kodama, Yuuki

    2010-11-01

    The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth. This vision unit needs image-shift optics to generate monocular parallax images, but the conventional image-shift mechanism is heavy because of its linear actuator system. To solve this problem, we developed a light-weight 3D vision unit for presenting monocular stereoscopic images using a soft linear actuator made of a polypyrrole film.

  7. Light-weight monocular display unit for 3D display using polypyrrole film actuator

    Science.gov (United States)

    Sakamoto, Kunio; Ohmori, Koji

    2010-10-01

    The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth. This vision unit needs image-shift optics to generate monocular parallax images, but the conventional image-shift mechanism is heavy because of its linear actuator system. To solve this problem, we developed a light-weight 3D vision unit for presenting monocular stereoscopic images using a polypyrrole linear actuator.

  8. Monocular 3D scene reconstruction at absolute scale

    Science.gov (United States)

    Wöhler, Christian; d'Angelo, Pablo; Krüger, Lars; Kuhl, Annika; Groß, Horst-Michael

    In this article we propose a method for combining geometric and real-aperture methods for monocular three-dimensional (3D) reconstruction of static scenes at absolute scale. Our algorithm relies on a sequence of images of the object acquired by a monocular camera of fixed focal setting from different viewpoints. Object features are tracked over a range of distances from the camera with a small depth of field, leading to a varying degree of defocus for each feature. Information on absolute depth is obtained based on a Depth-from-Defocus approach. The parameters of the point spread functions estimated by Depth-from-Defocus are used as a regularisation term for Structure-from-Motion. The reprojection error obtained from bundle adjustment and the absolute depth error obtained from Depth-from-Defocus are simultaneously minimised for all tracked object features. The proposed method yields absolutely scaled 3D coordinates of the scene points without any prior knowledge about scene structure and camera motion. We describe the implementation of the proposed method both as an offline and as an online algorithm. Evaluating the algorithm on real-world data, we demonstrate that it yields typical relative scale errors of a few per cent. We examine the influence of random effects, i.e. the noise of the pixel grey values, and systematic effects, caused by thermal expansion of the optical system or by inclusion of strongly blurred images, on the accuracy of the 3D reconstruction result. Possible applications of our approach are in the field of industrial quality inspection; in particular, it is preferable to stereo cameras in industrial vision systems with space limitations or where strong vibrations occur.
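
    The simultaneous minimisation of the reprojection error and the Depth-from-Defocus depth error can be illustrated on a single point's depth; the quadratic costs and weight below are toy assumptions standing in for the paper's full bundle-adjustment objective:

```python
import numpy as np

# Toy combined objective for one tracked feature: a geometric (reprojection)
# depth cue plus an absolute depth cue from depth-from-defocus, weighted by lam.
def combined_cost(z, reproj_depth, dfd_depth, lam=0.1):
    return (z - reproj_depth) ** 2 + lam * (z - dfd_depth) ** 2

# Minimise over depth by brute-force grid search; for this quadratic the
# optimum is the weighted average (reproj + lam*dfd) / (1 + lam).
zs = np.linspace(0.5, 3.0, 2501)
costs = combined_cost(zs, reproj_depth=2.0, dfd_depth=1.5)
z_best = zs[np.argmin(costs)]
print(round(z_best, 3))  # close to (2.0 + 0.1 * 1.5) / 1.1
```

    The absolute DfD term is what pins the otherwise scale-ambiguous structure-from-motion solution to metric scale.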

  9. 3D vision system assessment

    Science.gov (United States)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  10. Monocular trajectory intersection method for 3D motion measurement of a point target

    Institute of Scientific and Technical Information of China (English)

    YU QiFeng; SHANG Yang; ZHOU Jian; ZHANG XiaoHu; LI LiChun

    2009-01-01

    This article proposes a monocular trajectory intersection method, a videometrics measurement technique with a mature theoretical system, to solve for the 3D motion parameters of a point target. It determines the target's motion parameters, including its 3D trajectory and velocity, by intersecting the parametric trajectory of a moving target with the series of sight-rays along which a moving camera observes it, in contrast with the regular intersection method for 3D measurement, in which the sight-rays intersect at one point. The method offers an approach that overcomes the inability of traditional monocular measurements to recover the 3D motion of a point target, and thus extends the application fields of photogrammetry and computer vision. Wide application is expected in passive observation of moving targets on various mobile platforms.
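
    The trajectory-intersection idea can be sketched on a synthetic linear trajectory: each sight-ray from the moving camera constrains the parametric trajectory X(t) = a + b t, and stacking the constraints gives a linear least-squares problem. All values are synthetic assumptions:

```python
import numpy as np

# A point moves linearly, X(t) = a + b*t; a moving camera observes it along
# sight-rays (centre c, unit direction d) at known times t. Requiring X(t)
# to lie on each ray, i.e. (I - d d^T)(a + b*t - c) = 0, is linear in (a, b).
a_true, b_true = np.array([0.0, 0.0, 5.0]), np.array([1.0, 0.5, 0.0])
rng = np.random.default_rng(1)
rows, rhs = [], []
for t in np.linspace(0, 1, 8):
    c = rng.random(3) * 0.1                 # camera centre at time t
    x = a_true + b_true * t                 # true target position (for synthesis)
    d = (x - c) / np.linalg.norm(x - c)     # observed unit sight-ray
    P = np.eye(3) - np.outer(d, d)          # projects the residual off the ray
    rows.append(np.hstack([P, t * P]))      # unknowns stacked as [a; b]
    rhs.append(P @ c)
A, y = np.vstack(rows), np.hstack(rhs)
sol = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.round(sol[:3], 6), np.round(sol[3:], 6))  # recovers a and b
```

    Note that no two rays need intersect at a single point, which is exactly what distinguishes this from the regular intersection method.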

  12. Deformable Surface 3D Reconstruction from Monocular Images

    CERN Document Server

    Salzmann, Matthieu

    2010-01-01

    Being able to recover the shape of 3D deformable surfaces from a single video stream would make it possible to field reconstruction systems that run on widely available hardware without requiring specialized devices. However, because many different 3D shapes can have virtually the same projection, such monocular shape recovery is inherently ambiguous. In this survey, we review the two main classes of techniques that have proved most effective so far: the template-based methods that rely on establishing correspondences with a reference image in which the shape is already known, and the non-rigid structure-from-motion methods that do not require such a reference.

  13. Outdoor autonomous navigation using monocular vision

    OpenAIRE

    Royer, Eric; Bom, Jonathan; Dhome, Michel; Thuilot, Benoît; Lhuillier, Maxime; Marmoiton, Francois

    2005-01-01

    In this paper, a complete system for outdoor robot navigation is presented. It uses only monocular vision. The robot is first guided along a path by a human. During this learning step, the robot records a video sequence. From this sequence, a three-dimensional map of the trajectory and the environment is built. Once this map has been computed, the robot is able to follow the same trajectory by itself. Experimental results carried out with an urban electric vehicle are shown.

  14. Monocular Vision SLAM for Indoor Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Koray Çelik

    2013-01-01

    Full Text Available This paper presents a novel indoor navigation and ranging strategy using a monocular camera. By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM. The navigation strategy assumes an indoor or indoor-like man-made environment whose layout is previously unknown and GPS-denied, and which is representable via energy-based feature points and straight architectural lines. We experimentally validate the proposed algorithms on a fully self-contained micro aerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities. Building and enabling such a small aerial vehicle to fly in tight corridors is a significant technological challenge, especially in the absence of GPS signals and with limited sensing options. Experimental results show that the system is limited only by the capabilities of the camera and environmental entropy.

  15. Dynamic object recognition and tracking of mobile robot by monocular vision

    Science.gov (United States)

    Liu, Lei; Wang, Yongji

    2007-11-01

    Monocular vision is widely used in mobile robot motion control for its simple structure and ease of use. The major topic of this paper is an integrated approach for recognizing and tracking specified color targets dynamically and precisely with monocular vision, based on the imaging principle. The processing pipeline strictly follows the mechanisms of visual processing, including pretreatment and recognition stages. In particular, color models are utilized to decrease the influence of illumination. Applied algorithms suited to the practical application are used for image segmentation and clustering. After the target is recognized, since a monocular camera cannot obtain depth information directly, a 3D reconstruction principle is used to calculate the distance and direction from robot to target. To correct the monocular camera reading, a laser is used after the vision measurement. Finally, a visual servo system is designed to realize the robot's dynamic tracking of the moving target.
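
    The use of a colour model to reduce illumination effects can be illustrated with the HSV representation: scaling the brightness of an RGB colour leaves its hue unchanged, so a hue-based threshold is more stable than one on raw RGB. (HSV is one common choice; the paper does not specify its exact colour model.)

```python
import colorsys

# The same red target under bright and dim illumination: RGB values change
# a lot, but the hue channel is invariant to the brightness scaling.
bright = (0.9, 0.2, 0.1)                 # well-lit red target
dim = tuple(0.4 * c for c in bright)     # the same target, poorly lit
h1, s1, v1 = colorsys.rgb_to_hsv(*bright)
h2, s2, v2 = colorsys.rgb_to_hsv(*dim)
print(round(h1, 4) == round(h2, 4), round(v1, 2), round(v2, 2))  # hue matches
```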

  16. 3D environment capture from monocular video and inertial data

    Science.gov (United States)

    Clark, R. Robert; Lin, Michael H.; Taylor, Colin J.

    2006-02-01

    This paper presents experimental methods and results for 3D environment reconstruction from monocular video augmented with inertial data. One application targets sparsely furnished room interiors, using high quality handheld video with a normal field of view, and linear accelerations and angular velocities from an attached inertial measurement unit. A second application targets natural terrain with manmade structures, using heavily compressed aerial video with a narrow field of view, and position and orientation data from the aircraft navigation system. In both applications, the translational and rotational offsets between the camera and inertial reference frames are initially unknown, and only a small fraction of the scene is visible in any one video frame. We start by estimating sparse structure and motion from 2D feature tracks using a Kalman filter and/or repeated, partial bundle adjustments requiring bounded time per video frame. The first application additionally incorporates a weak assumption of bounding perpendicular planes to minimize a tendency of the motion estimation to drift, while the second application requires tight integration of the navigational data to alleviate the poor conditioning caused by the narrow field of view. This is followed by dense structure recovery via graph-cut-based multi-view stereo, meshing, and optional mesh simplification. Finally, input images are texture-mapped onto the 3D surface for rendering. We show sample results from multiple, novel viewpoints.

  17. Deep monocular 3D reconstruction for assisted navigation in bronchoscopy.

    Science.gov (United States)

    Visentini-Scarzanella, Marco; Sugiura, Takamasa; Kaneko, Toshimitsu; Koto, Shinichiro

    2017-07-01

    In bronchoscopy, computer vision systems for navigation assistance are an attractive low-cost solution to guide the endoscopist to target peripheral lesions for biopsy and histological analysis. We propose a decoupled deep learning architecture that projects input frames onto the domain of CT renderings, thus allowing offline training from patient-specific CT data. A fully convolutional network architecture is implemented on GPU and tested on a phantom dataset involving 32 video sequences and ~60k frames with aligned ground truth and renderings, which is made available as the first public dataset for bronchoscopy navigation. An average estimated depth accuracy of 1.5 mm was obtained, outperforming conventional direct depth estimation from input frames by 60%, with a computational time of ~30 ms on modern GPUs. Qualitatively, the estimated depth and renderings closely resemble the ground truth. The proposed method shows a novel architecture to perform real-time monocular depth estimation without losing patient specificity in bronchoscopy. Future work will include integration within SLAM systems and collection of in vivo datasets.

  18. Monocular accommodation condition in 3D display types through geometrical optics

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong-Wook; Park, Min-Chul; Son, Jung-Young

    2007-09-01

    Eye fatigue or strain in a 3D display environment is a significant problem for 3D display commercialization. 3D display systems such as eyeglasses-type stereoscopic, auto-stereoscopic multiview, Super Multi-View (SMV) and Multi-Focus (MF) displays are considered for a detailed geometrical-optics calculation of the satisfaction level of monocular accommodation. A lens with fixed focal length is used for experimental verification of the numerical calculation of the monocular defocus effect caused by accommodation at three different depths. The simulation and experiment results consistently show a relatively high satisfaction level of monocular accommodation under the MF display condition. Additionally, the possibility of monocular depth perception, i.e. a 3D effect on a monocular MF display, is discussed.

  19. Development of monocular and binocular multi-focus 3D display systems using LEDs

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong-Wook; Son, Jung-Young; Kwon, Yong-Moo

    2008-04-01

    Multi-focus 3D display systems are developed and the possibility of satisfying eye accommodation is tested. Multi-focus refers to the ability to present the monocular depth cue at various depth levels. By achieving the multi-focus function, we developed 3D display systems for one eye and for both eyes, which can satisfy accommodation on displayed virtual objects within a defined depth range. The monocular accommodation and the binocular-convergence 3D effect of the system are tested, and proof of accommodation satisfaction together with experimental results on binocular 3D fusion are given using the proposed 3D display systems.

  20. Monocular display unit for 3D display with correct depth perception

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    A study of virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging displays come in two presentation types: systems using special glasses and monitor systems requiring none. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a displaying area the same size as the image screen on the panel. A display requiring no special glasses is useful as a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images. The conventional display can thus show only one screen and cannot enlarge it, for example to twice the size. To enlarge the display area, the authors have developed an enlarging method using a mirror. Our extension method enables observers to view the virtual image plane and enlarges the screen area twofold. In the developed display unit, we made use of an image-separating technique using polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and doubles the screen area; meanwhile, the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  1. Depth measurement using monocular stereo vision system: aspect of spatial discretization

    Science.gov (United States)

    Xu, Zheng; Li, Chengjin; Zhao, Xunjie; Chen, Jiabo

    2010-11-01

    The monocular stereo vision system, consisting of a single camera with controllable focal length, can be used in 3D reconstruction. Applying the system to 3D reconstruction, one must consider effects caused by the digital camera. There are two possible configurations of the monocular stereo vision system. In the first, the distance between the target object and the camera image plane is constant and the lens moves. The second assumes that the lens position is constant and the image plane moves with respect to the target. In this paper, mathematical modeling of both approaches is presented. We focus on iso-disparity surfaces to define the discretization effect on the reconstructed space. These models are implemented and simulated in MATLAB. The analysis is used to define application constraints and limitations of the methods. The results can also be used to enhance the accuracy of depth measurement.
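
    The discretization effect central to this record can be illustrated with the classical disparity-to-depth relation: integer pixel disparities produce a discrete, non-uniformly spaced family of iso-disparity depth levels. The focal length and baseline below are illustrative assumptions:

```python
# With focal length f (pixels) and baseline b (metres), depth Z = f*b/d is
# only reconstructible at the discrete levels given by integer disparity d,
# and the spacing between neighbouring levels grows rapidly with depth.
f, b = 800.0, 0.05
depths = [f * b / d for d in range(1, 6)]
gaps = [depths[i] - depths[i + 1] for i in range(4)]
print([round(z, 2) for z in depths])  # iso-disparity depth levels
print([round(g, 2) for g in gaps])    # strongly non-uniform spacing
```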

  2. The Influence of Monocular Spatial Cues on Vergence Eye Movements in Monocular and Binocular Viewing of 3-D and 2-D Stimuli.

    Science.gov (United States)

    Batvinionak, Anton A; Gracheva, Maria A; Bolshakov, Andrey S; Rozhkova, Galina I

    2015-01-01

    The influence of monocular spatial cues on vergence eye movements was studied in two series of experiments: (I) the subjects viewed a 3-D video and also its 2-D version, binocularly and monocularly; and (II) in binocular and monocular viewing conditions, the subjects were presented with stationary 2-D stimuli that either contained or did not contain monocular indications of spatial arrangement. The results of series (I) showed that, in binocular viewing conditions, vergence eye movements were only present for the 3-D video, not the 2-D one, while during monocular viewing of the 2-D video some regular vergence eye movements could be revealed, suggesting that the occluded eye's position could be influenced by the spatial organization of the scene reconstructed on the basis of the monocular depth information provided by the viewing eye. The data obtained in series (II) in general seem to support this hypothesis. © The Author(s) 2015.

  3. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    Science.gov (United States)

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on previously trained base poses. We show that strong periodicity assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion, we propose a novel regularization term based on temporal bone-length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods, our algorithm shows a significant improvement.
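
    The bone-length-constancy regularizer can be sketched as a penalty on the temporal variance of each bone's length; the joint layout below is a toy assumption:

```python
import numpy as np

def bone_length_penalty(poses, bones):
    """poses: (T, J, 3) joint positions over T frames; bones: (i, j) index pairs.
    Returns the summed temporal variance of bone lengths (zero iff constant)."""
    lengths = np.stack([np.linalg.norm(poses[:, i] - poses[:, j], axis=1)
                        for i, j in bones], axis=1)   # shape (T, n_bones)
    return np.var(lengths, axis=0).sum()

T = 5
rigid = np.zeros((T, 2, 3))
rigid[:, 1, 0] = 1.0                        # one bone, constant length 1
wobbly = rigid.copy()
wobbly[:, 1, 0] = np.linspace(0.9, 1.1, T)  # bone length drifts over time
print(bone_length_penalty(rigid, [(0, 1)]),
      bone_length_penalty(wobbly, [(0, 1)]) > 0)
```

    Added to the reprojection objective, this term favours reconstructions whose skeleton does not stretch between frames without fixing its lengths a priori.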

  4. Stochastically optimized monocular vision-based navigation and guidance

    Science.gov (United States)

    Watanabe, Yoko

    The objective of this thesis is to design a relative navigation and guidance law for unmanned aerial vehicles (UAVs) for vision-based control applications. The autonomous operation of UAVs has progressively developed in recent years. In particular, vision-based navigation, guidance and control has been one of the most actively researched topics for the automation of UAVs. This is because, in nature, birds and insects use vision as the exclusive sensor for object detection and navigation. Furthermore, a vision sensor is efficient since it is compact, light-weight and low cost. Therefore, this thesis studies the monocular vision-based navigation and guidance of UAVs. Since 2-D vision-based measurements are nonlinear with respect to the 3-D relative states, an extended Kalman filter (EKF) is applied in the navigation system design. The EKF-based navigation system is integrated with a real-time image processing algorithm and is tested in simulations and flight tests. The first closed-loop vision-based formation flight between two UAVs has been achieved, and the results are shown in this thesis to verify the estimation performance of the EKF. In addition, vision-based 3-D terrain recovery was performed in simulations to present a navigation design that has the capability of estimating the states of multiple objects. In this problem, the statistical z-test is applied to solve the correspondence problem of relating measurements to estimation states. As a practical example of vision-based control applications for UAVs, a vision-based obstacle avoidance problem is specifically addressed in this thesis. A navigation and guidance system is designed for a UAV to achieve a mission of waypoint tracking while avoiding unforeseen stationary obstacles using vision information. An EKF is applied to estimate each obstacle's position from the vision-based information. A collision criterion is established using a collision-cone approach and a time-to-go criterion.
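
    The collision-cone and time-to-go criteria can be sketched as a closest-approach test on the estimated relative state; the safety radius and states below are illustrative assumptions:

```python
import numpy as np

def collision_predicted(r, v, safe_radius):
    """Predict a collision from relative position r and relative velocity v:
    true when time-to-go is positive and the closest-approach miss distance
    falls below the safety radius (i.e. v lies inside the collision cone)."""
    t_go = -np.dot(r, v) / np.dot(v, v)      # time of closest approach
    if t_go <= 0:
        return False                          # obstacle is receding
    miss = np.linalg.norm(r + t_go * v)       # miss distance at closest approach
    return bool(miss < safe_radius)

r = np.array([100.0, 5.0, 0.0])   # obstacle 100 m ahead, 5 m off-axis
v = np.array([-20.0, 0.0, 0.0])   # closing at 20 m/s
print(collision_predicted(r, v, safe_radius=10.0))  # True: 5 m miss < 10 m radius
```

    When the test fires, a guidance law would steer the velocity vector just outside the collision cone while continuing waypoint tracking.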

  5. A smart telerobotic system driven by monocular vision

    Science.gov (United States)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  6. Mobile Robot Hierarchical Simultaneous Localization and Mapping Using Monocular Vision

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A hierarchical mobile robot simultaneous localization and mapping (SLAM) method that allows accurate maps to be obtained is presented. The local map level is composed of a set of local metric feature maps that are guaranteed to be statistically independent. The global level is a topological graph whose arcs are labeled with the relative locations between local maps. An estimate of these relative locations is maintained with a local map alignment algorithm, and a more accurate estimate is calculated through a global minimization procedure using the loop closure constraint. The local map is built with a Rao-Blackwellised particle filter (RBPF), where the particle filter is used to extend the path posterior by sampling new poses. Landmark position estimation and updating are implemented through an extended Kalman filter (EKF). Monocular vision mounted on the robot tracks the 3D natural point landmarks, which are structured with matched scale-invariant feature transform (SIFT) feature pairs. Matching of the multi-dimensional SIFT features is implemented with a KD-tree at a time cost of O(log N). Experimental results on a Pioneer mobile robot in a real indoor environment show the superior performance of the proposed method.
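
    The nearest-neighbour criterion that the KD-tree accelerates (to roughly O(log N) per query) can be sketched with brute force for clarity; the random descriptors below are stand-ins for 128-dimensional SIFT features:

```python
import numpy as np

# Match an observed descriptor to the closest stored landmark descriptor.
# This brute-force version is O(N) per query; a KD-tree over the same
# vectors answers the identical query in roughly O(log N).
rng = np.random.default_rng(2)
map_desc = rng.random((1000, 128))   # stored landmark descriptors
query = map_desc[42] + 0.001         # a slightly perturbed re-observation
dists = np.linalg.norm(map_desc - query, axis=1)
print(int(np.argmin(dists)))         # recovers the matching landmark index
```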

  7. Vision models for 3D surfaces

    Science.gov (United States)

    Mitra, Sunanda

    1992-11-01

    Different approaches to computational stereo representing human stereo vision have been developed over the past two decades. The Marr-Poggio theory is probably the most widely accepted model of human stereo vision. However, recently developed motion stereo models, which use a sequence of images taken by either a moving camera or of a moving object, provide an alternative method of achieving multi-resolution matching without the use of Laplacian of Gaussian (LoG) operators. While using image sequences, the baseline between the two camera positions of an image pair is changed for each subsequent pair so as to achieve a different resolution for each image pair. Having different baselines also avoids the occlusion problem inherent in stereo vision models. The advantage of using multi-resolution images acquired by cameras positioned at different baselines over those produced by LoG operators is that one does not encounter the spurious edges often created by zero-crossings in the LoG-filtered images. Therefore, in designing a computer vision system, a motion stereo model is more appropriate than a stereo vision model. However, in some applications where only a stereo pair of images is available, recovery of the 3D surfaces of natural scenes is possible in a computationally efficient manner by using cepstrum matching and regularization techniques. Section 2 of this paper describes a motion stereo model using multi-scale cepstrum matching for the detection of disparity between image pairs in a sequence and the subsequent recovery of 3D surfaces from the depth map obtained by a non-convergent triangulation technique. Section 3 presents a 3D surface recovery technique from a stereo pair using cepstrum matching for disparity detection and cubic B-splines for surface smoothing. Section 4 contains the results of 3D surface recovery using both of the techniques mentioned above. Section 5 discusses the merits of 2D cepstrum matching and cubic B-splines.
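
    Cepstrum matching for disparity detection rests on the fact that a signal superposed with a shifted copy produces a cepstral peak at the shift. A 1-D sketch on synthetic data (the echo amplitude and sizes are illustrative assumptions):

```python
import numpy as np

# Build a signal plus a half-amplitude copy shifted by 30 samples; the power
# cepstrum, ifft(log(power spectrum)), then peaks at quefrency 30.
rng = np.random.default_rng(3)
x = rng.standard_normal(1024)
shift = 30
s = x.copy()
s[shift:] += 0.5 * x[:-shift]                  # superpose the shifted "echo"
spec = np.abs(np.fft.fft(s)) ** 2
ceps = np.fft.ifft(np.log(spec + 1e-12)).real  # power cepstrum
peak = np.argmax(ceps[1:len(ceps) // 2]) + 1   # skip the zero-quefrency bin
print(peak)  # -> 30, the injected shift
```

    In the stereo setting, the two signals are corresponding image patches and the recovered shift is the disparity.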

  8. Disseminated neurocysticercosis presenting as isolated acute monocular painless vision loss

    Directory of Open Access Journals (Sweden)

    Gaurav M Kasundra

    2014-01-01

    Neurocysticercosis, the most common parasitic infection of the nervous system, is known to affect the brain, eyes, muscular tissues, and subcutaneous tissues. However, it is very rare for patients with ocular cysts to have concomitant cerebral cysts, and the dominant clinical manifestation of cerebral cysts is either seizures or headache. We report a patient who presented with acute monocular painless vision loss due to intraocular submacular cysticercosis and who, on investigation, had multiple cerebral parenchymal cysticercal cysts but had never had any seizures. Although vision loss after initiation of antiparasitic treatment has been reported previously, acute monocular vision loss as the presenting feature of ocular cysticercosis is rare. We present a brief review of the literature along with this case report.

  9. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    Science.gov (United States)

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.

  10. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

    Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when its parameters are known (i.e., principal distance, lens distortion, focal length, etc.). In this paper we deal with a single-camera calibration method, with the help of which we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment Matlab.
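
    The role of the intrinsic and extrinsic parameters recovered by calibration can be illustrated with a minimal pinhole projection sketch; the parameter values below are hypothetical, not those estimated in the paper.

```python
import numpy as np

# Hypothetical intrinsic parameters: focal lengths and principal point (pixels).
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Hypothetical extrinsic parameters: world-to-camera rotation and translation.
R = np.eye(3)
t = np.zeros(3)

def project(X):
    """Project a 3-D world point X to pixel coordinates via the pinhole model."""
    Xc = R @ X + t          # world -> camera frame
    x = K @ Xc              # camera frame -> homogeneous pixel coordinates
    return x[:2] / x[2]     # perspective division

print(project(np.array([0.1, -0.05, 2.0])))  # -> [360. 220.]
```

    Calibration is exactly the inverse task: estimating K, R, and t from known 3D-to-2D correspondences.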

  11. Monocular Vision: Occupational Limitations and Current Standards

    Science.gov (United States)

    2011-03-01

    Kumagai, J. K., Williams, S., and Kline, D. (2005), Vision standards for aircrew: Visual acuity for pilots (DRDC-TORONTO-CR-2005-142), Greenley ... Canadian Forces aircrew (DRDC-TORONTO-CR-2006-255), Greenley and Associates Inc., Ottawa. Lövsund, P., Hedin, A., and Törnros, J. (1991), Effects ... Williams, S., Casson, E., Brooks, J., Greenley, M., and Nadeau, J. (2003), Visual acuity standard for divers, Greenley & Associates Incorporated.

  12. A Monocular Vision Based Approach to Flocking

    Science.gov (United States)

    2006-03-01

    The bird represented by the green triangle desires to move away from its neighbors to avoid overcrowding. The bird reacts the most strongly to the ... brightness gradients [35], neural networks [18, 19], and other vision-based methods [6, 26, 33]. For the purposes of this thesis effort, it is assumed that ... Once started, however, maneuver waves spread through the flock at a mean speed of less than 15 milliseconds [43]. 2.5.3 Too Perfect. In nature, a bird

  13. Stereo improves 3D shape discrimination even when rich monocular shape cues are available.

    Science.gov (United States)

    Lee, Young Lim; Saunders, Jeffrey A

    2011-08-17

    We measured the ability to discriminate 3D shapes across changes in viewpoint and illumination based on rich monocular 3D information and tested whether the addition of stereo information improves shape constancy. Stimuli were images of smoothly curved, random 3D objects. Objects were presented in three viewing conditions that provided different 3D information: shading-only, stereo-only, and combined shading and stereo. Observers performed shape discrimination judgments for sequentially presented objects that differed in orientation by rotation of 0°-60° in depth. We found that rotation in depth markedly impaired discrimination performance in all viewing conditions, as evidenced by reduced sensitivity (d') and increased bias toward judging same shapes as different. We also observed a consistent benefit from stereo, both in conditions with and without change in viewpoint. Results were similar for objects with purely Lambertian reflectance and shiny objects with a large specular component. Our results demonstrate that shape perception for random 3D objects is highly viewpoint-dependent and that stereo improves shape discrimination even when rich monocular shape cues are available.

  14. Aerial vehicles collision avoidance using monocular vision

    Science.gov (United States)

    Balashov, Oleg; Muraviev, Vadim; Strotov, Valery

    2016-10-01

    In this paper, an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, regions of interest (ROI) selection, contour segmentation, object matching, and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle's position, the system of equations relating object coordinates in space to the observed image is solved. The solution gives the current position and speed of the detected object in space. Using this information, distance and time to collision can be estimated. Experimental research on real video sequences and modeled data was performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.
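
    The final step, turning an estimated distance and closing speed into a time to collision, can be sketched under a simplified constant-closing-speed assumption; the numbers below are illustrative only.

```python
def time_to_collision(range_m, closing_speed_mps):
    """Seconds until collision under a constant closing-speed assumption;
    a non-positive closing speed means the object is not approaching."""
    if closing_speed_mps <= 0:
        return float("inf")
    return range_m / closing_speed_mps

# an aircraft 1200 m away closing at 80 m/s leaves 15 s to react
print(time_to_collision(1200.0, 80.0))  # -> 15.0
```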

  15. Automatic gear sorting system based on monocular vision

    Directory of Open Access Journals (Sweden)

    Wenqi Wu

    2015-11-01

    An automatic gear sorting system based on monocular vision is proposed in this paper. A CCD camera fixed on the top of the sorting system is used to obtain images of the gears on the conveyor belt. The gears' features, including the number of holes, number of teeth, and color, are extracted and used to categorize the gears. Photoelectric sensors are used to locate the gears' positions and produce the trigger signals for the pneumatic cylinders. Automatic gear sorting is achieved by using pneumatic actuators to push different gears into their corresponding storage boxes. The experimental results verify the validity and reliability of the proposed method and system.
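
    The categorization step can be sketched as a lookup from the extracted features to a storage box; the feature tuples and box indices below are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GearFeatures:
    holes: int
    teeth: int
    color: str

# Hypothetical mapping from extracted features to a storage-box index.
SORT_TABLE = {
    GearFeatures(4, 20, "silver"): 0,
    GearFeatures(6, 32, "black"): 1,
    GearFeatures(3, 18, "brass"): 2,
}

def storage_box(features: GearFeatures) -> int:
    """Return the destination box; unrecognized gears go to a reject bin (-1)."""
    return SORT_TABLE.get(features, -1)

print(storage_box(GearFeatures(6, 32, "black")))  # -> 1
print(storage_box(GearFeatures(5, 5, "pink")))    # -> -1
```

    In the described system the box index would then select which pneumatic cylinder fires when the photoelectric sensor is triggered.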

  16. Estimating 3D positions and velocities of projectiles from monocular views.

    Science.gov (United States)

    Ribnick, Evan; Atev, Stefan; Papanikolopoulos, Nikolaos P

    2009-05-01

    In this paper, we consider the problem of localizing a projectile in 3D based on its apparent motion in a stationary monocular view. A thorough theoretical analysis is developed, from which we establish the minimum conditions for the existence of a unique solution. The theoretical results obtained have important implications for applications involving projectile motion. A robust, nonlinear optimization-based formulation is proposed, and the use of a local optimization method is justified by detailed examination of the local convexity structure of the cost function. The potential of this approach is validated by experimental results.
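
    A simplified linear variant of this localization problem can be sketched as follows: with a known gravity vector, the constraint that the projectile must lie along each observed bearing ray is linear in the unknown initial position and velocity, and gravity breaks the usual monocular scale ambiguity. This is an illustrative noise-free formulation, not the authors' nonlinear optimization.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])   # known gravity vector

def skew(u):
    """Matrix S such that S @ v == np.cross(u, v)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def fit_projectile(times, dirs):
    """Recover initial position p0 and velocity v0 of a projectile seen from a
    camera at the origin, given bearing directions dirs[i] at times[i].
    Uses u_i x (p0 + v0*t_i + 0.5*G*t_i^2) = 0, which is linear in (p0, v0)."""
    A, b = [], []
    for t, u in zip(times, dirs):
        S = skew(np.asarray(u, float) / np.linalg.norm(u))
        A.append(np.hstack([S, t * S]))
        b.append(-S @ (0.5 * G * t * t))
    x, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    return x[:3], x[3:]

# synthetic check: simulate a throw, observe only bearings, recover parameters
p0 = np.array([2.0, 10.0, 1.5])
v0 = np.array([1.0, -3.0, 4.0])
ts = np.linspace(0.0, 1.0, 8)
dirs = [p0 + v0 * t + 0.5 * G * t * t for t in ts]   # rays toward the projectile
p0_est, v0_est = fit_projectile(ts, dirs)
print(np.round(p0_est, 3), np.round(v0_est, 3))
```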

  17. Monocular 3D Reconstruction and Augmentation of Elastic Surfaces with Self-Occlusion Handling.

    Science.gov (United States)

    Haouchine, Nazim; Dequidt, Jeremie; Berger, Marie-Odile; Cotin, Stephane

    2015-12-01

    This paper focuses on 3D shape recovery and augmented reality on elastic objects with self-occlusion handling, using only single-view images. Shape recovery from a monocular video sequence is an underconstrained problem, and many approaches have been proposed to enforce constraints and resolve the ambiguities. State-of-the-art solutions enforce smoothness or geometric constraints, consider specific deformation properties such as inextensibility, or resort to shading constraints. However, few of them can properly handle large elastic deformations. We propose in this paper a real-time method that uses a mechanical model and is able to handle highly elastic objects. The problem is formulated as an energy minimization problem accounting for a non-linear elastic model constrained by external image points acquired from a monocular camera. This formulation avoids restrictive assumptions and specific constraint terms in the minimization. In addition, we propose to handle self-occluded regions thanks to the ability of mechanical models to provide appropriate predictions of the shape. Our method is compared to existing techniques in experiments conducted on computer-generated and real data, which show the effectiveness of recovering and augmenting 3D elastic objects. Additionally, experiments in the context of minimally invasive liver surgery are provided, and results on deformations in the presence of self-occlusions are presented.

  18. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  19. 3D Vision in a Virtual Reality Robotics Environment

    OpenAIRE

    Schütz, Christian L.; Natonek, Emerico; Baur, Charles; Hügli, Heinz

    2009-01-01

    Virtual reality robotics (VRR) needs sensing feedback from the real environment. To show how advanced 3D vision provides new perspectives to fulfill these needs, this paper presents an architecture and system that integrates hybrid 3D vision and VRR and reports about experiments and results. The first section discusses the advantages of virtual reality in robotics, the potential of a 3D vision system in VRR and the contribution of a knowledge database, robust control and the combination of in...

  20. Development of a monocular vision system for robotic drilling

    Institute of Scientific and Technical Information of China (English)

    Wei-dong ZHU; Biao MEI; Guo-rui YAN; Ying-lin KE

    2014-01-01

    Robotic drilling for aerospace structures demands a high positioning accuracy of the robot, which is usually achieved through error measurement and compensation. In this paper, we report the development of a practical monocular vision system for measurement of the relative error between the drill tool center point (TCP) and the reference hole. First, the principle of relative error measurement with the vision system is explained, followed by a detailed discussion of the hardware components, software components, and system integration. An elliptical contour extraction algorithm is presented for accurate and robust reference hole detection. System calibration is of key importance to the measurement accuracy of a vision system; a new method is proposed for the simultaneous calibration of the camera's internal parameters and the hand-eye relationship with a dedicated calibration board. Extensive measurement experiments have been performed on a robotic drilling system. Experimental results show that the measurement accuracy of the developed vision system is better than 0.15 mm, which meets the requirements of robotic drilling for aircraft structures.

  1. Novel approach for mobile robot localization using monocular vision

    Science.gov (United States)

    Zhong, Zhiguang; Yi, Jianqiang; Zhao, Dongbin; Hong, Yiping

    2003-09-01

    This paper presents a novel approach for mobile robot localization using monocular vision. The proposed approach locates a robot relative to the target toward which it moves. Two points are selected from the target as feature points. Once the image coordinates of the two feature points are detected, the position and motion direction of the robot can be determined from the detected coordinates. Unlike previously reported geometric pose estimation or landmark matching methods, this approach requires neither artificial landmarks nor an accurate map of the indoor environment; it needs less computation and greatly simplifies the localization problem. The validity and flexibility of the proposed approach are demonstrated by experiments performed on real images. The results show that this new approach is not only simple and flexible but also achieves high localization precision.
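
    A common monocular ranging sketch in this spirit recovers the distance of a ground-plane point from its pixel row, given a known camera height and tilt. This is an illustrative flat-floor model with hypothetical parameters, not necessarily the geometry used in the paper.

```python
import numpy as np

# Hypothetical camera setup: height above the floor (m), downward tilt (rad),
# vertical focal length and principal-point row (pixels).
h, tilt = 0.5, np.deg2rad(15.0)
fy, cy = 700.0, 240.0

def ground_distance(v):
    """Horizontal distance to a floor point imaged at pixel row v, assuming a
    flat floor; rows below the principal point (v > cy) look further down."""
    angle = tilt + np.arctan((v - cy) / fy)   # ray angle below the horizontal
    if angle <= 0:
        raise ValueError("ray does not intersect the floor")
    return h / np.tan(angle)

print(round(ground_distance(380.0), 3))
```

    Applying this to two feature points of known separation also yields the robot's heading relative to the target, which is the quantity the abstract's method estimates.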

  2. Stereoscopic 3D-scene synthesis from a monocular camera with an electrically tunable lens

    Science.gov (United States)

    Alonso, Julia R.

    2016-09-01

    3D-scene acquisition and representation is important in many areas, ranging from medical imaging to visual entertainment applications. In this regard, optical image acquisition combined with post-capture processing algorithms enables the synthesis of images with novel viewpoints of a scene. This work presents a new method to reconstruct a pair of stereoscopic images of a 3D scene from a multi-focus image stack. A conventional monocular camera combined with an electrically tunable lens (ETL) is used for image acquisition. The captured visual information is reorganized using a piecewise-planar image formation model with a depth-variant point spread function (PSF), along with the known focusing distances at which the images of the stack were acquired. Considering a depth-variant PSF allows the method to be applied to strongly defocused multi-focus image stacks. Finally, post-capture perspective shifts, presenting to each eye the corresponding viewpoint according to the disparity, are generated by simulating the displacement of a synthetic pinhole camera. The procedure is performed without estimating a depth map or segmenting the in-focus regions. Experimental results for both real and synthetic images are provided and presented as anaglyphs, but the method could easily be implemented on 3D displays based on parallax barriers or polarized light.

  3. How the Venetian Blind Percept Emerges from the Laminar Cortical Dynamics of 3D Vision

    Directory of Open Access Journals (Sweden)

    Stephen eGrossberg

    2014-08-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and to show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model shows how identified neurons that interact in hierarchically organized laminar circuits of the visual cortex can simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in surface representations are integrated into surface-shroud resonances between visual and parietal cortex. The model describes how monocular and binocular oriented filtering interacts with later stages of 3D boundary formation and surface filling-in in the lateral geniculate nucleus (LGN) and cortical areas V1, V2, and V4. It proposes how interactions between layers 4, 3B, and 2/3 in V1 and V2 contribute to stereopsis, and how binocular and monocular information combine to form 3D boundary and surface representations. The model suggests how surface-to-boundary feedback from V2 thin stripes to pale stripes enables computationally complementary boundary and surface formation properties to generate a single consistent percept, eliminate redundant 3D boundaries, and trigger figure-ground perception. The model also shows how false binocular boundary matches may be eliminated by Gestalt grouping properties. In particular, a disparity filter, which helps to solve the correspondence problem by eliminating false matches, is predicted to be realized as part of the boundary grouping process in layer 2/3 of cortical area V2. The model has been used to simulate the consciously seen 3D surface percepts in 18 psychophysical experiments. These percepts include the Venetian blind effect, Panum's limiting case, contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, and stereopsis with polarity-reversed stereograms.

  4. The New Realm of 3-D Vision

    Science.gov (United States)

    2002-01-01

    Dimension Technologies Inc. (DTI) developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  5. MONOCULAR AND BINOCULAR VISION IN THE PERFORMANCE OF A COMPLEX SKILL

    Directory of Open Access Journals (Sweden)

    Thomas Heinen

    2011-09-01

    The goal of this study was to investigate the role of binocular and monocular vision in 16 gymnasts as they performed a handspring on vault. In particular, we reasoned that if binocular visual information is eliminated while experts and apprentices perform a handspring on vault, and their performance level changes or is maintained, then such information must or must not be necessary for their best performance. If the elimination of binocular vision leads to differences in gaze behavior in either experts or apprentices, this would indicate whether adaptive gaze behavior is a function of expertise level. Gaze behavior was measured using a portable, wireless eye-tracking system in combination with a movement-analysis system. Results revealed that gaze behavior differed between experts and apprentices in the binocular and monocular conditions. In particular, apprentices showed fewer fixations of longer duration in the monocular condition compared with experts and with the binocular condition. Apprentices showed longer blink durations than experts in both the monocular and binocular conditions. Eliminating binocular vision led to a shorter repulsion phase and a longer second flight phase in apprentices. Experts exhibited no differences in phase durations between binocular and monocular conditions. The findings suggest that experts may not rely on binocular vision when performing handsprings, and that movement performance may be influenced in apprentices when binocular vision is eliminated. We conclude that knowledge about gaze-movement relationships may be beneficial for coaches when teaching the handspring on vault in gymnastics.

  6. Automatic Plant Annotation Using 3D Computer Vision

    DEFF Research Database (Denmark)

    Nielsen, Michael

    In this thesis, 3D reconstruction was investigated for application in precision agriculture, where previous work focused on low-resolution index maps in which each pixel represents an area in the field and the index represents an overall crop status in that area. 3D reconstructions of plants would allow for more detailed descriptions of the state of the crops, analogous to the way humans evaluate crop health, i.e., by looking at the canopy structure and checking for discolorations at specific locations on the plants. Previous research in camera-based 3D reconstruction methods has focused on rigid ... in active shape modeling of weeds for weed detection. Occlusion and overlapping leaves were the main problems for this kind of work. Using 3D computer vision, it was possible to separate overlapping crop leaves from weed leaves using the 3D information from the disparity maps. The results of the 3D...

  7. Research progress of depth detection in vision measurement: a novel project of bifocal imaging system for 3D measurement

    Science.gov (United States)

    Li, Anhu; Ding, Ye; Wang, Wei; Zhu, Yongjian; Li, Zhizhong

    2013-09-01

    The paper reviews recent research progress in vision measurement, and the general depth detection methods used in monocular stereo vision are compared with each other. Building on this review, a novel bifocal imaging measurement system based on the zoom method is proposed to solve the problem of online 3D measurement. The system consists of a primary lens and a secondary lens with different focal lengths, matched to meet large-range and high-resolution imaging requirements without time delay or imaging errors, which is of significance for industrial applications.

  8. Monocular Vision-Based Robot Localization and Target Tracking

    Directory of Open Access Journals (Sweden)

    Bing-Fei Wu

    2011-01-01

    This paper presents a vision-based technique for localizing targets in a 3D environment. It is achieved by the combination of different types of sensors, including optical wheel encoders, an electronic compass, and visual observations with a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate the target locations and the robot pose simultaneously. The proposed localization system is applicable in practice because it does not require an initialization procedure in which the system is started from artificial landmarks of known size. The technique is especially suitable for navigation and target tracking for an indoor robot and has high potential for extension to surveillance and monitoring for Unmanned Aerial Vehicles with aerial odometry sensors. The experimental results demonstrate centimeter-level accuracy of target localization in an indoor environment under high-speed robot movement.
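
    The extended Kalman filter fusion described above can be illustrated with a minimal planar sketch: odometry predicts the robot position, and a range-bearing observation of a landmark at a known position corrects it. The landmark location, noise covariances, and additive motion model below are assumptions for illustration, not the paper's formulation (which also estimates the targets themselves).

```python
import numpy as np

def ekf_step(mu, P, u, z, landmark, Q, R):
    """One predict/update cycle of an extended Kalman filter that localizes a
    planar robot from odometry u = (dx, dy) and a range-bearing observation z
    of a landmark at a known position."""
    # predict with a simple additive odometry model
    mu = mu + u
    P = P + Q
    # expected measurement h(mu) = (range, bearing) to the landmark
    dx, dy = landmark - mu
    r = np.hypot(dx, dy)
    z_hat = np.array([r, np.arctan2(dy, dx)])
    # Jacobian of h with respect to the robot position (x, y)
    H = np.array([[-dx / r,    -dy / r],
                  [dy / r**2,  -dx / r**2]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return mu + K @ y, (np.eye(2) - K @ H) @ P

# demo: an exact observation pulls a poor prior toward the true pose (1, 1)
landmark = np.array([5.0, 4.0])
z = np.array([5.0, np.arctan2(3.0, 4.0)])   # range-bearing measured from (1, 1)
mu, P = ekf_step(np.array([0.8, 1.3]), 0.5 * np.eye(2), np.zeros(2),
                 z, landmark, 1e-4 * np.eye(2), 1e-4 * np.eye(2))
print(np.round(mu, 2))
```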

  9. 3D vision assisted flexible robotic assembly of machine components

    Science.gov (United States)

    Ogun, Philips S.; Usman, Zahid; Dharmaraj, Karthick; Jackson, Michael R.

    2015-12-01

    Robotic assembly systems either make use of expensive fixtures to hold components in predefined locations, or the poses of the components are determined using various machine vision techniques. Vision-guided assembly robots can handle subtle variations in geometries and poses of parts. Therefore, they provide greater flexibility than the use of fixtures. However, the currently established vision-guided assembly systems use 2D vision, which is limited to three degrees of freedom. The work reported in this paper is focused on flexible automated assembly of clearance fit machine components using 3D vision. The recognition and the estimation of the poses of the components are achieved by matching their CAD models with the acquired point cloud data of the scene. Experimental results obtained from a robot demonstrating the assembly of a set of rings on a shaft show that the developed system is not only reliable and accurate, but also fast enough for industrial deployment.

  10. ACCURACY OF A 3D VISION SYSTEM FOR INSPECTION

    DEFF Research Database (Denmark)

    Carmignato, Simone; Savio, Enrico; De Chiffre, Leonardo

    2003-01-01

    This paper illustrates an experimental method to assess the accuracy of a three-dimensional (3D) vision system for the inspection of complex geometry. The aim is to provide a procedure to evaluate task-related measurement uncertainty for virtually any measurement task. The key element

  11. Glass Vision 3D: Digital Discovery for the Deaf

    Science.gov (United States)

    Parton, Becky Sue

    2017-01-01

    Glass Vision 3D was a grant-funded project focused on developing and researching a Google Glass app that would allow young Deaf children to look at the QR code of an object in the classroom and see an augmented reality projection that displays a related American Sign Language (ASL) video. Twenty-five objects and videos were prepared and tested…

  12. Binocular and Monocular Depth Cues in Online Feedback Control of 3-D Pointing Movement

    Science.gov (United States)

    Hu, Bo; Knill, David C.

    2012-01-01

    Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and thus were available in an observer's retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size, and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and the lack of response in the monocular conditions. PMID:21724567

  13. Binocular and monocular depth cues in online feedback control of 3D pointing movement.

    Science.gov (United States)

    Hu, Bo; Knill, David C

    2011-06-30

    Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and, thus, were available in an observer's retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size, and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and lack of response in monocular conditions.

  14. Enhanced operator perception through 3D vision and haptic feedback

    Science.gov (United States)

    Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren

    2012-06-01

    Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprising a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprising a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri, it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.

  15. How the venetian blind percept emerges from the laminar cortical dynamics of 3D vision.

    Science.gov (United States)

    Cao, Yongqiang; Grossberg, Stephen

    2014-01-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and to show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model proposes how lateral geniculate nucleus (LGN) and hierarchically organized laminar circuits in cortical areas V1, V2, and V4 interact to control processes of 3D boundary formation and surface filling-in that simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in surface representations are integrated into surface-shroud resonances between visual and parietal cortex. Interactions between layers 4, 3B, and 2/3 in V1 and V2 carry out stereopsis and 3D boundary formation. Both binocular and monocular information combine to form 3D boundary and surface representations. Surface contour surface-to-boundary feedback from V2 thin stripes to V2 pale stripes combines computationally complementary boundary and surface formation properties, leading to a single consistent percept, while also eliminating redundant 3D boundaries and triggering figure-ground perception. False binocular boundary matches are eliminated by Gestalt grouping properties during boundary formation. In particular, a disparity filter, which helps to solve the Correspondence Problem by eliminating false matches, is predicted to be realized as part of the boundary grouping process in layer 2/3 of cortical area V2. The model has been used to simulate the consciously seen 3D surface percepts in 18 psychophysical experiments. These percepts include the Venetian blind effect, Panum's limiting case, contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, stereopsis with polarity-reversed stereograms, da Vinci stereopsis, and perceptual closure. These model mechanisms have also simulated properties of 3D neon

  16. 3D vision upgrade kit for TALON robot

    Science.gov (United States)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-04-01

In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat-panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and the upgrade components coupled directly to the existing mounting and electrical connections. The upgrade kit comprises a replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  17. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

In a milking robot, correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit: it does not allow optimal positioning of the milking cups, and in the presence of occlusions the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit online segmentation of the teats by combining 2D and 3D visual information, from which the 3D teat positions are computed. This information is then sent to the milking robot for teat cup positioning. The vision system runs in real time and maintains optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system, with the best performance obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  18. Monocular vision based navigation method of mobile robot

    Institute of Scientific and Technical Information of China (English)

    DONG Ji-wen; YANG Sen; LU Shou-yin

    2009-01-01

A trajectory-tracking method is presented for the visual navigation of a monocular mobile robot. The robot moves along a line trajectory drawn beforehand and stops on a recognized stop-sign to carry out a special task. The robot uses a forward-looking color digital camera to capture the scene in front of it, and uses the HSI color model to segment out the trajectory and the stop-sign. A "sampling estimate" method is then used to calculate the navigation parameters. The stop-sign is easily recognized, and the scheme can distinguish 256 different signs. Tests indicate that the method tolerates large variations in brightness and offers good robustness and real-time performance.
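The HSI segmentation step can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `hue_range` and `min_sat` thresholds are invented values for a green guide line, and a real system would calibrate them to the tape colour actually used.

```python
import colorsys

def rgb_to_hsi(r, g, b):
    """Convert 8-bit RGB to (hue in degrees, saturation, intensity).

    HSI hue coincides with HSV hue, so colorsys supplies it; intensity
    is the channel mean and saturation follows the HSI definition.
    """
    rf, gf, bf = r / 255.0, g / 255.0, b / 255.0
    i = (rf + gf + bf) / 3.0
    h = colorsys.rgb_to_hsv(rf, gf, bf)[0] * 360.0
    s = 0.0 if i == 0.0 else 1.0 - min(rf, gf, bf) / i
    return h, s, i

def is_trajectory_pixel(rgb, hue_range=(100.0, 140.0), min_sat=0.3):
    """Classify a pixel as part of the drawn guide line by hue and saturation."""
    h, s, _ = rgb_to_hsi(*rgb)
    return hue_range[0] <= h <= hue_range[1] and s >= min_sat
```

Segmenting in HSI rather than RGB is what gives such methods their tolerance to brightness changes: hue and saturation are largely invariant to illumination intensity.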

  19. A 3D Human Skeletonization Algorithm for a Single Monocular Camera Based on Spatial–Temporal Discrete Shadow Integration

    Directory of Open Access Journals (Sweden)

    Jie Hou

    2017-07-01

Full Text Available Three-dimensional (3D) human skeleton extraction is a powerful tool for activity capture and analysis, spawning a variety of applications in somatosensory control, virtual reality, and many other prospering fields. However, 3D human skeletonization relies heavily on RGB-Depth (RGB-D) cameras, expensive wearable sensors, and specific lighting conditions, greatly limiting its outdoor applications. This paper presents a novel 3D human skeleton extraction method designed for monocular cameras in large-scale outdoor scenarios. The proposed algorithm aggregates spatial-temporal discrete joint positions extracted from the human shadow on the ground. First, the projected silhouette information is recovered from the human shadow on the ground for each frame, followed by the extraction of two-dimensional (2D) projected joint positions. The extracted 2D joint positions are then categorized into different sets according to activity silhouette categories. Finally, spatial-temporal integration of same-category 2D joint positions is carried out to generate 3D human skeletons. The proposed method proves accurate and efficient in outdoor human skeletonization, based on several comparisons with the traditional RGB-D method. Finally, the application of the proposed method to RGB-D skeletonization enhancement is discussed.

  20. A Two-Stage Bayesian Network Method for 3D Human Pose Estimation from Monocular Image Sequences

    Directory of Open Access Journals (Sweden)

    Wang Yuan-Kai

    2010-01-01

Full Text Available Abstract This paper proposes a novel human motion capture method that locates human body joint positions and reconstructs the human pose in 3D space from monocular images. We propose a two-stage framework comprising 2D and 3D probabilistic graphical models, which solves the occlusion problem in the estimation of human joint positions. The 2D and 3D models adopt a directed acyclic structure to avoid error propagation during inference. Image observations corresponding to shape and appearance features of humans are taken as evidence for the inference of 2D joint positions in the 2D model. Both the 2D and 3D models use the Expectation-Maximization algorithm to learn the prior distributions of the models. An annealed Gibbs sampling method is proposed for the two-stage framework to infer the maximum a posteriori distributions of joint positions. The annealing process efficiently explores the modes of the distributions and finds solutions in high-dimensional space. Experiments are conducted on the HumanEva dataset with image sequences of walking motion, which pose the challenges of occlusion and loss of image observations. Experimental results show that the proposed two-stage approach can efficiently estimate more accurate human poses.
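The annealing idea can be illustrated with a toy example. The sketch below is not the paper's annealed Gibbs sampler over the pose graphical model; it is a minimal 1-D annealed Metropolis sampler showing how a cooling temperature schedule lets a chain first explore broadly and then concentrate on the dominant mode of a posterior (the `log_post`, `temps`, and tuning values are all invented).

```python
import math
import random

def annealed_metropolis(log_post, x0, temps, steps=200, prop_std=0.5, seed=0):
    """Sample from exp(log_post(x) / t) while t follows a cooling schedule."""
    rng = random.Random(seed)
    x = x0
    for t in temps:
        for _ in range(steps):
            cand = x + rng.gauss(0.0, prop_std)
            # Metropolis acceptance test for the tempered target
            if math.log(rng.random() + 1e-300) < (log_post(cand) - log_post(x)) / t:
                x = cand
    return x
```

With a Gaussian log-posterior centred at 3, a chain started at 0 ends near the mode once the temperature is low, which is the behaviour the annealed sampler exploits in the high-dimensional pose space.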

  1. Electrotactile vision substitution for 3D trajectory following

    CERN Document Server

    Chekhchoukh, Abdessalem; Vuillerme, Nicolas; Payan, Yohan; Glade, Nicolas

    2013-01-01

Navigation for blind persons is a challenge for researchers in vision substitution, and guidance is one of the techniques used to navigate. In this study, we develop a new approach for 3D trajectory following in which the requested task is to track a light path using computer input devices (keyboard and mouse) or a rigid body held in front of a stereoscopic camera. The light path is visualized either by direct vision or by way of an electro-stimulation device, the Tongue Display Unit, a 12x12 matrix of electrodes. We evaluate our method in a series of experiments that assess the effects of the perception modality and of the input device. Preliminary results indicate a close correlation between the stimulated and recorded trajectories.

  2. A monocular vision system based on cooperative targets detection for aircraft pose measurement

    Science.gov (United States)

    Wang, Zhenyu; Wang, Yanyun; Cheng, Wei; Chen, Tao; Zhou, Hui

    2017-08-01

In this paper, a monocular vision measurement system based on cooperative target detection is proposed, which captures the three-dimensional information of objects by recognizing a checkerboard target and calculating its feature points. Aircraft pose measurement is an important problem for aircraft monitoring and control, and monocular vision systems perform well at meter-scale ranges. This paper proposes an algorithm based on a coplanar rectangular feature to determine a unique solution for distance and angle. A continuous-frame detection method is presented to solve the problem of corner transitions caused by the symmetry of the targets. In addition, a test system has been built that combines a three-dimensional precision displacement table with human-computer interaction measurement software. Experimental results show a precision of 2 mm in the range of 300 mm to 1000 mm, which meets the requirements of position measurement in the aircraft cabin.
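The core geometry behind meter-range monocular measurement of a cooperative target of known size is the pinhole relation between physical size and image size. The following is a hedged sketch, not the paper's coplanar-rectangle algorithm: the focal length and target dimensions are made-up example values, and the yaw formula assumes weak perspective.

```python
import math

def pinhole_distance(focal_px, real_width_mm, image_width_px):
    """Range to a fronto-parallel planar target: Z = f * W / w."""
    return focal_px * real_width_mm / image_width_px

def yaw_from_foreshortening(apparent_w_px, frontal_w_px):
    """Rotation about a vertical axis shrinks a rectangle's apparent
    width by cos(theta) under weak perspective; invert to get theta."""
    return math.degrees(math.acos(min(1.0, apparent_w_px / frontal_w_px)))
```

For example, a target 100 mm wide imaged at 160 px by an 800 px focal length sits about 500 mm away; a full pose solution would refine such estimates with all four rectangle corners.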

  3. A Novel Ship-Bridge Collision Avoidance System Based on Monocular Computer Vision

    Directory of Open Access Journals (Sweden)

    Yuanzhou Zheng

    2013-06-01

Full Text Available The study investigates ship-bridge collision avoidance. A novel system for ship-bridge collision avoidance based on monocular computer vision is proposed. In the new system, moving ships are first captured in video sequences, and detection and tracking of the moving objects identify the regions of the scene that correspond to them. Second, a quantitative description of the dynamic states of the moving objects in the geographical coordinate system, including location, velocity, and orientation, is calculated based on monocular vision geometry. Finally, the collision risk is evaluated and ship manipulation commands are suggested accordingly, aiming to avoid a potential collision. Both computer simulation and field experiments have been implemented to validate the proposed system, and the analysis results show its effectiveness.

  4. Measuring method for the object pose based on monocular vision technology

    Science.gov (United States)

    Sun, Changku; Zhang, Zimiao; Wang, Peng

    2010-11-01

Position and orientation estimation of an object has important value and can be widely applied in fields such as robot navigation, surgery, and electro-optic aiming systems. A monocular vision positioning algorithm based on point features is studied and a new measurement method is proposed in this paper. First, the approximate coordinates of the five reference points in the camera coordinate system are calculated according to the weak-P3P method and used as the initial values for iteration. Second, the exact coordinates of the reference points in the camera coordinate system are obtained through iterative calculation using the constraint relationships among the reference points. Finally, the position and orientation of the object are obtained, and thus the monocular vision measurement model is constructed. To verify the accuracy of the measurement model, a planar target using infrared LEDs as reference points is designed, the corresponding image-processing algorithm is studied, and a monocular vision experimental system is established. Experimental results show that the translational positioning accuracy reaches ±0.05 mm and the rotary positioning accuracy reaches ±0.2°.

  5. Robust 3D Object Tracking from Monocular Images using Stable Parts.

    Science.gov (United States)

    Crivellaro, Alberto; Rad, Mahdi; Verdie, Yannick; Yi, Kwang Moo; Fua, Pascal; Lepetit, Vincent

    2017-05-26

We present an algorithm for estimating the pose of a rigid object in real time under challenging conditions. Our method effectively handles poorly textured objects in cluttered, changing environments, even when their appearance is corrupted by large occlusions, and it relies on grayscale images to handle metallic environments on which depth cameras would fail. As a result, our method is suitable for practical Augmented Reality applications, including industrial environments. At the core of our approach is a novel representation for the 3D pose of object parts: we predict the 3D pose of each part in the form of the 2D projections of a few control points. The advantages of this representation are three-fold: we can predict the 3D pose of the object even when only one part is visible; when several parts are visible, we can easily combine them to compute a better pose of the object; and the 3D pose we obtain is usually very accurate, even when only a few parts are visible. We show how to use this representation in a robust 3D tracking framework. In addition to extensive comparisons with the state-of-the-art, we demonstrate our method on a practical Augmented Reality application for maintenance assistance in the ATLAS particle detector at CERN.

  6. Monocular zones in stereoscopic scenes: A useful source of information for human binocular vision?

    Science.gov (United States)

    Harris, Julie M.

    2010-02-01

When an object is closer to an observer than the background, the small differences between right and left eye views are interpreted by the human brain as depth. This basic ability of the human visual system, called stereopsis, lies at the core of all binocular three-dimensional (3-D) perception and related technological display development. To achieve stereopsis, it is traditionally assumed that corresponding locations in the right and left eye's views must first be matched, then the relative differences between right and left eye locations are used to calculate depth. But this is not the whole story. At every object-background boundary, there are regions of the background that only one eye can see because, in the other eye's view, the foreground object occludes that region of background. Such monocular zones do not have a corresponding match in the other eye's view and can thus cause problems for depth extraction algorithms. In this paper I discuss evidence, from our knowledge of human visual perception, illustrating that monocular zones do not pose problems for our human visual systems; rather, our visual systems can extract depth from such zones. I review the relevant human perception literature in this area, and show some recent data aimed at quantifying the perception of depth from monocular zones. The paper finishes with a discussion of the potential importance of considering monocular zones for stereo display technology and depth compression algorithms.

  7. 3-D measuring of engine camshaft based on machine vision

    Science.gov (United States)

    Qiu, Jianxin; Tan, Liang; Xu, Xiaodong

    2008-12-01

Non-touch 3D measurement based on machine vision is introduced into precise camshaft measurement. Because CCD-based 3-dimensional measurement currently cannot meet the precision requirements of camshaft measurement, the measuring method needs to be improved, and this paper puts forward such an improvement. A Multi-Character Match method based on the Polygonal Non-regular model is advanced, built on the theory of corner extraction and corner matching. This method solves the matching difficulty and low precision that the Coded Marked Point method and Self-Character Match method can bring on in the measuring process. A 3D measuring experiment on a camshaft, based on the Multi-Character Match method of the Polygonal Non-regular model, proves that the average measuring precision in point-cloud merging is increased to better than 0.04 mm. This measuring method can effectively increase the 3D measuring precision of binocular CCD systems.

  8. Fast vision-based catheter 3D reconstruction

    Science.gov (United States)

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.

    2016-07-01

Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots, based on the views of two arbitrarily positioned cameras, is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. A sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg for the added noise) of the proposed high-speed algorithms.

  9. Mobile Target Tracking Based on Hybrid Open-Loop Monocular Vision Motion Control Strategy

    Directory of Open Access Journals (Sweden)

    Cao Yuan

    2015-01-01

Full Text Available This paper proposes a new real-time target tracking method based on open-loop monocular vision motion control. It uses a particle filter to predict the moving target's position in the image; owing to the properties of the particle filter, the method can effectively handle both linear and nonlinear motion. In addition, the method uses simple mathematical operations to map the target's image information to its real-world coordinates, so it requires few computing resources. Moreover, it adopts a monocular vision approach, i.e., a single camera, achieving its objective with little hardware. First, the method estimates the position and size of the target in the image at the next time step; the real position of the object corresponding to this information is then predicted; finally, the mobile robot is controlled so as to keep the target at the center of the camera's field of view. Tracking tests were conducted on L-shaped and S-shaped trajectories and compared with the Kalman filtering method. The experimental results show that the method achieves a good tracking effect, superior to the Kalman filter in both the L-shaped and S-shaped experiments.
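The predict/update/resample loop that such a tracker relies on can be conveyed with a minimal bootstrap particle filter. This 1-D sketch is illustrative only: it uses a random-walk motion model, a Gaussian likelihood, and invented noise parameters, whereas the paper tracks a 2-D position and size in the image.

```python
import math
import random

def particle_filter_track(observations, n_particles=500, motion_std=1.0,
                          obs_std=1.0, seed=0):
    """Return the filtered position estimate for each 1-D observation."""
    rng = random.Random(seed)
    particles = [observations[0] + rng.gauss(0.0, obs_std)
                 for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # predict: propagate each particle with random-walk motion noise
        particles = [p + rng.gauss(0.0, motion_std) for p in particles]
        # update: weight particles by the Gaussian measurement likelihood
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights) or 1.0
        # resample: draw a new particle set proportional to the weights
        particles = rng.choices(particles,
                                weights=[w / total for w in weights],
                                k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates
```

Because the weighting step works for any likelihood and the prediction step for any motion model, the same loop handles nonlinear motion that a linear Kalman filter cannot, which is the property the abstract highlights.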

  10. Cataract surgery: emotional reactions of patients with monocular versus binocular vision

    Directory of Open Access Journals (Sweden)

    Roberta Ferrari Marback

    2012-12-01

Full Text Available PURPOSE: To analyze emotional reactions related to cataract surgery in two groups of patients (monocular vision - Group 1; binocular vision - Group 2). METHODS: A transversal comparative study was performed using a structured questionnaire from a previous exploratory study before cataract surgery. RESULTS: 206 patients were enrolled in the study, 96 individuals in Group 1 (69.3 ± 10.4 years) and 110 in Group 2 (68.2 ± 10.2 years). More patients in Group 1 (40.6%) than in Group 2 (22.7%) reported fear of surgery (p<0.001). The most important causes of fear were the possibility of blindness, ocular complications, and death during surgery. The most prevalent feelings in both groups were doubts about good results and nervousness. CONCLUSION: Patients with monocular vision reported more fear and doubts related to surgical outcomes. It is therefore necessary that physicians consider such emotional reactions and invest more time than usual explaining the risks and benefits of cataract surgery.

  11. Monocular feature tracker for low-cost stereo vision control of an autonomous guided vehicle (AGV)

    Science.gov (United States)

    Pearson, Chris M.; Probert, Penelope J.

    1994-02-01

We describe a monocular feature tracker (MFT), the first stage of a low-cost stereoscopic vision system for use on an autonomous guided vehicle (AGV) in an indoor environment. The system does not require artificial markings or other beacons, but relies upon accurate knowledge of the AGV motion. Linear array cameras (LAC) are used to reduce the data and processing bandwidths. The limited information given by LAC requires modelling of the expected features. We model an obstacle as a vertical line segment touching the floor, and can distinguish between these obstacles and most other clutter in an image sequence. Detection of these obstacles provides sufficient information for local AGV navigation.

  12. Image-based dynamic monocular 3D control

    Directory of Open Access Journals (Sweden)

    Luis Hernández Santana

    2011-09-01

Full Text Available This paper presents a visual servo control system for position regulation of a robot manipulator with a camera in hand that moves in 3D Cartesian space. The objective is to control the robot so that the image of a moving sphere is kept at the center of the image plane with constant radius. A control strategy with two cascaded loops is proposed: the inner loop solves the joint control, and the outer loop implements the control with visual feedback. The robot and the vision system are modeled for small variations around the operating point for position control. Under these conditions, the stability of the system and the steady-state response for object trajectories are shown. To illustrate the performance of the system, experimental results are presented for an ASEA IRB6 manipulator.

  13. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    Directory of Open Access Journals (Sweden)

    Ki-Yeong Park

    2014-01-01

Full Text Available We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To handle the variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run-time. The proposed method provides robust results even when the road inclination varies continuously on hilly roads or lane markings are not visible on crowded roads. For the experiments, a vision-based forward collision warning system was implemented, and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results in both highway and urban traffic environments.
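The range estimate behind such systems follows from flat-road pinhole geometry: a vehicle's bottom edge lies on the ground plane, so its image row relative to the horizon encodes range. A sketch with invented calibration values:

```python
def range_from_horizon(y_bottom_px, y_horizon_px, focal_px, cam_height_m):
    """Ground-plane range Z = f * H / (y_bottom - y_horizon).

    Because the virtual-horizon row absorbs camera pitch, re-estimating
    it every frame keeps the range robust to pitching and road slope.
    """
    dy = y_bottom_px - y_horizon_px
    if dy <= 0:
        raise ValueError("target bottom must be below the horizon")
    return focal_px * cam_height_m / dy
```

For instance, with an 800 px focal length and a camera 1.2 m above the road, a vehicle whose bottom edge is 160 rows below the horizon is about 6 m away; a fixed (rather than per-frame) horizon row would bias this whenever the car pitches.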

  14. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  15. 3-D Signal Processing in a Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  16. Weight prediction of broiler chickens using 3D computer vision

    DEFF Research Database (Denmark)

    Mortensen, Anders Krogh; Lisouski, Pavel; Ahrendt, Peter

    2016-01-01

a platform weigher which may also include ill birds. In the current study, a fully-automatic 3D camera-based weighing system for broilers has been developed and evaluated in a commercial production environment. Specifically, a low-cost 3D camera (Kinect) that directly returned a depth image was employed...

  17. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

Despite the current availability in resource-rich regions of advanced scanning and 3-D imaging technologies in ophthalmology practice, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences can result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
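Once the cup and disc boundaries are segmented, the screening parameters themselves are simple ratios. A sketch (the circular-approximation remark is a geometric identity, not a claim about the paper's segmentation):

```python
import math

def cdr(cup_diameter, disc_diameter):
    """Cup-to-disc diameter ratio."""
    return cup_diameter / disc_diameter

def car(cup_area, disc_area):
    """Cup-to-disc area ratio."""
    return cup_area / disc_area

# If both boundaries were perfect circles, the two measures would
# satisfy CAR = CDR**2; real segmented boundaries deviate from this.
```

The hard part of the pipeline is thus the registration and boundary segmentation; the parameters that clinicians compare are one division away.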

  18. Detection and Tracking Strategies for Autonomous Aerial Refuelling Tasks Based on Monocular Vision

    Directory of Open Access Journals (Sweden)

    Yingjie Yin

    2014-07-01

Full Text Available Detection and tracking strategies based on monocular vision are proposed for autonomous aerial refuelling tasks. The drogue attached to the fuel tanker aircraft has two important features: the grey values of the drogue's inner part differ from those of the external umbrella ribs in the image, and the shape of the drogue's inner dark part is nearly circular. Based on this crucial prior knowledge, rough and fine positioning algorithms are designed to detect the drogue, and a particle filter based on the drogue's shape is proposed to track it. A strategy to switch between detection and tracking is proposed to improve the robustness of the algorithms. The inner dark part of the drogue is segmented precisely during detection and tracking, and the segmented circular part can be used to measure its spatial position. The experimental results show that the proposed method performs well in real time, with satisfactory robustness and positioning accuracy.
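The "nearly circular inner dark part" prior can be checked with the standard isoperimetric circularity measure on a segmented region. A sketch; the 0.85 threshold is an invented example value, not one taken from the paper:

```python
import math

def circularity(area, perimeter):
    """Isoperimetric ratio 4*pi*A/P**2: exactly 1.0 for a circle,
    strictly lower for elongated or ragged shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def looks_like_drogue(area, perimeter, threshold=0.85):
    """Accept a segmented dark region as a drogue candidate if round enough."""
    return circularity(area, perimeter) >= threshold
```

A square of side 10 (area 100, perimeter 40) scores only pi/4 ≈ 0.785 and is rejected, while a circular region of any radius scores 1.0, so the test separates the round inner part from most background blobs.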

  19. Indoor Mobile Robot Navigation by Central Following Based on Monocular Vision

    Science.gov (United States)

    Saitoh, Takeshi; Tada, Naoya; Konishi, Ryosuke

This paper develops indoor mobile robot navigation by center following based on monocular vision. In our method, two boundary lines between the wall and the baseboard are detected in the frontal image, and then appearance-based obstacle detection is applied. When an obstacle exists, an avoidance or stop movement is executed according to the size and position of the obstacle; when no obstacle exists, the robot moves along the center of the corridor. We developed a wheelchair-based mobile robot, estimated the accuracy of the boundary line detection, and obtained fast processing speed and high detection accuracy. We demonstrate the effectiveness of our mobile robot through stopping experiments with various obstacles and through moving experiments.
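Centre following reduces to a proportional correction of the robot's lateral offset from the corridor midline, derived from the two detected boundary lines. A sketch; the reference row sampling and the gain value are invented for illustration:

```python
def center_offset(x_left_px, x_right_px, image_width_px):
    """Offset of the corridor midline from the image centre, given the
    x-positions of the two wall/baseboard boundary lines sampled at a
    common reference row."""
    return (x_left_px + x_right_px) / 2.0 - image_width_px / 2.0

def steering_command(offset_px, gain=0.005):
    """Proportional steering (rad): turn against the measured offset."""
    return -gain * offset_px
```

When the midline drifts 20 px right of centre, the controller steers slightly left; an obstacle detection result would override this command with an avoidance or stop behaviour.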

  20. Cataract surgery: emotional reactions of patients with monocular versus binocular vision

    Directory of Open Access Journals (Sweden)

    Roberta Ferrari Marback

    2012-12-01

Full Text Available PURPOSE: To analyze emotional reactions related to cataract surgery in two groups of patients (monocular vision - Group 1; binocular vision - Group 2). METHODS: A transversal comparative study was performed using a structured questionnaire answered by patients before cataract surgery. RESULTS: The sample comprised 96 patients in Group 1 (69.3 ± 10.4 years) and 110 in Group 2 (68.2 ± 10.2 years). Fear of surgery was reported by 40.6% of Group 1 and 22.7% of Group 2 (p<0.001); among the main causes of fear, the possibility of vision loss, surgical complications, and death during the procedure were cited. The most common feelings in both groups were doubts about the results of the surgery and nervousness about the procedure. CONCLUSION: Patients with monocular vision showed more fear and doubts related to cataract surgery than those with binocular vision. It is therefore necessary that physicians consider these emotional reactions and invest more time in clarifying the risks and benefits of cataract surgery.

  1. Inverse problems in vision and 3D tomography

    CERN Document Server

    Mohamad-Djafari, Ali

    2013-01-01

    The concept of an inverse problem is a familiar one to most scientists and engineers, particularly in the field of signal and image processing, imaging systems (medical, geophysical, industrial non-destructive testing, etc.) and computer vision. In imaging systems, the aim is not just to estimate unobserved images, but also their geometric characteristics from observed quantities that are linked to these unobserved quantities through the forward problem. This book focuses on imagery and vision problems that can be clearly written in terms of an inverse problem where an estimate for the image a

  2. Enhanced 3D face processing using an active vision system

    DEFF Research Database (Denmark)

    Lidegaard, Morten; Larsen, Rasmus; Kraft, Dirk

    2014-01-01

We present an active face processing system based on 3D shape information extracted by means of stereo information. We use two sets of stereo cameras with different fields of view (FOV): one with a wide FOV is used for face tracking, while the other with a narrow FOV is used for face identification...

  3. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles.

    Science.gov (United States)

    Huang, Kuo-Lung; Chiu, Chung-Cheng; Chiu, Sheng-Yi; Teng, Yao-Jen; Hao, Shu-Sheng

    2015-07-13

    The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft's nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft's nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.

  4. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Kuo-Lung Huang

    2015-07-01

Full Text Available The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft’s nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft’s nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.
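The altitude recovery described in the two records above reduces, for a downward-looking camera, to a simple proportionality: a ground feature flowing across the image at u pixels per second, with known ground speed v and focal length f (in pixels), implies a relative altitude h = f·v/u. A minimal sketch of that relation (function name and numbers are illustrative, not from the paper):

```python
def altitude_from_flow(ground_speed, focal_px, flow_px_per_s):
    """Relative altitude of a downward-looking camera.

    A ground point at height h below a camera moving at ground speed v
    produces optical flow u = f * v / h pixels/s, so h = f * v / u.
    """
    if flow_px_per_s <= 0:
        raise ValueError("flow must be positive")
    return focal_px * ground_speed / flow_px_per_s

# e.g. 20 m/s ground speed, 800 px focal length, 160 px/s flow -> 100 m
```

This ignores camera tilt and terrain slope; the paper's stereo-vision formulation would have to account for both.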

  5. Enhanced 3D face processing using an active vision system

    DEFF Research Database (Denmark)

    Lidegaard, Morten; Larsen, Rasmus; Kraft, Dirk;

    2014-01-01

We present an active face processing system based on 3D shape information extracted by means of stereo information. We use two sets of stereo cameras with different fields of view (FOV): one with a wide FOV is used for face tracking, while the other with a narrow FOV is used for face identification. We argue for two advantages of such a system: first, an extended work range, and second, the possibility to place the narrow FOV camera in a way such that a much better reconstruction quality can be achieved compared to a static camera, even if the face had been fully visible in the periphery of the narrow FOV camera. We substantiate these two observations by qualitative results on face reconstruction and quantitative results on face recognition. As a consequence, such a set-up allows a better and much more flexible system for 3D face reconstruction, e.g. for recognition or emotion...

  6. Vision based error detection for 3D printing processes

    Directory of Open Access Journals (Sweden)

    Baumann Felix

    2016-01-01

Full Text Available 3D printers became more popular in the last decade, partly because of the expiration of key patents and the supply of affordable machines; the technology originated in rapid prototyping. With Additive Manufacturing (AM) it is possible to create physical objects from 3D model data by layer-wise addition of material. Besides professional use for prototyping and low-volume manufacturing, they are becoming widespread amongst end users, starting with the so-called Maker Movement. The most prevalent type of consumer-grade 3D printer is Fused Deposition Modelling (FDM), also called Fused Filament Fabrication (FFF). This work focuses on FDM machinery because of its widespread occurrence and large number of open problems such as precision and failure. These 3D printers can fail to print objects at a statistical rate depending on the manufacturer and model of the printer. Failures can occur due to misalignment of the print-bed or the print-head, slippage of the motors, warping of the printed material, lack of adhesion, or other reasons. The goal of this research is to provide an environment in which these failures can be detected automatically. Direct supervision is inhibited by the recommended placement of FDM printers in separate rooms away from the user due to ventilation issues. The inability to oversee the printing process leads to late or omitted detection of failures. Rejects cause material waste and wasted time, thus lowering the utilization of printing resources. Our approach consists of a camera-based error detection mechanism that provides a web-based interface for remote supervision and early failure detection. Early failure detection can lead to less time spent on broken prints, less material wasted, and in some cases salvaged objects.

  7. 3D Vision on Mars: Stereo processing and visualizations for NASA and ESA rover missions

    Science.gov (United States)

    Huber, Ben

    2016-07-01

Three dimensional (3D) vision processing is an essential component of planetary rover mission planning and scientific data analysis. Standard ground vision processing products are digital terrain maps, panoramas, and virtual views of the environment. Such processing is currently developed for the PanCam instrument of ESA's ExoMars Rover mission by the PanCam 3D Vision Team under JOANNEUM RESEARCH coordination. Camera calibration, quality estimation of the expected results, and the interfaces to other mission elements such as operations planning, the rover navigation system and global Mars mapping are a specific focus of the current work. The main goals of the 3D Vision team in this context are: instrument design support and calibration processing; development of 3D vision functionality; visualization (development of a 3D visualization tool for scientific data analysis); 3D reconstructions from stereo image data during the mission; and support for 3D scientific exploitation to characterize the overall landscape geomorphology, processes, and the nature of the geologic record using the reconstructed 3D models. The developed processing framework PRoViP establishes an extensible framework for 3D vision processing in planetary robotic missions. Examples of processing products and capabilities are: digital terrain models, ortho images, 3D meshes, and occlusion, solar illumination, slope, roughness, and hazard maps. Another important processing capability is the fusion of rover- and orbiter-based images, with support for multiple missions and sensors (e.g. MSL Mastcam stereo processing). For 3D visualization a tool called PRo3D has been developed to analyze and directly interpret digital outcrop models. Stereo image products derived from Mars rover data can be rendered in PRo3D, enabling the user to zoom, rotate and translate the generated 3D outcrop models. Interpretations can be digitized directly onto the 3D surface, and simple measurements of the outcrop and sedimentary features...
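Derived products such as the slope maps listed above can be computed from a digital terrain model by finite differences. A minimal sketch, assuming a regular grid with known cell size (this is illustrative, not the PRoViP implementation):

```python
import math

def slope_map(dtm, cell_size):
    """Slope angle (degrees) per interior cell of a regular-grid DTM
    using central differences; border cells are returned as None."""
    rows, cols = len(dtm), len(dtm[0])
    out = [[None] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            dzdx = (dtm[i][j + 1] - dtm[i][j - 1]) / (2 * cell_size)
            dzdy = (dtm[i + 1][j] - dtm[i - 1][j]) / (2 * cell_size)
            out[i][j] = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    return out

# A plane rising 1 m per metre in x has a 45-degree slope everywhere.
ramp = [[float(j) for j in range(4)] for _ in range(4)]
```

The same central-difference gradients also feed roughness and hazard maps (e.g. by thresholding slope).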

  8. 3D vision accelerates laparoscopic proficiency and skills are transferable to 2D conditions

    DEFF Research Database (Denmark)

    Sørensen, Stine Maya Dreier; Konge, Lars; Bjerrum, Flemming

    2017-01-01

BACKGROUND: Laparoscopy is difficult to master, in part because surgeons operate in a three-dimensional (3D) space guided by two-dimensional (2D) images. This trial explores the effect of 3D vision during a laparoscopic training program, and examines whether it is possible to transfer skills... RESULTS: Mean training time was reduced in the intervention group (231 min versus 323 min; P = 0.012). There was no significant difference in the mean times to completion of the retention test (92 min versus 95 min; P = 0.85). CONCLUSION: 3D vision reduced time to proficiency on a virtual-reality laparoscopy simulator. Furthermore, skills learned with 3D vision can be transferred to 2D vision conditions. Clinicaltrials.gov (NCT02361463).

  9. New 3D vision of magnetic reconnection revealed

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

An international consortium led by astronomers from CAS and Peking University recently made the first satellite observation of the full three-dimensional (3D) geometry structure of magnetic reconnection, a process whereby the lines of a complex magnetic field break and reconnect to alter its structure drastically. Their work was published in the September issue of Nature Physics. Experts say that this pioneering discovery will help construct theoretical models of magnetic reconnection, a universal phenomenon in space related to star formation, solar explosions and the entry of solar wind energy into the near-Earth environment.

  10. Omnidirectional vision systems calibration, feature extraction and 3D information

    CERN Document Server

    Puig, Luis

    2013-01-01

    This work focuses on central catadioptric systems, from the early step of calibration to high-level tasks such as 3D information retrieval. The book opens with a thorough introduction to the sphere camera model, along with an analysis of the relation between this model and actual central catadioptric systems. Then, a new approach to calibrate any single-viewpoint catadioptric camera is described.  This is followed by an analysis of existing methods for calibrating central omnivision systems, and a detailed examination of hybrid two-view relations that combine images acquired with uncalibrated

  11. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss.

    Science.gov (United States)

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high-quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful, as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity, and compare early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss as documented and studied in several animal species and human patients will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research.

  12. Ground truth evaluation of computer vision based 3D reconstruction of synthesized and real plant images

    DEFF Research Database (Denmark)

    Nielsen, Michael; Andersen, Hans Jørgen; Slaughter, David

    2007-01-01

    There is an increasing interest in using 3D computer vision in precision agriculture. This calls for better quantitative evaluation and understanding of computer vision methods. This paper proposes a test framework using ray traced crop scenes that allows in-depth analysis of algorithm performance...

  13. How the venetian blind percept emerges from the laminar cortical dynamics of 3D vision

    National Research Council Canada - National Science Library

    Cao, Yongqiang; Grossberg, Stephen

    2014-01-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and to show how it is related to other well-known perceptual...

  14. Using Multi-Modal 3D Contours and Their Relations for Vision and Robotics

    DEFF Research Database (Denmark)

    Baseski, Emre; Pugeault, Nicolas; Kalkan, Sinan

    2010-01-01

In this work, we make use of 3D contours and relations between them (namely, coplanarity, cocolority, distance and angle) for four different applications in the area of computer vision and vision-based robotics. Our multi-modal contour representation covers both geometric and appearance information. We show the potential of reasoning with global entities in the context of visual scene analysis for driver assistance, depth prediction, robotic grasping and grasp learning. We argue that such 3D global reasoning processes complement widely-used 2D local approaches such as bag-of-features, since 3D...

  15. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    Science.gov (United States)

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540

  16. A Height Estimation Approach for Terrain Following Flights from Monocular Vision

    Directory of Open Access Journals (Sweden)

    Igor S. G. Campos

    2016-12-01

Full Text Available In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy.

  17. A Height Estimation Approach for Terrain Following Flights from Monocular Vision.

    Science.gov (United States)

    Campos, Igor S G; Nascimento, Erickson R; Freitas, Gustavo M; Chaimowicz, Luiz

    2016-12-06

    In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80 % for positives and 90 % for negatives, while the height estimation algorithm presented good accuracy.

  18. Acute Myeloid Leukemia Relapse Presenting as Complete Monocular Vision Loss due to Optic Nerve Involvement

    Directory of Open Access Journals (Sweden)

    Shyam A. Patel

    2016-01-01

    Full Text Available Acute myeloid leukemia (AML involvement of the central nervous system is relatively rare, and detection of leptomeningeal disease typically occurs only after a patient presents with neurological symptoms. The case herein describes a 48-year-old man with relapsed/refractory AML of the mixed lineage leukemia rearrangement subtype, who presents with monocular vision loss due to leukemic eye infiltration. MRI revealed right optic nerve sheath enhancement and restricted diffusion concerning for nerve ischemia and infarct from hypercellularity. Cerebrospinal fluid (CSF analysis showed a total WBC count of 81/mcl with 96% AML blasts. The onset and progression of visual loss were in concordance with rise in peripheral blood blast count. A low threshold for diagnosis of CSF involvement should be maintained in patients with hyperleukocytosis and high-risk cytogenetics so that prompt treatment with whole brain radiation and intrathecal chemotherapy can be delivered. This case suggests that the eye, as an immunoprivileged site, may serve as a sanctuary from which leukemic cells can resurge and contribute to relapsed disease in patients with high-risk cytogenetics.

  19. A Height Estimation Approach for Terrain Following Flights from Monocular Vision

    Science.gov (United States)

    Campos, Igor S. G.; Nascimento, Erickson R.; Freitas, Gustavo M.; Chaimowicz, Luiz

    2016-01-01

    In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy. PMID:27929424
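The reliability gate described above is a decision tree trained on optical-flow features. For illustration only, a hand-written stand-in for that classifier can accept a height estimate when the tracked flow magnitudes agree with each other (the feature choice and threshold here are assumptions, not the paper's learned tree):

```python
def flow_features(flows):
    """Mean and variance of per-feature optical-flow magnitudes."""
    n = len(flows)
    mean = sum(flows) / n
    var = sum((f - mean) ** 2 for f in flows) / n
    return mean, var

def height_is_trustworthy(flows, max_rel_spread=0.25):
    """Stand-in for the trained decision tree: accept the height
    estimate only when flow magnitudes agree (low relative spread)."""
    mean, var = flow_features(flows)
    return mean > 0 and (var ** 0.5) / mean <= max_rel_spread
```

Consistent flow (e.g. a flat field below the UAV) passes; wildly differing magnitudes, as caused by moving objects or bad tracks, are rejected.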

  20. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    Directory of Open Access Journals (Sweden)

    Tae-Jae Lee

    2016-03-01

Full Text Available This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
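The flat-floor geometry that makes IPM attractive for low-mounted cameras also yields the distance estimate directly: a level pinhole camera at height h sees a floor point imaged Δy pixels below the principal point at ground distance d = h·f/Δy. A minimal sketch of that relation (ignoring camera tilt and lens distortion, which the paper's full method would handle):

```python
def floor_distance(row, cy, focal_px, cam_height):
    """Ground distance to a floor pixel under a flat-floor model with a
    level (untilted) camera: d = h * f / (row - cy). Pixels at or above
    the horizon row cy do not intersect the floor."""
    dy = row - cy
    if dy <= 0:
        raise ValueError("pixel is at or above the horizon")
    return cam_height * focal_px / dy

# camera 0.3 m above the floor, f = 600 px: a pixel 90 rows below
# the principal point maps to 0.3 * 600 / 90 = 2.0 m
```

Applying this to the lowest obstacle pixel in each image column gives the shortest robot-to-obstacle distance the paper reports.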

  1. 3D Machine Vision and Additive Manufacturing: Concurrent Product and Process Development

    Science.gov (United States)

    Ilyas, Ismet P.

    2013-06-01

The manufacturing environment changes rapidly and turbulently. Digital manufacturing (DM) plays a significant role and is one of the key strategies in setting up vision and strategic planning toward knowledge-based manufacturing. An approach combining 3D machine vision (3D-MV) and Additive Manufacturing (AM) may finally be finding its niche in manufacturing. This paper briefly overviews the integration of 3D machine vision and AM in concurrent product and process development, the challenges and opportunities, and the implementation of 3D-MV and AM at POLMAN Bandung in accelerating product design and process development, and discusses direct deployment of this approach on a real case from our industrial partners, who regard it as an important strategic approach in research as well as product/prototype development. The strategic aspects and needs of this combined approach in research, design and development are the main concerns of the presentation.

  2. Impact of 3D vision on mental workload and laparoscopic performance in inexperienced subjects.

    Science.gov (United States)

    Gómez-Gómez, E; Carrasco-Valiente, J; Valero-Rosa, J; Campos-Hernández, J P; Anglada-Curado, F J; Carazo-Carazo, J L; Font-Ugalde, P; Requena-Tapia, M J

    2015-05-01

To assess the effect of vision in three dimensions (3D) versus two dimensions (2D) on mental workload and laparoscopic performance during simulation-based training. A prospective, randomized crossover study of students inexperienced in operative laparoscopy was conducted. Forty-six candidates executed five standardized exercises on a pelvitrainer with both vision systems (3D and 2D). Laparoscopic performance was assessed using the total time (in seconds) and the number of failed attempts. For workload assessment, the validated NASA-TLX questionnaire was administered. 3D vision improved performance, reducing the time (3D = 1006.08 ± 315.94 vs. 2D = 1309.17 ± 300.28; P < .001) and the total number of failed attempts (3D = .84 ± 1.26 vs. 2D = 1.86 ± 1.60; P < .001). For each exercise, 3D vision also showed better performance times: "transfer objects" (P = .001), "single knot" (P < .001), "clip and cut" (P < .05), and "needle guidance" (P < .001). In addition, according to the NASA-TLX results, less mental workload was experienced with 3D (P < .001). However, 3D vision was associated with greater visual impairment (P < .01) and headaches (P < .05). The incorporation of 3D systems into laparoscopic training programs would facilitate the acquisition of laparoscopic skills, because they reduce mental workload and improve the performance of inexperienced surgeons. However, some undesirable effects such as visual discomfort or headache were identified initially. Copyright © 2014 AEU. Published by Elsevier España, S.L.U. All rights reserved.
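The NASA-TLX instrument used above combines six subscale ratings (mental, physical and temporal demand, performance, effort, frustration) into one workload score; in the standard weighted scheme, each dimension's weight is the number of times it is chosen across 15 pairwise comparisons. A minimal sketch of that scoring step (this is the generic TLX formula, not the authors' analysis code):

```python
def nasa_tlx(ratings, weights):
    """Overall NASA-TLX workload score.

    ratings: six subscale scores on the 0-100 scale.
    weights: how often each dimension was picked across the 15
    pairwise comparisons; they must sum to 15.
    """
    assert len(ratings) == len(weights) == 6
    assert sum(weights) == 15
    return sum(r * w for r, w in zip(ratings, weights)) / 15
```

The unweighted ("raw TLX") variant simply averages the six ratings instead.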

  3. Vision processing for realtime 3-D data acquisition based on coded structured light.

    Science.gov (United States)

    Chen, S Y; Li, Y F; Zhang, Jianwei

    2008-02-01

    Structured light vision systems have been successfully used for accurate measurement of 3-D surfaces in computer vision. However, their applications are mainly limited to scanning stationary objects so far since tens of images have to be captured for recovering one 3-D scene. This paper presents an idea for real-time acquisition of 3-D surface data by a specially coded vision system. To achieve 3-D measurement for a dynamic scene, the data acquisition must be performed with only a single image. A principle of uniquely color-encoded pattern projection is proposed to design a color matrix for improving the reconstruction efficiency. The matrix is produced by a special code sequence and a number of state transitions. A color projector is controlled by a computer to generate the desired color patterns in the scene. The unique indexing of the light codes is crucial here for color projection since it is essential that each light grid be uniquely identified by incorporating local neighborhoods so that 3-D reconstruction can be performed with only local analysis of a single image. A scheme is presented to describe such a vision processing method for fast 3-D data acquisition. Practical experimental performance is provided to analyze the efficiency of the proposed methods.
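The uniquely color-encoded pattern above requires that every local window of projected stripes be identifiable from its neighborhood alone. A classical way to obtain that property along one dimension (shown as an illustrative stand-in, not the paper's specific matrix construction) is a De Bruijn sequence, in which every length-n window over k colors occurs exactly once:

```python
def de_bruijn(k, n):
    """De Bruijn sequence B(k, n): every length-n window over k symbols
    appears exactly once when the sequence is read cyclically.
    (Standard Lyndon-word construction.)"""
    a = [0] * k * n
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

# 3 colours, windows of 3: 27 stripes, every colour triple unique
```

Decoding then needs only a local analysis of any n consecutive stripes to recover their absolute index, which is what permits 3-D recovery from a single image.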

  4. Efficient Learning of VAM-Based Representation of 3D Targets and its Active Vision Applications.

    Science.gov (United States)

    Sharma, Rajeev; Srinivasa, Narayan

    1998-01-01

There has been considerable interest in using active vision for various applications, primarily because active vision can enhance machine vision capabilities by dynamically changing the camera parameters based on the content of the scene. An important issue in active vision is representing 3D targets in a manner that is invariant to changing camera configurations. This paper addresses this representation issue for a robotic active vision system. An efficient Vector Associative Map (VAM)-based learning scheme is proposed to learn a joint-based representation. Computer simulations and experiments are first performed to evaluate the effectiveness of this scheme using the University of Illinois Active Vision System (UIAVS). The invariance property of the learned representation is then exploited to develop several robotic applications, including detecting moving targets, saccade control, planning saccade sequences and controlling a robot manipulator.

  5. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    Science.gov (United States)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

To provide 3D environment information for a quadruped robot's autonomous navigation system while walking through rough terrain, a novel stereo-vision-based 3D terrain reconstruction method is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. From the matched edge pixel pairs, 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs 3D scenes quickly and efficiently.
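For a rectified stereo pair, the binocular imaging model referred to above reduces to the standard triangulation relation Z = f·B/d, with focal length f (pixels), baseline B, and disparity d between matched pixels. A minimal sketch (the values are illustrative, not the paper's calibration):

```python
def triangulate_depth(disparity_px, focal_px, baseline_m):
    """Depth of a matched point from a rectified stereo pair:
    Z = f * B / d. Larger disparity means a closer point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# f = 700 px, baseline 0.1 m, disparity 50 px -> depth 1.4 m
```

The lateral coordinates follow from the pinhole model as X = (u − cx)·Z/f and Y = (v − cy)·Z/f.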

  6. Using Multi-Modal 3D Contours and Their Relations for Vision and Robotics

    DEFF Research Database (Denmark)

    Baseski, Emre; Pugeault, Nicolas; Kalkan, Sinan

    2010-01-01

In this work, we make use of 3D contours and relations between them (namely, coplanarity, cocolority, distance and angle) for four different applications in the area of computer vision and vision-based robotics. Our multi-modal contour representation covers both geometric and appearance information. We show the potential of reasoning with global entities in the context of visual scene analysis for driver assistance, depth prediction, robotic grasping and grasp learning. We argue that such 3D global reasoning processes complement widely-used 2D local approaches such as bag-of-features, since 3D relations are invariant under camera transformations and 3D information can be directly linked to actions. We therefore stress the necessity of including both global and local features with different spatial dimensions within a representation. We also discuss the importance of an efficient use...

  7. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Directory of Open Access Journals (Sweden)

    Yi-Ting Chen

    2015-05-01

Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servoing is a technique for vision-based robot control that operates in the 3D workspace, uses real-time image processing to perform feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden of vision sensor feedback, we design an FPGA-based motion-vision integrated system that employs dedicated hardware circuits for vision processing and motion control functions. This research conducts a preliminary study exploring the integration of 3D vision and robot motion control system design based on a single field programmable gate array (FPGA) chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multiple-axis position feedback control.
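Of the functions listed above, velocity profile generation is commonly realized as a trapezoidal profile: accelerate to a cruise velocity, hold it, then decelerate, degenerating to a triangular profile on short moves. A software sketch of that idea (the FPGA version would be a pipelined hardware circuit; parameters here are illustrative):

```python
import math

def trapezoidal_profile(distance, v_max, accel, dt):
    """Velocity samples (one per dt) of a trapezoidal move over
    `distance`; falls back to a triangular profile when v_max
    cannot be reached within the available distance."""
    if distance <= 0:
        return []
    v_peak = min(v_max, math.sqrt(accel * distance))
    t_acc = v_peak / accel
    # accel + decel together cover v_peak * t_acc of the distance
    t_cruise = (distance - v_peak * t_acc) / v_peak
    total = 2 * t_acc + t_cruise
    samples, t = [], 0.0
    while t < total:
        if t < t_acc:
            samples.append(accel * t)
        elif t < t_acc + t_cruise:
            samples.append(v_peak)
        else:
            samples.append(max(0.0, v_peak - accel * (t - t_acc - t_cruise)))
        t += dt
    return samples
```

Integrating the samples (sum times dt) recovers the commanded distance up to discretization error, which is the property a position feedback loop relies on.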

  8. Obstacle avoidance using predictive vision based on a dynamic 3D world model

    Science.gov (United States)

    Benjamin, D. Paul; Lyons, Damian; Achtemichuk, Tom

    2006-10-01

    We have designed and implemented a fast predictive vision system for a mobile robot based on the principles of active vision. This vision system is part of a larger project to design a comprehensive cognitive architecture for mobile robotics. The vision system represents the robot's environment with a dynamic 3D world model based on a 3D gaming platform (Ogre3D). This world model contains a virtual copy of the robot and its environment, and outputs graphics showing what the virtual robot "sees" in the virtual world; this is what the real robot expects to see in the real world. The vision system compares this output in real time with the visual data. Any large discrepancies are flagged and sent to the robot's cognitive system, which constructs a plan for focusing on the discrepancies and resolving them, e.g. by updating the position of an object or by recognizing a new object. An object is recognized only once; thereafter its observed data are monitored for consistency with the predictions, greatly reducing the cost of scene understanding. We describe the implementation of this vision system and how the robot uses it to locate and avoid obstacles.
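
The prediction-versus-observation comparison described above can be sketched as a per-pixel discrepancy test. This is a minimal stand-in that assumes grayscale frames and a fixed threshold; the actual system compares rendered Ogre3D output with live camera data:

```python
import numpy as np

def discrepancy_regions(expected, observed, thresh=0.2, min_pixels=20):
    """Flag image regions where observation departs from prediction.

    expected / observed: float grayscale images in [0, 1] of equal shape.
    Returns a per-pixel discrepancy mask and a flag saying whether the
    discrepancy is large enough to notify the cognitive layer.
    """
    diff = np.abs(expected.astype(float) - observed.astype(float))
    mask = diff > thresh
    return mask, int(mask.sum()) >= min_pixels

# Synthetic example: the virtual world predicts an empty scene, but an
# unexpected bright object appears in the real image.
expected = np.zeros((32, 32))
observed = expected.copy()
observed[10:15, 10:15] = 1.0        # 25 unexpected bright pixels
mask, needs_update = discrepancy_regions(expected, observed)
```

When `needs_update` fires, the cognitive system would plan a focusing action (e.g. re-localize the object) and update the virtual world model so the prediction matches again.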

  9. 3D Perception of Biomimetic Eye Based on Motion Vision and Stereo Vision

    Institute of Scientific and Technical Information of China (English)

    王庆滨; 邹伟; 徐德; 张峰

    2015-01-01

    In order to overcome the narrow visual field of binocular vision and the low precision of monocular vision, a binocular biomimetic eye platform with 4 rotational degrees of freedom is designed based on the structural characteristics of human eyes, so that the robot can achieve human-like environment perception with binocular stereo vision and monocular motion vision. Initial location and parameter calibration of the biomimetic eye platform are accomplished based on the vision alignment strategy and hand-eye calibration. Methods of binocular stereo perception and monocular motion stereo perception are given based on dynamically changing external parameters. The former perceives 3D information through the two images obtained by the two cameras in real time and their relative pose; the latter perceives 3D information by synthesizing multiple images obtained by one camera, and their corresponding poses, at multiple adjacent moments. Experimental results show that the relative perception accuracy of binocular vision is 0.38% and that of monocular motion vision is 0.82%. In conclusion, the proposed method can broaden the field of view of binocular vision while ensuring the accuracy of both binocular perception and monocular motion perception.
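
The binocular perception step, recovering 3D points from two images and the cameras' relative pose, is commonly done by linear triangulation. A minimal sketch, with hypothetical projection matrices standing in for the platform's dynamically calibrated ones:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 projection matrices (intrinsics times extrinsics), which
    in the biomimetic-eye setting change as the cameras rotate.
    x1, x2: (u, v) observations of the same point in the two images.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # homogeneous solution of A X = 0
    return X[:3] / X[3]

# Two hypothetical cameras with identity intrinsics, baseline 0.1 m.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 2.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_hat = triangulate(P1, P2, x1, x2)
```

The monocular motion stereo case uses the same machinery, with P1 and P2 taken from one camera at two adjacent moments.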

  10. Student performance and appreciation using 3D vs. 2D vision in a virtual learning environment

    NARCIS (Netherlands)

    de Boer, I.R.; Wesselink, P.R.; Vervoorn, J.M.

    2016-01-01

    Aim The aim of this study was to investigate the differences in the performance and appreciation of students working in a virtual learning environment with two (2D)- or three (3D)-dimensional vision. Material and methods One hundred and twenty-four randomly divided first-year dental students

  12. Effects of brief daily periods of unrestricted vision during early monocular form deprivation on development of visual area 2.

    Science.gov (United States)

    Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Harwerth, Ronald S; Smith, Earl L; Chino, Yuzo M

    2011-09-14

    Providing brief daily periods of unrestricted vision during early monocular form deprivation reduces the depth of amblyopia. To gain insights into the neural basis of the beneficial effects of this treatment, the binocular and monocular response properties of neurons were quantitatively analyzed in visual area 2 (V2) of form-deprived macaque monkeys. Beginning at 3 weeks of age, infant monkeys were deprived of clear vision in one eye for 12 hours every day until 21 weeks of age. They received daily periods of unrestricted vision for 0, 1, 2, or 4 hours during the form-deprivation period. After behavioral testing to measure the depth of the resulting amblyopia, microelectrode-recording experiments were conducted in V2. The ocular dominance imbalance away from the affected eye was reduced in the experimental monkeys and was generally proportional to the reduction in the depth of amblyopia in individual monkeys. There were no interocular differences in the spatial properties of V2 neurons in any subject group. However, the binocular disparity sensitivity of V2 neurons was significantly higher and binocular suppression was lower in monkeys that had unrestricted vision. The decrease in ocular dominance imbalance in V2 was the neuronal change most closely associated with the observed reduction in the depth of amblyopia. The results suggest that the degree to which extrastriate neurons can maintain functional connections with the deprived eye (i.e., reducing undersampling for the affected eye) is the most significant factor associated with the beneficial effects of brief periods of unrestricted vision.

  13. Automatic 2D-to-3D video conversion by monocular depth cues fusion and utilizing human face landmarks

    Science.gov (United States)

    Fard, Mani B.; Bayazit, Ulug

    2013-12-01

    In this paper, we propose a hybrid 2D-to-3D video conversion system to recover the 3D structure of the scene. Depending on the scene characteristics, geometric or height depth information is adopted to form the initial depth map. This depth map is fused with color-based depth cues to construct the final depth map of the scene background. The depths of the foreground objects are estimated after their classification into human and non-human regions. Specifically, the depth of a non-human foreground object is directly calculated from the depth of the region behind it in the background. To acquire more accurate depth for the regions containing a human, the estimation of the distance between face landmarks is also taken into account. Finally, the computed depth information of the foreground regions is superimposed on the background depth map to generate the complete depth map of the scene, which is the main goal in the process of converting 2D video to 3D.
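
The final composition step, superimposing foreground depths onto the background depth map, can be sketched as a masked overwrite. The values below are toy data, and per-object scalar depths are an assumption:

```python
import numpy as np

def compose_depth(background_depth, fg_masks, fg_depths):
    """Superimpose per-object foreground depths onto a background map.

    background_depth: HxW array from the fused monocular cues.
    fg_masks / fg_depths: per-object boolean masks and scalar depths
    (a non-human object takes the depth of the background just behind it).
    """
    depth = background_depth.copy()
    for mask, d in zip(fg_masks, fg_depths):
        depth[mask] = d
    return depth

# Toy background: depth grows with image row (the "height" depth cue).
H, W = 24, 32
background = np.tile(np.linspace(10.0, 1.0, H)[:, None], (1, W))
person = np.zeros((H, W), bool)
person[8:20, 12:20] = True              # a human region from segmentation
depth_map = compose_depth(background, [person], [3.5])
```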

  14. 3D computer vision using Point Grey Research stereo vision cameras

    Institute of Scientific and Technical Information of China (English)

    Don Murray; Vlad Tucakov; WEI Xiong

    2008-01-01

    This paper provides an introduction to stereo vision systems designed by Point Grey Research and describes possible applications of these types of systems. The paper presents an overview of stereo vision techniques and outlines the critical aspects of putting together a system that can perform in the real world. It also provides an overview of how the cameras can be used to facilitate stereo research.

  15. Simultaneous perimeter measurement for 3D object with a binocular stereo vision measurement system

    Science.gov (United States)

    Peng, Zhao; Guo-Qiang, Ni

    2010-04-01

    A simultaneous measurement scheme for multiple three-dimensional (3D) objects' surface boundary perimeters is proposed. This scheme consists of three steps. First, a binocular stereo vision measurement system with two CCD cameras is devised to obtain the two images of the detected objects' 3D surface boundaries. Second, two geodesic active contours are applied to converge to the objects' contour edges simultaneously in the two CCD images to perform the stereo matching. Finally, the multiple spatial contours are reconstructed using the cubic B-spline curve interpolation. The true contour length of every spatial contour is computed as the true boundary perimeter of every 3D object. An experiment on the bent surface's perimeter measurement for the four 3D objects indicates that this scheme's measurement repetition error decreases to 0.7 mm.
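
The last step above, computing the true contour length of each reconstructed spatial contour, can be approximated by densely sampling the closed curve. The sketch below measures a dense closed polyline instead of the paper's cubic B-spline interpolation; for densely sampled input the two lengths converge:

```python
import numpy as np

def closed_contour_length(points):
    """Approximate the perimeter of a closed 3D contour.

    points: Nx3 array of reconstructed boundary points, in order around
    the contour. The loop is closed by appending the first point, and
    segment lengths are summed.
    """
    pts = np.vstack([points, points[:1]])          # close the loop
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return float(seg.sum())

# Unit circle in the z = 0 plane: the perimeter should approach 2*pi.
t = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
perimeter = closed_contour_length(circle)
```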

  16. Structure light telecentric stereoscopic vision 3D measurement system based on Scheimpflug condition

    Science.gov (United States)

    Mei, Qing; Gao, Jian; Lin, Hui; Chen, Yun; Yunbo, He; Wang, Wei; Zhang, Guanjin; Chen, Xin

    2016-11-01

    We designed a new three-dimensional (3D) measurement system for micro components: a structure light telecentric stereoscopic vision 3D measurement system based on the Scheimpflug condition. This system creatively combines the telecentric imaging model and the Scheimpflug condition on the basis of structure light stereoscopic vision, having benefits of a wide measurement range, high accuracy, fast speed, and low price. The system measurement range is 20 mm×13 mm×6 mm, the lateral resolution is 20 μm, and the practical vertical resolution reaches 2.6 μm, which is close to the theoretical value of 2 μm and well satisfies the 3D measurement needs of micro components such as semiconductor devices, photoelectron elements, and micro-electromechanical systems. In this paper, we first introduce the principle and structure of the system and then present the system calibration and 3D reconstruction. We then present an experiment that was performed for the 3D reconstruction of the surface topography of a wafer, followed by a discussion. Finally, the conclusions are presented.

  17. A Computer Vision Method for 3D Reconstruction of Curves-Marked Free-Form Surfaces

    Institute of Scientific and Technical Information of China (English)

    Xiong Hanwei; Zhang Xiangwei

    2001-01-01

    Visual methods are now broadly used in reverse engineering for 3D reconstruction. The traditional computer vision methods are feature-based, i.e., they require that the objects reveal features owing to geometry or textures. For textureless free-form surfaces, dense feature points are added artificially. In this paper, a new method is put forward combining computer vision with CAGD. The surface is subdivided into N-side Gregory patches using marked curves, and a stereo algorithm is used to reconstruct the curves. Then, the cross-boundary tangent vector is computed through reflectance analysis. At last, the whole surface can be reconstructed by joining these patches with G1 continuity.

  18. Calibration procedure for 3D surface measurements using stereo vision and laser stripe

    OpenAIRE

    Vilaça, João L.; Fonseca, Jaime C.; Pinho, A. C. Marques de

    2006-01-01

    This paper proposes a new stereo vision calibration procedure and laser stripe detection for 3D surface measurements. In this calibration procedure the laser plane is the one that matters: only one set of laser-coplanar calibration points is needed for camera calibration, and a dead-zone scan area is considered, since the digitalization arm is assembled in a 3-degree-of-freedom PC-based motion control machine with multiple scan paths. It also presents some algorithms for 3D surface tre...

  19. Angle extended linear MEMS scanning system for 3D laser vision sensor

    Science.gov (United States)

    Pang, Yajun; Zhang, Yinxin; Yang, Huaidong; Zhu, Pan; Gai, Ye; Zhao, Jian; Huang, Zhanhua

    2016-09-01

    The scanning system is often considered the most important part of a 3D laser vision sensor. In this paper, we propose a method for the optical system design of an angle-extended linear MEMS scanning system, which features a large scanning angle, a small beam divergence angle and a small spot size for a 3D laser vision sensor. The design principle and theoretical formulas are derived strictly. With the help of the software ZEMAX, a linear scanning optical system based on MEMS has been designed. Results show that the designed system can extend the scanning angle from ±8° to ±26.5° with a divergence angle smaller than 3.5 mrad, and the spot size is reduced by a factor of 4.545.

  20. Robust Stereo-Vision Based 3D Object Reconstruction for the Assistive Robot FRIEND

    Directory of Open Access Journals (Sweden)

    COJBASIC, Z.

    2011-11-01

    Full Text Available A key requirement of assistive robot vision is robust 3D object reconstruction in complex environments for reliable autonomous object manipulation. In this paper, the idea of achieving high robustness of a complete robot vision system against external influences such as variable illumination is presented, by including feedback control of the object segmentation in stereo images. The approach used is to change the segmentation parameters in closed loop so that object feature extraction is driven to a desired result. Reliable feature extraction is necessary to fully exploit a neuro-fuzzy classifier, which is the core of the proposed 2D object recognition method, the predecessor of 3D object reconstruction. Experimental results on the rehabilitation assistive robotic system FRIEND demonstrate the effectiveness of the proposed method.
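
The closed-loop idea, adjusting segmentation parameters until feature extraction reaches a desired result, can be illustrated with a simple bisection controller on a binarization threshold. This is an illustrative stand-in, not the paper's controller:

```python
import numpy as np

def segment_with_feedback(image, target_area, tol=0.05, iters=40):
    """Closed-loop choice of a binarization threshold.

    The threshold is adjusted by bisection until the segmented foreground
    area matches the expected object area, emulating feedback control of
    segmentation parameters under varying illumination.
    """
    lo, hi = float(image.min()), float(image.max())
    for _ in range(iters):
        thr = 0.5 * (lo + hi)
        area = int((image > thr).sum())
        if abs(area - target_area) <= tol * target_area:
            break
        if area > target_area:      # too much foreground -> raise threshold
            lo = thr
        else:                       # too little foreground -> lower it
            hi = thr
    return thr, image > thr

# Bright square on a dark background, with a global illumination offset.
img = np.full((40, 40), 0.3) + 0.1          # ambient light drift
img[5:25, 5:25] = 0.9                        # 400-pixel object
thr, mask = segment_with_feedback(img, target_area=400)
```

Because the threshold is chosen relative to the observed image rather than fixed in advance, the same loop keeps working when illumination shifts the whole intensity range.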

  1. 3D Vision Based Landing Control of a Small Scale Autonomous Helicopter

    OpenAIRE

    Zhenyu Yu; Kenzo Nonami; Jinok Shin; Demian Celestino

    2007-01-01

    Autonomous landing is a challenging but important task for Unmanned Aerial Vehicles (UAV) to achieve a high level of autonomy. The fundamental requirement for landing is the knowledge of the height above the ground, and a properly designed controller to govern the process. This paper presents our research results in the study of landing an autonomous helicopter. The above-the-ground height sensing is based on a 3D vision system. We have designed a simple plane-fitting method for e...

  2. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    Science.gov (United States)

    Celik, Koray

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity sensing, object avoidance, mapping, and path-planning mechanism to fly and navigate small to medium scale unmanned rotary-wing aircraft in an autonomous manner. The range measurement strategy is scalable, self-calibrating, indoor-outdoor capable, and has been biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), designed to assume operations in previously unknown, GPS-denied environments. It proposes novel electronics, aircraft, aircraft systems, and procedures and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgment. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Despite the emphasis on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  3. Robust object tracking techniques for vision-based 3D motion analysis applications

    Science.gov (United States)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications, including industry and science, virtual reality and movies, medicine and sports. For most applications, the reliability and accuracy of the obtained data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from 2 to 4 technical vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, is developed and tested. The evaluation results show high robustness and reliability of the algorithms for various motion analysis tasks in technical and biomechanical applications.

  4. For 3D laparoscopy: a step toward advanced surgical navigation: how to get maximum benefit from 3D vision.

    Science.gov (United States)

    Kunert, Wolfgang; Storz, Pirmin; Kirschniak, Andreas

    2013-02-01

    The authors are grateful for the interesting perspectives given by Buchs and colleagues in their letter to the editor entitled "3D Laparoscopy: A Step Toward Advanced Surgical Navigation." Shutter-based 3D video systems failed to become established in the operating room in the late 1990s. To strengthen the starting conditions of the new 3D technology using better monitors and high definition, the authors give suggestions for its practical use in the clinical routine. But first they list the characteristics of single-channeled and bichanneled 3D laparoscopes and describe stereoscopic terms such as "comfort zone," "stereoscopic window," and "near-point distance." The authors believe it would be helpful to have the 3D pioneers assemble and share their experiences with these suggestions. Although this letter discusses "laparoscopy," it would also be interesting to collect experiences from other surgical disciplines, especially when one is considering whether to opt for bi- or single-channeled optics.

  5. 3D Vision Based Landing Control of a Small Scale Autonomous Helicopter

    Directory of Open Access Journals (Sweden)

    Zhenyu Yu

    2008-11-01

    Full Text Available Autonomous landing is a challenging but important task for Unmanned Aerial Vehicles (UAV) to achieve a high level of autonomy. The fundamental requirement for landing is the knowledge of the height above the ground, and a properly designed controller to govern the process. This paper presents our research results in the study of landing an autonomous helicopter. The above-the-ground height sensing is based on a 3D vision system. We have designed a simple plane-fitting method for estimating the height over the ground. The method enables vibration-free measurement with the camera rigidly attached to the helicopter, without using a complicated gimbal or active vision mechanism. The estimated height is used by the landing control loop. Considering the ground effect during landing, we have proposed a two-stage landing procedure. Two controllers are designed for the two landing stages respectively. The sensing approach and control strategy have been verified in field flight tests and have demonstrated satisfactory performance.

  6. Vision-Based Long-Range 3D Tracking, applied to Underground Surveying Tasks

    Science.gov (United States)

    Mossel, Annette; Gerstweiler, Georg; Vonach, Emanuel; Kaufmann, Hannes; Chmelina, Klaus

    2014-04-01

    To address the need for highly automated positioning systems in underground construction, we present a long-range 3D tracking system based on infrared optical markers. It provides continuous 3D position estimation of static or kinematic targets with low latency over a tracking volume of 12 m x 8 m x 70 m (width x height x depth). Over the entire volume, relative 3D point accuracy with a maximal deviation ≤ 22 mm is ensured with possible target rotations of yaw, pitch = 0 - 45° and roll = 0 - 360°. No preliminary sighting of target(s) is necessary, since the system automatically locks onto a target without user intervention and autonomously starts tracking as soon as a target is within the view of the system. The proposed system needs a minimal hardware setup, consisting of two machine vision cameras and a standard workstation for data processing. This allows for quick installation with minimal disturbance of construction work. The data processing pipeline ensures camera calibration and tracking during ongoing underground activities. Tests in real underground scenarios prove the system's capability to act as a 3D position measurement platform for multiple underground tasks that require long range, low latency and high accuracy. Those tasks include the simultaneous tracking of personnel, machines or robots.

  7. Estimation of 3D reconstruction errors in a stereo-vision system

    Science.gov (United States)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure of manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze in particular the segmentation error due to localization errors for extracted edge points supposed to belong to the lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the shown experimental results.

  8. Cartographie 3D et localisation par vision monoculaire pour la navigation autonome d'un robot mobile

    OpenAIRE

    Royer, Eric

    2006-01-01

    This thesis presents the realization of a localization system for a mobile robot relying on monocular vision. The aim of this project is to be able to make a robot follow a path in autonomous navigation in an urban environment. First, the robot is driven manually. During this learning step, the on board camera records a video sequence. After an off-line processing step, an image taken with the same hardware allows to compute the pose of the robot in real-time. This localization can be used to...

  9. Incorporating polarization in stereo vision-based 3D perception of non-Lambertian scenes

    Science.gov (United States)

    Berger, Kai; Voorhies, Randolph; Matthies, Larry

    2016-05-01

    Surfaces with specular, non-Lambertian reflectance are common in urban areas. Robot perception systems for applications in urban environments need to function effectively in the presence of such materials; however, both passive and active 3-D perception systems have difficulties with them. In this paper, we develop an approach using a stereo pair of polarization cameras to improve passive 3-D perception of specular surfaces. We use a commercial stereo camera pair with rotatable polarization filters in front of each lens to capture images with multiple orientations of the polarization filter. From these images, we estimate the degree of linear polarization (DOLP) and the angle of polarization (AOP) at each pixel in at least one camera. The AOP constrains the corresponding surface normal in the scene to lie in the plane of the observed angle of polarization. We embody this constraint in an energy functional for a regularization-based stereo vision algorithm. This paper describes the theory of polarization needed for this approach, describes the new stereo vision algorithm, and presents results on synthetic and real images to evaluate performance.
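
The per-pixel DOLP and AOP estimates follow directly from the linear Stokes parameters. A minimal sketch, assuming four filter orientations at 0°, 45°, 90° and 135°:

```python
import numpy as np

def dolp_aop(i0, i45, i90, i135):
    """Degree and angle of linear polarization from four orientations.

    i0..i135: intensities measured with the polarizer at 0, 45, 90 and
    135 degrees. The linear Stokes parameters give DOLP and AOP per pixel.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)       # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)
    aop = 0.5 * np.arctan2(s2, s1)           # radians, in (-pi/2, pi/2]
    return dolp, aop

# Fully linearly polarized light at 30 degrees: I(a) follows Malus' law.
theta = np.deg2rad(30.0)
angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
i = [np.cos(theta - a) ** 2 for a in angles]
dolp, aop = dolp_aop(*i)
```

For fully polarized input the recovered DOLP is 1 and the AOP equals the polarization angle, which is the quantity that constrains the surface normal in the stereo energy functional.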

  10. A method of real-time detection for distant moving obstacles by monocular vision

    Science.gov (United States)

    Jia, Bao-zhi; Zhu, Ming

    2013-12-01

    In this paper, we propose an approach for the detection of distant moving obstacles such as cars and bicycles by a monocular camera, to cooperate with ultrasonic sensors under low-cost conditions. We aim at detecting distant obstacles that move toward our autonomous navigation car, in order to give an alarm and keep away from them. A frame-differencing method is applied to find obstacles after compensation of the camera's ego-motion. Meanwhile, each obstacle is separated from the others in an independent area and given a confidence level to indicate whether it is coming closer. The results on an open dataset and on our own autonomous navigation car have proved that the method is effective for real-time detection of distant moving obstacles.
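
The frame-differencing step after ego-motion compensation can be sketched as follows. A pure image translation stands in for the full ego-motion model, and all values are illustrative:

```python
import numpy as np

def moving_obstacle_mask(prev, curr, ego_shift, thresh=0.1):
    """Frame differencing after ego-motion compensation.

    ego_shift: (dy, dx) image translation induced by the camera's own
    motion between the two frames (a pure-translation stand-in for a
    full homography-based compensation). Pixels whose difference still
    exceeds `thresh` after compensation are candidate moving obstacles.
    """
    compensated = np.roll(prev, ego_shift, axis=(0, 1))
    diff = np.abs(curr.astype(float) - compensated.astype(float))
    return diff > thresh

# Camera pans 2 px; a distant object additionally moves on its own.
prev = np.zeros((20, 40)); prev[5:8, 10:13] = 1.0   # object at t-1
curr = np.zeros((20, 40)); curr[5:8, 16:19] = 1.0   # 2 px ego + 4 px own motion
mask = moving_obstacle_mask(prev, curr, ego_shift=(0, 2))
```

Static background cancels out after compensation, so only the residual motion of the object survives the threshold.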

  11. Fractographic classification in metallic materials by using 3D processing and computer vision techniques

    Directory of Open Access Journals (Sweden)

    Maria Ximena Bastidas-Rodríguez

    2016-09-01

    Full Text Available Failure analysis aims at collecting information about how and why a failure is produced. The first step in this process is a visual inspection of the fracture surface that reveals the features, marks, and texture which characterize each type of fracture. This is generally carried out by personnel with no experience who usually lack the knowledge to do it. This paper proposes a classification method for three kinds of fractures in crystalline materials: brittle, fatigue, and ductile. The method uses 3D vision and is expected to support failure analysis. The features used in this work were: (i) Haralick's features and (ii) the fractal dimension. These features were extracted from 3D images obtained with a Zeiss LSM 700 confocal laser scanning microscope. For the classification, we evaluated two classifiers: Artificial Neural Networks and Support Vector Machines. The performance evaluation was made by extracting four marginal relations from the confusion matrix: accuracy, sensitivity, specificity, and precision, plus three evaluation methods: the Receiver Operating Characteristic space, the Individual Classification Success Index, and Jaccard's coefficient. Although the classification percentage obtained by an expert is better than the one obtained with the algorithm, the algorithm achieves a classification percentage near or above 60% accuracy for the analyzed failure modes. The results presented here provide a good approach to address future research on texture analysis using 3D data.
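
The fractal dimension feature can be estimated from a binary fracture image by box counting. The record does not specify the estimator, so box counting is an assumption here, shown as a minimal sketch:

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the fractal dimension of a binary image by box counting.

    mask: square boolean array with a power-of-two side. Counts occupied
    boxes at dyadic scales and fits log(count) against log(1/size).
    """
    n = mask.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 1:
        # Count boxes of side s containing at least one foreground pixel.
        view = mask.reshape(n // s, s, n // s, s)
        occupied = int(view.any(axis=(1, 3)).sum())
        sizes.append(s)
        counts.append(max(occupied, 1))
        s //= 2
    slope = np.polyfit(np.log(1.0 / np.array(sizes, float)),
                       np.log(np.array(counts, float)), 1)[0]
    return float(slope)

# Sanity check: a filled square has dimension close to 2.
filled = np.ones((64, 64), dtype=bool)
dim = box_counting_dimension(filled)
```

A rough fracture contour would yield a non-integer dimension between 1 and 2, which is what makes this a discriminative texture feature alongside Haralick's.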

  12. 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach.

    Science.gov (United States)

    Vlaminck, Michiel; Luong, Hiep; Goeman, Werner; Philips, Wilfried

    2016-11-16

    In this paper, we propose a novel approach to obtain accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner, as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind, and hence, it does not make any assumption about the scene or about the sensor set-up. The main novelty of this work is that the proposed LiDAR mapping approach deals explicitly with the inhomogeneous density of point clouds produced by LiDAR scanners. To this end, we keep track of a global 3D map of the environment, which is continuously improved and refined by means of a surface reconstruction technique. Moreover, we perform surface analysis on consecutively generated point clouds in order to ensure a perfect alignment with the global 3D map. In order to cope with drift, the system incorporates loop closure by determining the pose error and propagating it back in the pose graph. Our algorithm was exhaustively tested on data captured at a conference building, a university campus and an industrial site of a chemical company. Experiments demonstrate that it is capable of generating highly accurate 3D maps in very challenging environments. We can state that the average distance of corresponding point pairs between the ground truth and the estimated point cloud approximates one centimeter for an area covering approximately 4000 m². To prove the genericity of the system, it was tested on the well-known KITTI vision benchmark. The results show that our approach competes with state-of-the-art methods without making any additional assumptions.

  14. RBF-Based Monocular Vision Navigation for Small Vehicles in Narrow Space below Maize Canopy

    Directory of Open Access Journals (Sweden)

    Lu Liu

    2016-06-01

    Full Text Available Maize is one of the major food crops in China. Traditionally, field operations are done by manual labor, where the farmers are threatened by the harsh environment and pesticides. On the other hand, it is difficult for large machinery to maneuver in the field due to limited space, particularly in the middle and late growth stages of maize. Unmanned, compact agricultural machines, therefore, are ideal for such field work. This paper describes a method of monocular visual recognition to navigate small vehicles between narrow crop rows. Edge detection and noise elimination were used for image segmentation to extract the stalks in the image. The stalk coordinates define passable boundaries, and a simplified radial basis function (RBF)-based algorithm was adapted for path planning to improve the fault tolerance of stalk coordinate extraction. The average image processing time, including network latency, is 220 ms. The average time consumption for path planning is 30 ms. The fast processing ensures a top speed of 2 m/s for our prototype vehicle. When operating at the normal speed (0.7 m/s), the rate of collision with stalks is under 6.4%. Additional simulations and field tests further proved the feasibility and fault tolerance of our method.
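
    The record's simplified RBF path planner is not spelled out, but the general idea can be sketched: Gaussian radial basis functions interpolate a smooth lateral-offset path through row-centre waypoints derived from the stalk boundaries. The waypoint values below are hypothetical:

```python
import numpy as np

def rbf_path(x_way, y_way, x_query, eps=2.0):
    """Interpolate a lateral-offset path y(x) through waypoints with
    Gaussian radial basis functions."""
    phi = lambda r: np.exp(-(eps * r) ** 2)
    A = phi(x_way[:, None] - x_way[None, :])      # interpolation matrix
    w = np.linalg.solve(A, y_way)                 # basis weights
    return phi(x_query[:, None] - x_way[None, :]) @ w

# Midpoints between detected left/right stalk boundaries (made-up values).
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])          # distance ahead of vehicle (m)
y = np.array([0.00, 0.05, -0.02, 0.03, 0.00])    # lateral offset of row centre (m)
path = rbf_path(x, y, np.linspace(0.0, 2.0, 9))  # dense, smooth steering path
```

Because the interpolant passes exactly through the waypoints while staying smooth between them, a few noisy or missing stalk coordinates perturb the path only locally.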

  15. 3D Vision Based Landing Control of a Small Scale Autonomous Helicopter

    Directory of Open Access Journals (Sweden)

    Zhenyu Yu

    2007-03-01

    Full Text Available Autonomous landing is a challenging but important task for Unmanned Aerial Vehicles (UAVs) to achieve a high level of autonomy. The fundamental requirements for landing are knowledge of the height above the ground and a properly designed controller to govern the process. This paper presents our research results in the study of landing an autonomous helicopter. The above-ground height sensing is based on a 3D vision system. We have designed a simple plane-fitting method for estimating the height over the ground. The method enables vibration-free measurement with the camera rigidly attached to the helicopter, without using a complicated gimbal or active vision mechanism. The estimated height is used by the landing control loop. Considering the ground effect during landing, we have proposed a two-stage landing procedure, with two controllers designed for the two landing stages respectively. The sensing approach and control strategy have been verified in field flight tests and have demonstrated satisfactory performance.
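
    The plane-fitting height estimate described above can be sketched with a least-squares (SVD) plane fit: the height over ground is the perpendicular distance from the camera origin to the fitted plane. A minimal numpy sketch with synthetic ground points; all values are hypothetical, not the paper's data:

```python
import numpy as np

def height_above_plane(points):
    """Fit a plane to 3D points by SVD; return the perpendicular distance
    from the sensor origin (0, 0, 0) to that plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                        # direction of least variance
    return abs(normal @ centroid)          # |n . (origin - centroid)| with unit n

# Synthetic ground points on the plane y = 1.8 m, with 5 mm sensor noise.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       1.8 + rng.normal(0, 0.005, 200),
                       rng.uniform(1, 3, 200)])
h = height_above_plane(pts)                # close to 1.8
```

Fitting a plane to the whole ground patch averages out per-point noise and vibration, which is why no gimbal is needed.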

  16. Localization of significant 3D objects in 2D images for generic vision tasks

    Science.gov (United States)

    Mokhtari, Marielle; Bergevin, Robert

    1995-10-01

    Computer vision experiments are not very often linked to practical applications but rather deal with typical laboratory experiments under controlled conditions. For instance, most object recognition experiments are based on specific models used under limitative constraints. Our work proposes a general framework for rapidly locating significant 3D objects in 2D static images of medium to high complexity, as a prerequisite step to recognition and interpretation when no a priori knowledge of the contents of the scene is assumed. In this paper, a definition of generic objects is proposed, covering the structures that are implied in the image. Under this framework, it must be possible to locate generic objects and assign a significance figure to each one from any image fed to the system. The most significant structure in a given image becomes the focus of interest of the system determining subsequent tasks (like subsequent robot moves, image acquisitions and processing). A survey of existing strategies for locating 3D objects in 2D images is first presented and our approach is defined relative to these strategies. Perceptual grouping paradigms leading to the structural organization of the components of an image are at the core of our approach.

  17. A flexible 3D vision system based on structured light for in-line product inspection

    Science.gov (United States)

    Skotheim, Øystein; Nygaard, Jens Olav; Thielemann, Jens; Vollset, Thor

    2008-02-01

    A flexible and highly configurable 3D vision system targeted for in-line product inspection is presented. The system includes a low cost 3D camera based on structured light and a set of flexible software tools that automate the measurement process. The specification of the measurement tasks is done in a first manual step. The user selects regions of the point cloud to analyze and specifies primitives to be characterized within these regions. After all measurement tasks have been specified, measurements can be carried out on successive parts automatically and without supervision. As a test case, a measurement cell for inspection of a V-shaped car component has been developed. The car component consists of two steel tubes attached to a central hub. Each of the tubes has an additional bushing clamped to its end. A measurement is performed in a few seconds and results in an ordered point cloud with 1.2 million points. The software is configured to fit cylinders to each of the steel tubes as well as to the inside of the bushings of the car part. The size, position and orientation of the fitted cylinders allow us to measure and verify a series of dimensions specified on the CAD drawing of the component with sub-millimetre accuracy.
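
    The constrained cylinder fits used by such inspection software are more involved, but the core idea can be sketched: for an elongated tube, the principal direction of the point-cloud region approximates the cylinder axis, and the mean distance from that axis gives the radius. A rough numpy illustration, not the product's algorithm:

```python
import numpy as np

def fit_cylinder(points):
    """Rough cylinder fit: axis from the principal direction of the cloud,
    radius as the mean distance of points from that axis.
    Assumes the cylinder is longer than it is wide."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    axis = vt[0]                                   # direction of largest extent
    rel = points - c
    radial = rel - np.outer(rel @ axis, axis)      # component orthogonal to axis
    radius = np.linalg.norm(radial, axis=1).mean()
    return c, axis, radius

# Points sampled on a tube of radius 10 mm and length 200 mm along z.
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 500)
pts = np.column_stack([10 * np.cos(t), 10 * np.sin(t), rng.uniform(0, 200, 500)])
_, axis, r = fit_cylinder(pts)
```

A production fit would refine this initial estimate with nonlinear least squares over all five cylinder parameters.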

  18. Monocular and binocular development in children with albinism, infantile nystagmus syndrome and normal vision

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.

    2013-01-01

    Background/aims: To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Methods: Interocular acuity differences and

  19. Monocular and binocular development in children with albinism, infantile nystagmus syndrome, and normal vision

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.

    2013-01-01

    Abstract Background/aims: To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Methods: Interocular acuity differe

  20. Monocular and binocular development in children with albinism, infantile nystagmus syndrome and normal vision

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.

    2013-01-01

    Background/aims: To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Methods: Interocular acuity differences and

  1. Monocular and binocular development in children with albinism, infantile nystagmus syndrome, and normal vision

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.

    2013-01-01

    Abstract Background/aims: To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Methods: Interocular acuity

  2. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Directory of Open Access Journals (Sweden)

    Shanis Barnard

    Full Text Available Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is

  3. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Science.gov (United States)

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  4. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    Science.gov (United States)

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non
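
    The unsupervised clustering of movement patterns described above can be illustrated with a plain k-means over hypothetical per-window movement features; the actual software uses structured machine-learning frameworks, so this is only a toy sketch:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means with a naive deterministic init (k-means++ is better
    in practice); empty clusters are not handled in this sketch."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Hypothetical per-window features: (mean speed, path length, posture height).
rng = np.random.default_rng(3)
resting = rng.normal([0.05, 0.2, 0.2], 0.02, size=(50, 3))
pacing = rng.normal([0.8, 6.0, 0.45], 0.05, size=(50, 3))
labels, _ = kmeans(np.vstack([resting, pacing]), k=2)
```

Clusters discovered this way can be labelled afterwards ("resting", "pacing"), which is exactly the "no pre-set ethogram" property the abstract highlights.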

  5. Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision

    Science.gov (United States)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-01-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
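
    Planar-target detection of this kind rests on estimating a homography from tracked feature correspondences; a decomposition step then recovers the approach pose. A minimal direct-linear-transform (DLT) homography estimate in numpy, verified on a pure image-plane translation; this is a textbook sketch, not the authors' implementation:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from >= 4 point
    pairs with the direct linear transform (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)       # null vector of the stacked constraints
    return H / H[2, 2]

# A pure translation of the image plane is itself a homography.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.3, 0.7]])
dst = src + np.array([2.0, -1.0])
H = homography_dlt(src, dst)
```

Given camera intrinsics, the estimated H can then be decomposed into rotation, translation and plane normal (e.g., OpenCV's decomposeHomographyMat) to produce the approach waypoints.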

  6. A Re-Evaluation of Achromatic Spatiotemporal Vision: Nonoriented Filters are Monocular, they Adapt and Can be Used for Decision-Making at High Flicker Speeds

    Directory of Open Access Journals (Sweden)

    Tim S. Meese

    2011-05-01

    Full Text Available Masking, adaptation, and summation paradigms have been used to investigate the characteristics of early spatiotemporal vision. Each has been taken to provide evidence for (i) oriented and (ii) nonoriented spatial filtering mechanisms. However, subsequent findings suggest that the evidence for nonoriented mechanisms has been misinterpreted: possibly, those experiments revealed the characteristics of suppression (e.g., gain control), not excitation, or merely the isotropic subunits of the oriented detecting mechanisms. To shed light on this, we used all three paradigms to focus on the “high-speed” corner of spatiotemporal vision (low spatial frequency, high temporal frequency), where cross-oriented achromatic effects are greatest. We used flickering Gabor patches as targets and a 2IFC procedure for monocular, binocular and dichoptic stimulus presentations. To account for our results we devised a simple model involving an isotropic monocular filter-stage feeding orientation-tuned binocular filters. Both filter stages are adaptable and their outputs are available to the decision-stage following nonlinear contrast transduction. However, the monocular isotropic filters adapt only to high-speed stimuli (consistent with a magnocellular sub-cortical substrate) and benefit decision-making only for high-speed stimuli. According to this model, the visual processes revealed by masking, adaptation and summation are related but not identical.

  7. Optic flow-based vision system for autonomous 3D localization and control of small aerial vehicles

    OpenAIRE

    Kendoul, Farid; Fantoni, Isabelle; Nonami, Kenzo

    2009-01-01

    International audience; The problem considered in this paper involves the design of a vision-based autopilot for small and micro Unmanned Aerial Vehicles (UAVs). The proposed autopilot is based on an optic flow-based vision system for autonomous localization and scene mapping, and a nonlinear control system for flight control and guidance. This paper focusses on the development of a real-time 3D vision algorithm for estimating optic flow, aircraft self-motion and depth map, using a low-resolu...
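
    The optic-flow front end of such an autopilot can be illustrated with single-patch Lucas-Kanade: solve, in least squares, the brightness-constancy equation over a patch. A numpy sketch on a synthetic image pair with known flow; this is a simplification of the paper's real-time 3D algorithm, not its implementation:

```python
import numpy as np

def lk_flow(I1, I2):
    """Single-patch Lucas-Kanade: least-squares (u, v) satisfying the
    brightness-constancy equation Ix*u + Iy*v + It = 0 over the patch."""
    Iy, Ix = np.gradient(I1)          # np.gradient returns d/d(row), d/d(col)
    It = I2 - I1
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    flow, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return flow                        # (u, v) in pixels

# Synthetic pair: the whole scene shifts by (0.6, -0.4) pixels.
X, Y = np.meshgrid(np.arange(64.0), np.arange(64.0))
I1 = np.sin(0.3 * X) + np.cos(0.2 * Y)
I2 = np.sin(0.3 * (X - 0.6)) + np.cos(0.2 * (Y + 0.4))
u, v = lk_flow(I1, I2)
```

Computed over a grid of patches, such flow vectors are the raw measurements from which self-motion and a depth map are then estimated.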

  8. Monocular vision for intelligent wheelchair indoor navigation based on natural landmark matching

    Science.gov (United States)

    Xu, Xiaodong; Luo, Yuan; Kong, Weixi

    2010-08-01

    This paper presents a real-time navigation system in a behavior-based manner. We show that autonomous navigation is possible in different rooms with the use of a single camera and natural landmarks. First, the intelligent wheelchair is manually guided along a path passing through different rooms, and a video sequence is recorded with a front-facing camera. A 3D structure map is then obtained from this learning sequence by computing the natural landmarks. Finally, the intelligent wheelchair uses this map to compute its localization, and it follows the learned path, or a slightly different one, to achieve real-time navigation. Experimental results indicate that this method is effective even when the viewpoint and scale are changed.
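
    Matching observed features against stored natural landmarks is typically done with nearest-neighbour descriptor search plus a ratio test to reject ambiguous matches; the paper's exact matcher is not specified, so this is a generic sketch with synthetic descriptors:

```python
import numpy as np

def match_landmarks(query, mapped, ratio=0.7):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.
    Returns (query_index, map_index) pairs that match unambiguously."""
    matches = []
    for i, q in enumerate(query):
        d = np.linalg.norm(mapped - q, axis=1)
        j, j2 = np.argsort(d)[:2]
        if d[j] < ratio * d[j2]:       # best match clearly beats runner-up
            matches.append((i, int(j)))
    return matches

# Synthetic map of 20 landmark descriptors; the query view re-observes
# landmarks 3, 7 and 11 with a little noise.
rng = np.random.default_rng(4)
mapped = rng.normal(size=(20, 32))
query = mapped[[3, 7, 11]] + rng.normal(0, 0.05, size=(3, 32))
found = match_landmarks(query, mapped)
```

The surviving matches feed the pose computation that localizes the wheelchair against the learned 3D structure map.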

  9. UAV and Computer Vision in 3D Modeling of Cultural Heritage in Southern Italy

    Science.gov (United States)

    Barrile, Vincenzo; Gelsomino, Vincenzo; Bilotta, Giuliana

    2017-08-01

    On the Waterfront Italo Falcomatà of Reggio Calabria one can admire the most extensive surviving stretch of the walls of the Hellenistic period of the ancient city of Rhegion. The so-called Greek Walls are one of the most significant and visible traces of the past linked to the culture of Ancient Greece in the territory of Reggio Calabria. Over the years, this stretch of wall has always formed part of the outer fortifications; up to the reconstruction of Reggio after the earthquake of 1783 it was restored countless times, to cope with degradation over time and to adapt to increasingly innovative and sophisticated siege techniques. It has been the subject of several historical studies, of studies of its construction techniques, and of maintenance and restoration work. This note describes the methodology for the implementation of a three-dimensional model of the Greek Walls carried out by the Geomatics Laboratory of the DICEAM Department of the University “Mediterranea” of Reggio Calabria. The 3D modeling is based on imaging techniques, such as digital photogrammetry and computer vision, using a drone. The acquired digital images were then processed using the commercial software Agisoft PhotoScan. The results demonstrate the value of this technique in the field of cultural heritage, as an attractive alternative to more expensive and demanding techniques such as laser scanning.

  10. Development of a stereo vision measurement system for a 3D three-axial pneumatic parallel mechanism robot arm.

    Science.gov (United States)

    Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun

    2011-01-01

    In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
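
    The final stereo-triangulation step can be sketched with the standard linear (DLT) method: each view contributes two rows to a homogeneous system whose null vector is the 3D point. A numpy illustration with an assumed rectified rig; the focal length and baseline below are made up, not the paper's values:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v)."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                          # homogeneous 3D point
    return X[:3] / X[3]

# Assumed rectified rig: focal length 500 px, baseline 0.1 m.
K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 1.5])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With rectified images, x1 and x2 share a row and differ only in column (the disparity found by the SAD search), which is exactly what this triangulation consumes.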

  11. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    Science.gov (United States)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187

  12. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    Science.gov (United States)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.
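
    The temporal-coincidence principle that such networks exploit can be caricatured in a few lines: two events are candidate stereo correspondences when they occur on the same retina row at nearly the same time. This toy matcher is our illustration of the principle, not the spiking model itself:

```python
def coincidence_match(left, right, dt=1e-3):
    """Match left/right events (t, row, col) that share a row and are nearly
    simultaneous; disparity = left col - right col."""
    matches = []
    for tl, rl, cl in left:
        for tr, rr, cr in right:
            if rl == rr and abs(tl - tr) < dt:
                matches.append((rl, cl - cr))
    return matches

# Events from one moving edge at disparity 5, plus an unrelated event pair.
left = [(0.0100, 12, 40), (0.0200, 30, 7)]
right = [(0.0101, 12, 35), (0.5000, 30, 99)]
pairs = coincidence_match(left, right)
```

In the neuromorphic implementation, this coincidence detection is performed by spiking neurons whose membrane dynamics realize the time window, with lateral connections enforcing uniqueness and continuity constraints.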

  13. Precision calibration method for binocular vision measurement systems based on arbitrary translations and 3D-connection information

    Science.gov (United States)

    Yang, Jinghao; Jia, Zhenyuan; Liu, Wei; Fan, Chaonan; Xu, Pengtao; Wang, Fuji; Liu, Yang

    2016-10-01

    Binocular vision systems play an important role in computer vision, and high-precision system calibration is a necessary and indispensable process. In this paper, an improved calibration method for binocular stereo vision measurement systems based on arbitrary translations and 3D-connection information is proposed. First, a new method for calibrating the intrinsic parameters of binocular vision system based on two translations with an arbitrary angle difference is presented, which reduces the effect of the deviation of the motion actuator on calibration accuracy. This method is simpler and more accurate than existing active-vision calibration methods and can provide a better initial value for the determination of extrinsic parameters. Second, a 3D-connection calibration and optimization method is developed that links the information of the calibration target in different positions, further improving the accuracy of the system calibration. Calibration experiments show that the calibration error can be reduced to 0.09%, outperforming traditional methods for the experiments of this study.
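
    The paper's translation-based intrinsic calibration is not reproduced here, but the extrinsic part of binocular calibration (the rigid transform between the two camera frames) is classically estimated from corresponding 3D points with the Kabsch/Procrustes method, sketched below on synthetic data:

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares R, t such that B ≈ A @ R.T + t (Kabsch algorithm)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

# Synthetic calibration points seen in both camera frames:
# frame B is frame A rotated 90° about z and translated.
rng = np.random.default_rng(2)
A = rng.normal(size=(30, 3))
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
t_true = np.array([0.2, -0.1, 0.05])
B = A @ R_true.T + t_true
R, t = rigid_transform(A, B)
```

The determinant correction guards against the degenerate reflection solution; without it, noisy or near-planar point sets can yield det(R) = -1.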

  14. Evaluation of Binocular Vision Therapy Efficacy by 3D Video-Oculography Measurement of Binocular Alignment and Motility

    OpenAIRE

    Laria Ochaíta, Carlos; Piñero Llorens, David Pablo

    2013-01-01

    Objective: To evaluate two cases of intermittent exotropia (IX(T)) treated by vision therapy the efficacy of the treatment by complementing the clinical examination with a 3-D video-oculography to register and to evidence the potential applicability of this technology for such purpose. Methods: We report the binocular alignment changes occurring after vision therapy in a woman of 36 years with an IX(T) of 25 prism diopters (Δ) at far and 18 Δ at near and a child of 10 years with 8 Δ of IX(T) ...

  15. 3D VISION-BASED DIETARY INSPECTION FOR THE CENTRAL KITCHEN AUTOMATION

    National Research Council Canada - National Science Library

    Yue-Min Jiang; Ho-Hsin Lee; Cheng-Chang Lien; Chun-Feng Tai; PiChun Chu; Ting-Wei Yang

    2014-01-01

    ... In the proposed system, the meal box is first detected and located automatically with the vision-based method, and then all the food ingredients are identified using color and LBP-HF texture features ...
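
    LBP-HF builds rotation-invariant Fourier features on top of local-binary-pattern histograms; the underlying LBP histogram itself is compact enough to sketch. A basic 8-neighbour variant in numpy, assuming a grayscale patch at least 3x3 pixels:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary pattern histogram (256 bins).
    Each pixel's code sets one bit per neighbour that is >= the centre."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint16)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint16) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()

# A flat patch yields only the all-ones code (every neighbour >= centre).
flat = np.full((8, 8), 7, dtype=np.uint8)
h = lbp_histogram(flat)
```

In practice the uniform-pattern variant (skimage.feature.local_binary_pattern) is preferred, and LBP-HF then takes the DFT magnitudes of the rotation orbits of these histogram bins.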

  16. Flight-appropriate 3D Terrain-rendering Toolkit for Synthetic Vision Project

    Data.gov (United States)

    National Aeronautics and Space Administration — TerraMetrics proposes an SBIR Phase I R/R&D effort to develop a key 3D terrain-rendering technology that provides the basis for successful commercial deployment...

  17. Flight-appropriate 3D Terrain-rendering Toolkit for Synthetic Vision Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The TerraBlocksTM 3D terrain data format and terrain-block-rendering methodology provides an enabling basis for successful commercial deployment of...

  18. Zero Calibration of Delta Robot Based on Monocular Vision

    Institute of Scientific and Technical Information of China (English)

    孙月海; 王兰; 梅江平; 张文昌; 刘艺

    2013-01-01

    To improve the precision of lower-mobility, high-speed pick-and-place parallel robots in practical engineering, a fast calibration approach based on vision metrology is proposed in this paper. Taking the Delta robot as an example, a zero-point error model was established through system analysis and reasonable simplification of the mechanism. A zero-error identification model using monocular vision was then constructed on the basis of planar measurement: with the mobile platform moving in a horizontal plane, the zero errors can be identified by measuring only the x- and y-direction position errors of the end-effector with a monocular camera. Error compensation is realized by modifying the ideal zero-point position of the system. Calibration experiments show that the method is simple, effective and highly practical.
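
    The identification step can be sketched as linear least squares: small joint zero offsets map to end-effector x/y errors through the position Jacobian, so stacking measurements from several poses lets one solve for the offsets. All Jacobian and offset values below are hypothetical, not the paper's model:

```python
import numpy as np

# Hypothetical 2x3 position Jacobians (d[x, y] / d[q1, q2, q3]) at four poses,
# stacked so that  e = J @ dq0  relates measured xy errors to zero offsets.
J_poses = [np.array([[0.30, -0.10, 0.05], [0.02, 0.25, -0.12]]),
           np.array([[0.10, 0.22, -0.08], [-0.15, 0.04, 0.20]]),
           np.array([[-0.05, 0.12, 0.28], [0.21, -0.09, 0.03]]),
           np.array([[0.18, -0.20, 0.11], [0.07, 0.16, -0.25]])]
dq0_true = np.array([0.002, -0.004, 0.001])   # unknown zero offsets (rad)
J = np.vstack(J_poses)                         # (8, 3) stacked Jacobian
e = J @ dq0_true                               # xy errors seen by the camera (m)
dq0, *_ = np.linalg.lstsq(J, e, rcond=None)   # identified offsets
```

Subtracting the identified offsets from the commanded home position is the compensation step the abstract describes.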

  19. Surface formation and depth in monocular scene perception.

    Science.gov (United States)

    Albert, M K

    1999-01-01

    The visual perception of monocular stimuli perceived as 3-D objects has received considerable attention from researchers in human and machine vision. However, most previous research has focused on how individual 3-D objects are perceived. Here this is extended to a study of how the structure of 3-D scenes containing multiple, possibly disconnected objects and features is perceived. Da Vinci stereopsis, stereo capture, and other surface formation and interpolation phenomena in stereopsis and structure-from-motion suggest that small features having ambiguous depth may be assigned depth by interpolation with features having unambiguous depth. I investigated whether vision may use similar mechanisms to assign relative depth to multiple objects and features in sparse monocular images, such as line drawings, especially when other depth cues are absent. I propose that vision tends to organize disconnected objects and features into common surfaces to construct 3-D-scene interpretations. Interpolations that are too weak to generate a visible surface percept may still be strong enough to assign relative depth to objects within a scene. When there exists more than one possible surface interpolation in a scene, the visual system's preference for one interpolation over another seems to be influenced by a number of factors, including: (i) proximity, (ii) smoothness, (iii) a preference for roughly frontoparallel surfaces and 'ground' surfaces, (iv) attention and fixation, and (v) higher-level factors. I present a variety of demonstrations and an experiment to support this surface-formation hypothesis.

  20. COST-EFFECTIVE STEREO VISION SYSTEM FOR MOBILE ROBOT NAVIGATION AND 3D MAP RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Arjun B Krishnan

    2014-07-01

    Full Text Available The key component of a mobile robot system is the ability to localize itself accurately in an unknown environment and simultaneously build a map of that environment. The majority of existing navigation systems are based on laser range finders, sonar sensors or artificial landmarks. Navigation systems using stereo vision are a rapidly developing technique in the field of autonomous mobile robots, but their high implementation cost makes them less attractive than conventional approaches for building small-scale autonomous robots. This paper describes an experimental approach to building a cost-effective stereo vision system for autonomous mobile robots that avoid obstacles and navigate through indoor environments. Both the mechanical and the programming aspects of the stereo vision system are documented in this paper. The stereo vision system, together with ultrasound sensors, was implemented on the mobile robot, which successfully navigated through different types of cluttered environments with static and dynamic obstacles. The robot was able to create two-dimensional topological maps of unknown environments using the sensor data, and a three-dimensional model of the same environments using the stereo vision system.

  1. Bayesian depth estimation from monocular natural images.

    Science.gov (United States)

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2017-05-01

    Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.
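The final stage described above, a "simple Bayesian predictor" over feature/depth statistics, can be sketched in miniature. This is a one-component stand-in for the paper's multivariate Gaussian mixture: if a depth-sensitive feature and depth are modeled as jointly Gaussian, the minimum-mean-square-error depth estimate is the conditional mean. All data below are synthetic, not the paper's NSS features.

```python
import numpy as np

# Synthetic training set: depth d, and one depth-sensitive image feature f
# correlated with d (a stand-in for a natural-scene-statistics feature).
rng = np.random.default_rng(0)
d = rng.uniform(1.0, 10.0, 500)
f = 2.0 * d + rng.normal(0.0, 0.5, 500)

mu = np.array([f.mean(), d.mean()])
cov = np.cov(np.vstack([f, d]))          # 2x2 joint (feature, depth) covariance

def predict_depth(f_new):
    """Conditional mean of depth given the feature (Gaussian MMSE estimate)."""
    return mu[1] + cov[1, 0] / cov[0, 0] * (f_new - mu[0])

print(round(predict_depth(2.0 * 5.0), 1))   # feature value consistent with depth ~5
```

A full implementation would replace the single joint Gaussian with a mixture fitted over a dictionary of local depth patterns, but the prediction step has the same conditional-expectation form.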

  2. Analysis of 3-D images of dental imprints using computer vision

    Science.gov (United States)

    Aubin, Michele; Cote, Jean; Laurendeau, Denis; Poussart, Denis

    1992-05-01

This paper addresses two important aspects of dental analysis: (1) location and (2) identification of the types of teeth by means of 3-D image acquisition and segmentation. The 3-D images of both maxillaries are acquired using a wax wafer as support. The interstices between teeth are detected by non-linear filtering of the 3-D and grey-level data. Two operators are presented: one for the detection of the interstices between incisors, canines, and premolars and one for those between molars. Teeth are then identified by mapping the imprint under analysis on the computer model of an 'ideal' imprint. For the mapping to be valid, a set of three reference points is detected on the imprint. Then, the points are put in correspondence with similar points on the model. Two such points are chosen based on a least-squares fit of a second-order polynomial of the 3-D data in the area of canines. This area is of particular interest since the canines show a very characteristic shape and are easily detected on the imprint. The mapping technique, as well as the pre-processing of the 3-D profiles, is described in detail in the paper. Experimental results are presented for different imprints.
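The reference-point step above rests on a least-squares fit of a second-order polynomial surface to 3-D data. A minimal sketch of that fit, with synthetic points standing in for the canine-region data:

```python
import numpy as np

# Fit z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 to noisy 3-D samples
# by linear least squares. Coefficients and points are illustrative only.
rng = np.random.default_rng(1)
x, y = rng.uniform(-1, 1, (2, 200))
true = np.array([0.5, 0.1, -0.2, 1.0, 0.3, 0.8])   # ground-truth coefficients

A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
z = A @ true + rng.normal(0, 0.01, x.size)          # noisy surface heights

coef, *_ = np.linalg.lstsq(A, z, rcond=None)        # recovered coefficients
print(np.round(coef, 2))
```

The fitted polynomial can then be searched for the characteristic shape extrema that mark the reference points.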

  3. Capturing age-related changes in functional contrast sensitivity with decreasing light levels in monocular and binocular vision

    OpenAIRE

    Gillespie-Gallery, H.; Konstantakopoulou, E.; HARLOW, J.A.; Barbur, J. L.

    2013-01-01

    Purpose: It is challenging to separate the effects of normal aging of the retina and visual pathways independently from optical factors, decreased retinal illuminance and early stage disease. This study determined limits to describe the effect of light level on normal, age-related changes in monocular and binocular functional contrast sensitivity. Methods: 95 participants aged 20 to 85 were recruited. Contrast thresholds for correct orientation discrimination of the gap in a Landolt C opt...

  4. Target detect system in 3D using vision apply on plant reproduction by tissue culture

    Science.gov (United States)

    Vazquez Rueda, Martin G.; Hahn, Federico

    2001-03-01

This paper presents preliminary results for a three-dimensional system that uses machine vision to manipulate plants in a tissue culture process. The system estimates the position of a plant in the work area: it first calculates the position and sends the information to the mechanical system, then recalculates the position and, if necessary, repositions the mechanical system, using a neural network to improve the localization of the plant. The system relies on vision alone to sense position, with a neural-network control loop to detect the target and position the mechanical system; the results are compared with those of an open-loop system.

  5. Making Things See 3D vision with Kinect, Processing, Arduino, and MakerBot

    CERN Document Server

    Borenstein, Greg

    2012-01-01

    This detailed, hands-on guide provides the technical and conceptual information you need to build cool applications with Microsoft's Kinect, the amazing motion-sensing device that enables computers to see. Through half a dozen meaty projects, you'll learn how to create gestural interfaces for software, use motion capture for easy 3D character animation, 3D scanning for custom fabrication, and many other applications. Perfect for hobbyists, makers, artists, and gamers, Making Things See shows you how to build every project with inexpensive off-the-shelf components, including the open source P

  6. Stereo Vision and 3D Reconstruction on a Distributed Memory System

    NARCIS (Netherlands)

    Kuijpers, N.H.L.; Paar, G.; Lukkien, J.J.

    1996-01-01

    An important research topic in image processing is stereo vision. The objective is to compute a 3-dimensional representation of some scenery from two 2-dimensional digital images. Constructing a 3-dimensional representation involves finding pairs of pixels from the two images which correspond to the

  8. Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation

    Science.gov (United States)

    2015-06-01

The pose of the 3D structure is estimated using a dual quaternion method [19].

  9. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    Science.gov (United States)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  10. Recognition of Multifunctional Coded Target in Single-camera Mobile 3D-vision Coordination Measurement

    Institute of Scientific and Technical Information of China (English)

    XU Zhi-hua; XIA Ling-li; YU Zhi-jing

    2009-01-01

Single-camera mobile-vision coordinate measurement is one of the primary methods of 3D coordinate vision measurement, and the coded target plays an important role in this system. A multifunctional coded target and its recognition algorithm are developed, which can realize automatic matching of feature points, calculation of the camera's initial exterior orientation, and the space scale factor constraint in the measurement system. The uniqueness and scalability of the coding are guaranteed by the rational arrangement of code bits. The recognition of coded targets is realized by a cross-ratio invariance restriction, space coordinate transformation of feature points based on a spatial pose estimation algorithm, recognition of code bits, and computation of coding values. The experimental results demonstrate the uniqueness of the coding form and the reliability of recognition.
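The cross-ratio invariance restriction mentioned above exploits the fact that the cross-ratio of four collinear points is preserved under perspective projection. A minimal check of that invariance, using an arbitrary hypothetical 1-D projective map:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points, invariant under projective maps."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

pts = np.array([0.0, 1.0, 3.0, 7.0])             # positions along a target line

# Arbitrary 1-D homography x -> (2x + 1) / (0.3x + 4), standing in for the
# camera's perspective projection of the line.
proj = (2 * pts + 1) / (0.3 * pts + 4)

print(round(cross_ratio(*pts), 4), round(cross_ratio(*proj), 4))  # both 1.2857
```

Because the value survives projection, four marks with a known cross-ratio can be recognized in the image regardless of camera pose.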

  11. Towards a Vision Algorithm Compiler for Recognition of Partially Occluded 3-D Objects

    Science.gov (United States)

    1992-11-20


  12. 3D Vision by Using Calibration Pattern with Inertial Sensor and RBF Neural Networks.

    Science.gov (United States)

    Beṣdok, Erkan

    2009-01-01

Camera calibration is a crucial prerequisite for the retrieval of metric information from images. The problem of camera calibration is the computation of camera intrinsic parameters (i.e., coefficients of geometric distortions, principal distance and principal point) and extrinsic parameters (i.e., 3D spatial orientations: ω, ϕ, κ, and 3D spatial translations: t(x), t(y), t(z)). The intrinsic camera calibration (i.e., interior orientation) models the imaging system of the camera optics, while the extrinsic camera calibration (i.e., exterior orientation) indicates the translation and the orientation of the camera with respect to the global coordinate system. Traditional camera calibration techniques require a predefined mathematical camera model and use prior knowledge of many parameters. Defining a realistic camera model is quite difficult, and the computation of camera calibration parameters is error-prone. In this paper, a novel implicit camera calibration method based on Radial Basis Function Neural Networks is proposed. The proposed method requires neither an exactly defined camera model nor any prior knowledge about the imaging setup or classical camera calibration parameters. The proposed method uses a calibration grid pattern rotated around a static, fixed axis. The rotations of the calibration grid pattern were acquired using an Xsens MTi-9 inertial sensor, and in order to evaluate the success of the proposed method, its 3D reconstruction performance has been compared with that of a traditional camera calibration method, Modified Direct Linear Transformation (MDLT). Extensive simulation results show that the proposed method achieves better performance than MDLT in terms of 3D reconstruction.
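The core of an implicit, model-free calibration of this kind is a radial basis function network whose output weights are found by a linear solve over point correspondences. The sketch below learns a synthetic image-to-world mapping with Gaussian RBFs; the mapping, centre count, and kernel width are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.uniform(0, 1, (100, 2))                        # image points (training)
world = np.column_stack([img[:, 0] + 0.1 * img[:, 1],    # synthetic "true" mapping
                         img[:, 1] ** 2])

centers = img[::10]                                      # 10 RBF centres from the data

def design(pts, centers, s=0.3):
    """Gaussian RBF design matrix: one column per centre."""
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

# Output weights by linear least squares -- no explicit camera model needed.
W, *_ = np.linalg.lstsq(design(img, centers), world, rcond=None)

test_pt = np.array([[0.5, 0.5]])
print(np.round(design(test_pt, centers) @ W, 2))         # approx. the true mapping
```

Training the network amounts to one least-squares solve, which is what makes the implicit approach attractive when a realistic parametric camera model is hard to define.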

  13. 3D Vision by Using Calibration Pattern with Inertial Sensor and RBF Neural Networks

    Directory of Open Access Journals (Sweden)

    Erkan Beşdok

    2009-06-01

Full Text Available Camera calibration is a crucial prerequisite for the retrieval of metric information from images. The problem of camera calibration is the computation of camera intrinsic parameters (i.e., coefficients of geometric distortions, principal distance and principal point) and extrinsic parameters (i.e., 3D spatial orientations: ω, φ, κ, and 3D spatial translations: tx, ty, tz). The intrinsic camera calibration (i.e., interior orientation) models the imaging system of the camera optics, while the extrinsic camera calibration (i.e., exterior orientation) indicates the translation and the orientation of the camera with respect to the global coordinate system. Traditional camera calibration techniques require a predefined mathematical camera model and use prior knowledge of many parameters. Defining a realistic camera model is quite difficult, and the computation of camera calibration parameters is error-prone. In this paper, a novel implicit camera calibration method based on Radial Basis Function Neural Networks is proposed. The proposed method requires neither an exactly defined camera model nor any prior knowledge about the imaging setup or classical camera calibration parameters. The proposed method uses a calibration grid pattern rotated around a static, fixed axis. The rotations of the calibration grid pattern were acquired using an Xsens MTi-9 inertial sensor, and in order to evaluate the success of the proposed method, its 3D reconstruction performance has been compared with that of a traditional camera calibration method, Modified Direct Linear Transformation (MDLT). Extensive simulation results show that the proposed method achieves better performance than MDLT in terms of 3D reconstruction.

  14. A ToF-camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    OpenAIRE

    Sobers Lourdu Xavier Francis; Sreenatha G. Anavatti; Matthew Garratt; Hyunbgo Shim

    2015-01-01

    The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments employing obstacle avoidance without human intervention. The hypothesized approach of applying a ToF Camera for an AGV is a suitable approach to autonomous robotics because, as the ToF camera can provide three-dimensional (3D) information at a low computationa...

  15. Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.

    Science.gov (United States)

    Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra

    2013-11-01

    This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran's Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task, while his avatar copies his gestures that are captured by the Kinect 3D vision system. The information of the patient movements, together with the signals obtained from the ergonometric measurement devices, is used also to supervise and to evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment, that uses the same base elements, four game routines--Touch the balls 1 and 2, Simon says, and Follow the point--are used for rehabilitation. These environments are designed to create a positive influence in the rehabilitation process, reduce costs, and engage the patient.

  16. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot

    Directory of Open Access Journals (Sweden)

    Xun Chai

    2015-04-01

Full Text Available Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations, such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified only by moving robots on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called “Octopus”, which can traverse rough terrains with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust.

  17. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot.

    Science.gov (United States)

    Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin

    2015-04-22

Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations, such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified only by moving robots on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called "Octopus", which can traverse rough terrains with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust.
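The ground-plane estimation step described above can be sketched as a least-squares plane fit: subtract the centroid of the observed 3-D points, and the singular vector of least variance is the plane normal. The points below are synthetic stand-ins for the vision system's output:

```python
import numpy as np

# Synthetic "ground" points on the plane z = 0.1x - 0.2y + 0.5, plus sensor noise.
rng = np.random.default_rng(4)
xy = rng.uniform(-1, 1, (200, 2))
z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 0.5 + rng.normal(0, 0.005, 200)
pts = np.column_stack([xy, z])

centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)   # total least squares via SVD
normal = vt[-1]                            # direction of least variance = plane normal
normal /= np.linalg.norm(normal)
print(np.round(np.abs(normal), 2))
```

Once the plane is known, the pose of the camera relative to the ground (and hence to the robot body) can be recovered as part of the optimization the paper formulates.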

  18. Artificial Vision in 3D Perspective. For Object Detection On Planes, Using Points Clouds.

    Directory of Open Access Journals (Sweden)

    Catalina Alejandra Vázquez Rodriguez

    2014-02-01

Full Text Available In this paper, we present a computer vision algorithm for the Golem-II+ robot that analyses the robot's surroundings to detect planes and objects in the scene from point clouds captured with a Kinect device, estimating the possible objects and their number, distance and other characteristics. The clusters are then grouped to identify whether they lie on the same surface, in order to calculate the distance and the slope of the planes relative to the robot. Finally, each object is analysed separately to determine whether it can be grasped and, for empty surfaces, whether objects may be placed on them within a feasible distance, while ignoring false positives such as the walls and the floor, which are of no interest for these purposes since objects cannot be placed on the walls and the floor is out of range of the robot's arms.

  19. 3D vision based on PMD-technology for mobile robots

    Science.gov (United States)

    Roth, Hubert J.; Schwarte, Rudolf; Ruangpayoongsak, Niramon; Kuhle, Joerg; Albrecht, Martin; Grothof, Markus; Hess, Holger

    2003-09-01

A series of micro-robots (MERLIN: Mobile Experimental Robots for Locomotion and Intelligent Navigation) has been designed and implemented for a broad spectrum of indoor and outdoor tasks on the basis of standardized functional modules such as sensors, actuators, and communication by radio link. The sensors onboard the MERLIN robot can be divided into two categories: internal sensors for low-level control and for measuring the state of the robot, and external sensors for obstacle detection, modeling of the environment, and position estimation and navigation of the robot in a global co-ordinate system. The special emphasis of this paper is to describe the capabilities of MERLIN for obstacle detection, target detection and distance measurement. Besides ultrasonic sensors, a new camera based on PMD-technology is used. This Photonic Mixer Device (PMD) represents a new electro-optic device that provides a smart interface between the world of incoherent optical signals and the world of their electronic signal processing. This PMD-technology directly enables 3D-imaging by means of the time-of-flight (TOF) principle. It offers an extremely high potential for new solutions in the robotics application field. The PMD-technology opens up amazing new perspectives for obstacle detection systems, target acquisition, as well as mapping of unknown environments.
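The time-of-flight principle behind the PMD camera reduces to a simple formula: with continuous-wave modulation, the phase shift φ between emitted and received light gives the range d = c·φ / (4π·f_mod). The modulation frequency below is an assumed typical value, not one taken from the paper:

```python
import math

C = 299_792_458.0      # speed of light, m/s
F_MOD = 20e6           # assumed 20 MHz modulation frequency (illustrative)

def tof_distance(phase_rad):
    """Range from the measured phase shift, within one ambiguity interval."""
    return C * phase_rad / (4 * math.pi * F_MOD)

# A full 2*pi phase wrap corresponds to the unambiguous range c / (2 * f_mod).
print(round(tof_distance(2 * math.pi), 3))   # ~7.495 m ambiguity interval
```

Phase wraps beyond this interval alias back into it, which is why practical ToF systems choose f_mod to match the expected working range.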

20. Multi-Camera and Structured-Light Vision System (MSVS) for Dynamic High-Accuracy 3D Measurements of Railway Tunnels

    Directory of Open Access Journals (Sweden)

    Dong Zhan

    2015-04-01

Full Text Available Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  1. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    Science.gov (United States)

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  2. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments employing obstacle avoidance without human intervention. The hypothesized approach of applying a ToF camera for an AGV is a suitable approach to autonomous robotics because, as the ToF camera can provide three-dimensional (3D) information at a low computational cost, it is utilized to extract information about obstacles after its calibration and ground testing, and is mounted and integrated with the Pioneer mobile robot. The workspace is a two-dimensional (2D) world map which has been divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data is used to populate traversable areas and obstacles by representing them on a grid of cells of suitable size. These camera data are converted into Cartesian coordinates for entry into a workspace grid map. A more optimal camera mounting angle is needed and adopted by analysing the camera's performance discrepancy, such as pixel detection, the detection rate and the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface. This mounting angle is recommended to be half the vertical field-of-view (FoV) of the PMD camera. A series of still and moving tests are conducted on the AGV to verify correct sensor operations, which show that the postulated application of the ToF camera in the AGV is not straightforward. Later, to stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are implemented to perform a real-time experiment.
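The workspace representation above (depth points binned into a 2D grid of cells, then a graph search returning a collision-free sequence of cells) can be sketched as follows. The paper does not name its search algorithm, so breadth-first search is used here as a simple stand-in; grid size and obstacle points are illustrative:

```python
from collections import deque

SIZE, CELL = 5, 1.0                                   # 5x5 grid of 1 m cells
obstacle_pts = [(2.2, 0.4), (2.6, 1.3), (2.1, 2.7)]   # Cartesian (x, y) from depth data

# Bin obstacle points into occupied cells.
grid = [[0] * SIZE for _ in range(SIZE)]
for x, y in obstacle_pts:
    grid[int(y / CELL)][int(x / CELL)] = 1

def bfs(start, goal):
    """Shortest collision-free cell sequence from start to goal (4-connected)."""
    prev, q = {start: None}, deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE and not grid[nr][nc] \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None

path = bfs((0, 0), (0, 4))
print(path)   # detours around the occupied column of cells
```

The returned cell sequence is exactly the "collision-free path defined by the graph search algorithm" that the AGV then follows.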

  4. Visualization of the 3-D topography of the optic nerve head through a passive stereo vision model

    Science.gov (United States)

    Ramirez, Juan M.; Mitra, Sunanda; Morales, Jose

    1999-01-01

    This paper describes a system for surface recovery and visualization of the 3D topography of the optic nerve head, as support of early diagnosis and follow up to glaucoma. In stereo vision, depth information is obtained from triangulation of corresponding points in a pair of stereo images. In this paper, the use of the cepstrum transformation as a disparity measurement technique between corresponding windows of different block sizes is described. This measurement process is embedded within a coarse-to-fine depth-from-stereo algorithm, providing an initial range map with the depth information encoded as gray levels. These sparse depth data are processed through a cubic B-spline interpolation technique in order to obtain a smoother representation. This methodology is being especially refined to be used with medical images for clinical evaluation of some eye diseases such as open angle glaucoma, and is currently under testing for clinical evaluation and analysis of reproducibility and accuracy.
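The disparity-measurement step above uses a cepstrum transformation on corresponding windows. As a simplified, deterministic stand-in for that measure, the sketch below recovers the shift between two 1-D windows by phase correlation, whose peak location gives the disparity; the signals are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
left = rng.normal(size=64)            # window from the left image (one scanline)
right = np.roll(left, 5)              # right window = left shifted by 5 samples

# Phase correlation: normalized cross-power spectrum, inverse FFT, peak at shift.
L, R = np.fft.fft(left), np.fft.fft(right)
cross = R * np.conj(L)
corr = np.fft.ifft(cross / np.abs(cross)).real
disparity = int(np.argmax(corr))
print(disparity)                      # prints 5
```

With the disparity in hand, depth follows from triangulation, and the sparse depth map is then smoothed, here by cubic B-spline interpolation as the abstract describes.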

5. Design of the Surgical Navigation Based on Monocular Vision

    Institute of Scientific and Technical Information of China (English)

    刘大鹏; 张巍; 徐子昂

    2016-01-01

Objective: Compared with traditional surgery, existing orthopedic surgical navigation systems improve accuracy and reduce intraoperative X-ray exposure, but the apparatus is bulky and complicated to operate, making it difficult to shorten the operation time effectively. This paper introduces a monocular vision navigation system to solve this problem. Methods: Monocular vision based on visible light is used as the image-processing system of the surgical navigation; the overall hardware platform is set up on this basis, the relevant algorithms are validated, and an operating procedure for knee replacement surgery is designed. Results & Conclusion: Relative to previous non-contact stereotactic localization methods, our system maintains accuracy while reducing the hardware volume and simplifying the navigation process; it also supports iterative development at low cost, making it particularly suitable for small and medium orthopedic surgery.

  6. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  7. Vision-based building energy diagnostics and retrofit analysis using 3D thermography and building information modeling

    Science.gov (United States)

    Ham, Youngjib

localization issues of 2D thermal image-based inspection, a new computer vision-based method is presented for automated 3D spatio-thermal modeling of building environments from images and localizing the thermal images into the 3D reconstructed scenes, which helps better characterize the as-is condition of existing buildings in 3D. By using these models, auditors can conduct virtual walk-throughs in buildings and explore the as-is condition of building geometry and the associated thermal conditions in 3D. Second, to address the challenges in qualitative and subjective interpretation of visual data, a new model-based method is presented to convert the 3D thermal profiles of building environments into their associated energy performance metrics. More specifically, the Energy Performance Augmented Reality (EPAR) models are formed which integrate the actual 3D spatio-thermal models ('as-is') with energy performance benchmarks ('as-designed') in 3D. In the EPAR models, the presence and location of potential energy problems in building environments are inferred based on performance deviations. The as-is thermal resistances of the building assemblies are also calculated at the level of mesh vertex in 3D. Then, based on the historical weather data reflecting energy load for space conditioning, the amount of heat transfer that can be saved by improving the as-is thermal resistances of the defective areas to the recommended level is calculated, and the equivalent energy cost for this saving is estimated. The outcome provides building practitioners with unique information that can facilitate energy-efficient retrofit decision-making. This is a major departure from offhand calculations that are based on historical cost data of industry best practices.
Finally, to improve the reliability of BIM-based energy performance modeling and analysis for existing buildings, a new model-based automated method is presented to map actual thermal resistance measurements at the level of 3D vertexes to the

  8. Development of a 3D Parallel Mechanism Robot Arm with Three Vertical-Axial Pneumatic Actuators Combined with a Stereo Vision System

    Directory of Open Access Journals (Sweden)

    Hao-Ting Lin

    2011-12-01

    Full Text Available This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot’s end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier series-based adaptive sliding-mode controller with H∞ tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the positions of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors between the actual position and the calculated 3D position of the end-effector exist. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCDs, serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end

  9. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    Science.gov (United States)

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the positions of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors between the actual position and the calculated 3D position of the end-effector exist. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCDs, serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to
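The D-H forward-kinematics step named in the record can be illustrated with a generic sketch. The link parameters below are hypothetical (a two-link planar arm, not the pneumatic robot in the record): each link contributes one homogeneous transform, and chaining them yields the end-effector pose.

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Forward kinematics: chain the per-link transforms.
# Hypothetical two-link planar arm, links of 0.3 m and 0.2 m:
links = [(math.pi / 2, 0.0, 0.3, 0.0),   # (theta, d, a, alpha)
         (0.0,         0.0, 0.2, 0.0)]
T = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
for params in links:
    T = matmul(T, dh_transform(*params))
# End-effector position is the last column: x = T[0][3], y = T[1][3].
```

With the first joint at 90 degrees, the arm points straight up, so the end-effector sits at roughly (0, 0.5). Inverse kinematics, as used in the record for control, solves the same chain in the opposite direction.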

  10. Research on Detection of Longitudinal Vehicle Spacing Based on Monocular Vision

    Institute of Scientific and Technical Information of China (English)

    杨炜; 魏朗; 巩建强; 张倩

    2012-01-01

    A detection method for longitudinal vehicle spacing on structured roads based on monocular vision is proposed. The Hough transform is used to recognize the lane markings on both sides of the road, which determines the recognition region for the leading vehicle; the vehicle ahead in the current lane is then detected and tracked. Building on the traditional static single-frame image ranging model, an improved static single-frame ranging model is established, with which the longitudinal vehicle spacing is measured. Experimental results show that the method can identify and track the leading vehicle in real time and detect the longitudinal vehicle spacing accurately; compared with ground-truth measurements, the error is small and the measurement accuracy fully satisfies practical ranging requirements. The method is therefore an effective approach to longitudinal vehicle spacing detection with strong generality.
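The record does not spell out the single-frame ranging model itself. A common formulation (an assumption here, not necessarily the authors' improved model) recovers the longitudinal distance from the image row of the leading vehicle's ground contact point, given the camera height, focal length, and pitch:

```python
import math

def longitudinal_distance(v_pixel, v0, f_pixels, cam_height, pitch_rad):
    """Estimate ground distance to a road point imaged at row v_pixel.

    Pinhole model with a camera mounted cam_height metres above a flat
    road and tilted down by pitch_rad. v0 is the principal-point row,
    f_pixels the focal length in pixels; rows below v0 see the road ahead.
    """
    # Angle below the optical axis of the ray through row v_pixel.
    ray_angle = math.atan((v_pixel - v0) / f_pixels)
    total_angle = pitch_rad + ray_angle  # angle below the horizontal
    if total_angle <= 0:
        raise ValueError("ray does not intersect the road plane")
    return cam_height / math.tan(total_angle)

# A camera 1.3 m above the road, zero pitch, 800 px focal length:
# a contact point 100 px below the principal point is about 10.4 m ahead.
d = longitudinal_distance(v_pixel=400, v0=300, f_pixels=800,
                          cam_height=1.3, pitch_rad=0.0)
```

With zero pitch this reduces to d = h * f / (v - v0), which makes the model's sensitivity to calibration of v0 and the pitch angle easy to see.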

  11. Monocular indoor localization techniques for smartphones

    Directory of Open Access Journals (Sweden)

    Hollósi Gergely

    2016-12-01

    Full Text Available In the last decade a huge amount of research has been devoted to the indoor visual localization of personal smartphones. Considering the available sensor capabilities, monocular odometry provides a promising solution, even reflecting the requirements of augmented reality applications. This paper aims to give an overview of state-of-the-art results regarding monocular visual localization. For this purpose, the essential basics of computer vision are presented and the most promising solutions are reviewed.

  12. Airport databases for 3D synthetic-vision flight-guidance displays: database design, quality assessment, and data generation

    Science.gov (United States)

    Friedrich, Axel; Raabe, Helmut; Schiefele, Jens; Doerr, Kai Uwe

    1999-07-01

    In future aircraft cockpit designs, SVS (Synthetic Vision System) databases will be used to display 3D physical and virtual information to pilots. In contrast to pure warning systems (TAWS, MSAW, EGPWS), SVS serve to enhance pilot spatial awareness with 3-dimensional perspective views of the objects in the environment. Therefore all kinds of aeronautically relevant data have to be integrated into the SVS database: navigation data, terrain data, obstacles and airport data. For the integration of all these data, the concept of a GIS (Geographical Information System) based HQDB (High-Quality-Database) has been created at the TUD (Technical University Darmstadt). To enable database certification, quality-assessment procedures according to ICAO Annex 4, 11, 14 and 15 and RTCA DO-200A/EUROCAE ED76 were established in the concept. They can be differentiated into object-related quality-assessment methods following the keywords accuracy, resolution, timeliness, traceability, assurance level, completeness and format, and GIS-related quality-assessment methods with the keywords system tolerances, logical consistency and visual quality assessment. An airport database is integrated in the concept as part of the High-Quality-Database. The contents of the HQDB are chosen so that they support both Flight-Guidance SVS and other aeronautical applications like SMGCS (Surface Movement Guidance and Control Systems) and flight simulation as well. Most airport data are not available. Even though data for runways, thresholds, taxilines and parking positions were to be generated by the end of 1997 (ICAO Annex 11 and 15), only a few countries fulfilled these requirements. For that reason, methods of creating and certifying airport data have to be found. Remote sensing and digital photogrammetry serve as means to acquire large amounts of airport objects with high spatial resolution and accuracy in much shorter time than with classical surveying methods. Remotely sensed images can be acquired from satellite

  13. Monocular visual scene understanding: understanding multi-object traffic scenes.

    Science.gov (United States)

    Wojek, Christian; Walk, Stefan; Roth, Stefan; Schindler, Konrad; Schiele, Bernt

    2013-04-01

    Following recent advances in detection, context modeling, and tracking, scene understanding has been the focus of renewed interest in computer vision research. This paper presents a novel probabilistic 3D scene model that integrates state-of-the-art multiclass object detection, object tracking and scene labeling together with geometric 3D reasoning. Our model is able to represent complex object interactions such as inter-object occlusion, physical exclusion between objects, and geometric context. Inference in this model allows us to jointly recover the 3D scene context and perform 3D multi-object tracking from a mobile observer, for objects of multiple categories, using only monocular video as input. Contrary to many other approaches, our system performs explicit occlusion reasoning and is therefore capable of tracking objects that are partially occluded for extended periods of time, or objects that have never been observed to their full extent. In addition, we show that a joint scene tracklet model for the evidence collected over multiple frames substantially improves performance. The approach is evaluated for different types of challenging onboard sequences. We first show a substantial improvement to the state of the art in 3D multipeople tracking. Moreover, a similar performance gain is achieved for multiclass 3D tracking of cars and trucks on a challenging dataset.

  14. Autonomous hovering control based on monocular vision for a micro aerial robot

    Institute of Scientific and Technical Information of China (English)

    张洪涛; 李隆球; 张广玉; 王武义

    2014-01-01

    To address the problem that micro aerial robots cannot hover autonomously in indoor environments without an external positioning system, a hovering control method based on onboard monocular vision is proposed. A feature-point descriptor with four components and a multi-stage filter are used for feature tracking. The robot's horizontal position is estimated from monocular visual kinematics, and its flight speed is estimated from the aerodynamic drag at low Reynolds number. The position and velocity estimates are then fused to hover the robot. Experimental results demonstrate the effectiveness of the proposed approach.

  15. Real-time Generation of Multi-view Naked-eye 3D Vision and Its Evaluation

    OpenAIRE

    追永, 侑平; OINAGA, Yuhei

    2013-01-01

    In recent years, with the development of three-dimensional imaging techniques, stereoscopic video content, as seen in 3D movies and games, is becoming more familiar. Among these techniques, auto-stereoscopic technology lets viewers enjoy stereoscopic images without the use of special equipment, e.g., 3D glasses. Auto-stereoscopic multi-view techniques have been studied by combining images that have a minute plurality of viewpoints into one image in focus. However, conventional techniques for generating...

  16. 3D Reconstruction of a Billiard Table Surface Based on Binocular Vision

    Institute of Scientific and Technical Information of China (English)

    张旭飞; 王朝立; 袁伟

    2013-01-01

    With the rapid development of computer technology, high and new technologies are increasingly employed as electronic referees in sports competitions to assist referees in making decisions, in keeping with the spirit of fair play espoused by the Olympic movement. In realizing such digital refereeing, the camera plays an important role in computer vision. 3D reconstruction is the process of recovering 3D information from single-view or multi-view images. Taking 3D reconstruction techniques in computer vision as the main research object, this paper details the relevant theory of binocular vision and makes full use of the reconstruction algorithms provided by the computer vision library OpenCV to complete the 3D reconstruction of the billiard table surface. The 3D reconstruction of the billiard table has strong practical significance and lays a good foundation for electronic refereeing in billiards competitions.

  17. Monocular Blindness: Is It a Handicap?

    Science.gov (United States)

    Knoth, Sharon

    1995-01-01

    Students with monocular vision may be in need of special assistance and should be evaluated by a multidisciplinary team to determine whether the visual loss is affecting educational performance. This article discusses the student's eligibility for special services, difficulty in performing depth perception tasks, difficulties in specific classroom…

  18. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot

    OpenAIRE

    Xun Chai; Feng Gao; Yang Pan; Chenkun Qi; Yilin Xu

    2015-01-01

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we ...

  19. Dental wear estimation using a digital intra-oral optical scanner and an automated 3D computer vision method.

    Science.gov (United States)

    Meireles, Agnes Batista; Vieira, Antonio Wilson; Corpas, Livia; Vandenberghe, Bart; Bastos, Flavia Souza; Lambrechts, Paul; Campos, Mario Montenegro; Las Casas, Estevam Barbosa de

    2016-01-01

    The objective of this work was to propose an automated and direct process to grade tooth wear intra-orally. Eight extracted teeth were etched with acid for different times to produce wear and scanned with an intra-oral optical scanner. Computer vision algorithms were used for alignment and comparison among models. Wear volume was estimated and visual scoring was achieved to determine reliability. Results demonstrated that it is possible to directly detect submillimeter differences in teeth surfaces with an automated method with results similar to those obtained by direct visual inspection. The investigated method proved to be reliable for comparison of measurements over time.

  20. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

    Directory of Open Access Journals (Sweden)

    Il Jae Lee

    2009-09-01

    Full Text Available In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle to carry the huge steel plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For lug pose acquisition, four laser lines are projected on both the lug and the plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination changes, the vertical threshold, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out in two stages: the top view alignment and the side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor.
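Of the line-extraction steps named above (vertical threshold, thinning, Hough transform), the Hough voting stage can be sketched in isolation. The minimal accumulator below operates on an already-thinned point set rather than a camera image, and is illustrative only, not the paper's implementation:

```python
import math

def hough_lines(points, width, height, theta_steps=180):
    """Minimal Hough transform: vote in (theta, rho) space and return the
    strongest line as (theta, rho, votes), where a line is parameterized
    as rho = x*cos(theta) + y*sin(theta)."""
    acc = {}
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    (t_best, rho_best), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t_best / theta_steps, rho_best, votes

# Synthetic "thinned" laser line: the vertical line x = 20 in a 64x64 frame.
pts = [(20, y) for y in range(50)]
theta, rho, votes = hough_lines(pts, 64, 64)
# A vertical line has theta = 0 and rho = 20, collecting all 50 votes.
```

In practice OpenCV's `HoughLines` does the same voting over a binarized edge image; the "separated" variant in the record presumably partitions the image so each of the four laser lines votes in its own region, though the record does not give details.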

  1. Monocular visual ranging

    Science.gov (United States)

    Witus, Gary; Hunt, Shawn

    2008-04-01

    The vision system of a mobile robot for checkpoint and perimeter security inspection performs multiple functions: providing surveillance video, providing high-resolution still images, and providing video for semi-autonomous visual navigation. Mid-priced commercial digital cameras support the primary inspection functions. Semi-autonomous visual navigation is a tertiary function whose purpose is to reduce the burden of teleoperation and free the security personnel for their primary functions. Approaches to robot visual navigation require some form of depth perception for speed control, to prevent the robot from colliding with objects. In this paper we present the initial results of an exploration of the capabilities and limitations of using a single monocular commercial digital camera for depth perception. Our approach combines complementary methods in alternating stationary and moving behaviors. When the platform is stationary, it computes a range image from differential blur in an image stack collected at multiple focus settings. When the robot is moving, it extracts an estimate of range from the camera auto-focus function, and combines this with an estimate derived from the angular expansion of a constellation of visual tracking points.
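The range-from-angular-expansion idea can be sketched under a plain pinhole assumption (the function and numbers are illustrative, not the paper's implementation): since the image-plane spread of a rigid constellation is inversely proportional to range, the change in spread over a known forward advance fixes the remaining range.

```python
def range_from_expansion(s1, s2, advance):
    """Range to a constellation of tracked points after moving `advance`
    metres toward it, given its image-plane spread before (s1) and after
    (s2) in pixels. Pinhole model: spread ~ 1/range, so s1*Z1 = s2*Z2
    and Z1 = Z2 + advance give Z2 = s1*advance / (s2 - s1)."""
    if s2 <= s1:
        raise ValueError("spread must grow when closing on the target")
    return s1 * advance / (s2 - s1)

# Spread grows from 100 px to 125 px over a 1 m advance:
# remaining range Z2 = 100 * 1 / 25 = 4 m.
z = range_from_expansion(100.0, 125.0, 1.0)
```

The same relation explains why the cue degrades at long range: the spread change (s2 - s1) shrinks toward the pixel noise floor, which is presumably why the paper fuses it with the auto-focus estimate.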

  2. How little do we need for 3-D shape perception?

    Science.gov (United States)

    Nandakumar, Chetan; Torralba, Antonio; Malik, Jitendra

    2011-01-01

    How little do we need to perceive 3-D shape in monocular natural images? The shape-from-texture and shape-from-shading perspectives would motivate that 3-D perception vanishes once low-level cues are disrupted. Is this the case in human vision? Or can top-down influences salvage the percept? In this study we probe this question by employing a gauge-figure paradigm similar to that used by Koenderink et al (1992, Perception & Psychophysics 52 487-496). Subjects were presented degraded natural images and instructed to make local assessments of slant and tilt at various locations thereby quantifying their internal 3-D percept. Analysis of subjects' responses reveals recognition to be a significant influence thereby allowing subjects to perceive 3-D shape at high levels of degradation. Specifically, we identify the 'medium-blur' condition, images approximately 32 pixels on a side, to be the limit for accurate 3-D shape perception. In addition, we find that degradation affects the perceived slant of point-estimates making images look flatter as degradation increases. A subsequent condition that eliminates texture and shading but preserves contour and recognition reveals how bottom-up and top-down cues can combine for accurate 3-D shape perception.

  3. Calibration Error of Robotic Vision System of 3D Laser Scanner

    Institute of Scientific and Technical Information of China (English)

    齐立哲; 汤青; 贠超; 王京; 甘中学

    2011-01-01

    The 3D laser scanner is widely applied in industrial robot vision systems, but the calibration error of the positional relationship between the scanner and the robot has an important influence on the application of such systems. After introducing the principles of a robot vision system based on a 3D laser scanner, this paper systematically analyzes how position and orientation calibration errors of the vision system influence the scanning results, and how the scanning results in turn affect the workpiece-positioning process. It concludes that in a robot workpiece-positioning system, position calibration of the vision system is unnecessary as long as the robot's scanning posture does not vary, regardless of whether the workpiece posture varies. The validity of this theoretical conclusion is verified by experiments, providing a theoretical basis for explaining the influence of vision-system calibration errors on the scanning results and for simplifying the calibration process of the vision system.

  4. Implementation of Headtracking and 3D Stereo with Unity and VRPN for Computer Simulations

    Science.gov (United States)

    Noyes, Matthew A.

    2013-01-01

    This paper explores low-cost hardware and software methods to provide depth cues traditionally absent in monocular displays. The use of a VRPN server in conjunction with a Microsoft Kinect and/or Nintendo Wiimote to provide head tracking information to a Unity application, and NVIDIA 3D Vision for retinal disparity support, is discussed. Methods are suggested to implement this technology with NASA's EDGE simulation graphics package, along with potential caveats. Finally, future applications of this technology to astronaut crew training, particularly when combined with an omnidirectional treadmill for virtual locomotion and NASA's ARGOS system for reduced gravity simulation, are discussed.

  5. Pattern adaptation of relay cells in the lateral geniculate nucleus of binocular and monocular vision-deprived cats

    Institute of Scientific and Technical Information of China (English)

    王伟; 寿天德

    2000-01-01

    To test whether pattern adaptation in the thalamus depends upon postnatal visual experience during early life, the responses of relay cells to prolonged drifting grating stimulation were recorded extracellularly from the dorsal lateral geniculate nucleus (dLGN) of cats reared with binocular or monocular lid suture. In binocular vision-deprived cats, 68% of the cells recorded showed significant adaptation to prolonged grating stimuli within 30 s, with a mean response decrease of 33%, before stabilizing gradually; this adaptation was stronger than that of relay cells in normal cats. In monocular vision-deprived cats, 53% of the cells driven by the deprived eye showed similar adaptation, as did 44% of the cells driven by the non-deprived eye, a small difference. These results indicate that pattern adaptation can be maintained or even enhanced after visual deprivation in early life, suggesting that pattern adaptation is a general and intrinsic property of dLGN cells, determined mainly by genetic factors.

  6. Plane 3D effect display based on computer vision

    Institute of Scientific and Technical Information of China (English)

    阚洪

    2016-01-01

    3D display technology is a research hotspot in current computer graphics and imaging, and its ability to deliver a true visual experience on a flat panel has attracted great attention. To improve the user's 3D experience, a 3D display based on computer vision is presented. Running on the Windows operating system, it drives an infrared image-acquisition device through OpenCV and uses a visible-light filter to eliminate interference from light reflected by the human eye. An efficient pupil-localization algorithm is proposed: the approximate location of the eyes is first determined by gray-level integral projection, a binary image is then obtained with an optimized threshold-segmentation method, and morphological operations on the binary image remove noise and make the image easier to process. Pupil localization combines OpenCV-based corner detection with ellipse fitting, which is simple and efficient. Camera localization is realized with OpenCV's cvCalibrateCamera function. Finally, the intended 3D display image is rendered with Bresenham's line-drawing method. To give the system better real-time performance, scanning-area segmentation and target-area prediction are used to reduce the computational load and achieve fast eye tracking.
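The gray-level integral projection step described above amounts to row and column intensity sums whose minima flag the dark eye region. A toy sketch on a synthetic image (not the paper's pipeline):

```python
def integral_projections(img):
    """Row and column sums of a grayscale image given as a list of lists.
    Dark regions (pupils) produce minima in both projections, giving a
    coarse location before threshold segmentation refines it."""
    rows = [sum(r) for r in img]
    cols = [sum(r[j] for r in img) for j in range(len(img[0]))]
    return rows, cols

# Synthetic 5x5 bright image with one dark pixel at (row 1, col 3).
img = [[200] * 5 for _ in range(5)]
img[1][3] = 20
rows, cols = integral_projections(img)
coarse = (rows.index(min(rows)), cols.index(min(cols)))
# The projection minima pick out (1, 3).
```

On a real face image the projections are noisy, which is why the paper follows this coarse stage with threshold segmentation, morphology, and ellipse fitting.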

  7. 3D Model Action Recording System Using Computer Vision

    Institute of Scientific and Technical Information of China (English)

    丁志远

    2013-01-01

    This paper presents a 3D model action recording system based on computer vision: the computer captures human motion video with a network camera, detects and tracks the motion, and then uses the processed data to control a 3D model, thereby recording and saving human actions. The work covers three aspects: moving-target detection, moving-target tracking, and 3D modeling. For detection, the background-difference algorithm provided by OpenCV (Open Source Computer Vision Library) is used to analyze the target and extract the differenced elements. For tracking, the widely used CamShift algorithm provides continuous tracking and recognition of the moving target, ensuring the continuity of the action recording. For 3D modeling, 3Dmax is used to build the model and produce the skeletal animation, and Ogremax exports the model; the skeletal animation is then imported into the test environment by OGRE and controlled according to the earlier processing results, realizing the recording of human motion. Evaluations show that the motion recognition and recording system performs well and obtains accurate results.

  8. An appearance-based monocular vision approach for gaze estimation

    Institute of Scientific and Technical Information of China (English)

    张海秀; 葛宏志; 张晶

    2011-01-01

    As an important modality in Human-Computer Interaction (HCI), eye gaze provides rich information in communication. A Monocular Vision Approach (MVA) is proposed for gaze tracking under allowable head movement, based on an appearance-based feature and Support Vector Regression (SVR). In MVA, only one commercial camera is used to capture a monocular face image as input, and the outputs are the head pose and gaze direction with respect to the camera coordinate system. The appearance-based feature employs a novel Directional Binary Pattern (DBP) to capture the texture change caused by pupil movement within the eye socket: the two cropped eye images are encoded into a high-dimensional DBP feature, which is fed into SVR to approximate the gaze mapping function. The 23 676 regression samples from 11 subjects are clustered according to five head poses. Experimental results show that the method achieves a gaze-estimation error of about 3°.
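The record does not define DBP precisely. The classic Local Binary Pattern below is a related appearance descriptor, shown only to illustrate how a small image patch is encoded into a compact binary code (this is LBP, not the authors' DBP):

```python
def lbp_code(patch):
    """8-neighbour local binary pattern of a 3x3 patch (list of lists):
    each neighbour >= the centre contributes one bit to an 8-bit code.
    Related in spirit to the directional binary pattern the record
    describes, but this is the classic LBP, for illustration only."""
    c = patch[1][1]
    # Neighbours read clockwise from the top-left corner.
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= c:
            code |= 1 << bit
    return code

patch = [[9, 9, 9],
         [1, 5, 1],
         [1, 1, 1]]
# Only the top row is >= the centre 5: bits 0, 1, 2 set -> code 7.
code = lbp_code(patch)
```

Concatenating such codes over the cropped eye images yields the kind of high-dimensional texture vector that the record's SVR stage then maps to a gaze direction.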

  9. New weather depiction technology for night vision goggle (NVG) training: 3D virtual/augmented reality scene-weather-atmosphere-target simulation

    Science.gov (United States)

    Folaron, Michelle; Deacutis, Martin; Hegarty, Jennifer; Vollmerhausen, Richard; Schroeder, John; Colby, Frank P.

    2007-04-01

    US Navy and Marine Corps pilots receive Night Vision Goggle (NVG) training as part of their overall training to maintain the superiority of our forces. This training must incorporate realistic targets, backgrounds, and representative atmospheric and weather effects they may encounter under operational conditions. One approach for pilot NVG training is to use the Night Imaging and Threat Evaluation Laboratory (NITE Lab) concept. The NITE Labs utilize a 10' by 10' static terrain model equipped with both natural and cultural lighting that is used to demonstrate various illumination conditions and visual phenomena which might be experienced when utilizing night vision goggles. With this technology, the military can safely, systematically, and reliably expose pilots to the large number of potentially dangerous environmental conditions that will be experienced in their NVG training flights. A previous SPIE presentation described our work for NAVAIR to add realistic atmospheric and weather effects to the NVG NITE Lab training facility using the NVG-WDT (Weather Depiction Technology) system (Colby et al.). NVG-WDT consists of a high-end multiprocessor server with weather simulation software, and several fixed and goggle-mounted Heads-Up Displays (HUDs). Atmospheric and weather effects are simulated using state-of-the-art computer codes such as the WRF (Weather Research & Forecasting) model and the US Air Force Research Laboratory MODTRAN radiative transport model. Imagery for a variety of natural and man-made obscurations (e.g., rain, clouds, snow, dust, smoke, chemical releases) is calculated and injected into the scene observed through the NVG via the fixed and goggle-mounted HUDs. This paper expands on the work described in the previous presentation and describes the 3D Virtual/Augmented Reality Scene - Weather - Atmosphere - Target Simulation part of the NVG-WDT. The 3D virtual reality software is a complete simulation system to generate realistic

  10. Visual SLAM for Handheld Monocular Endoscope.

    Science.gov (United States)

    Grasa, Óscar G; Bernal, Ernesto; Casado, Santiago; Gil, Ismael; Montiel, J M M

    2014-01-01

    Simultaneous localization and mapping (SLAM) methods provide real-time estimation of 3-D models from the sole input of a handheld camera, routinely in mobile robotics scenarios. Medical endoscopic sequences mimic a robotic scenario in which a handheld camera (monocular endoscope) moves along an unknown trajectory while observing an unknown cavity. However, the feasibility and accuracy of SLAM methods have not been extensively validated with human in vivo image sequences. In this work, we propose a monocular visual SLAM algorithm tailored to deal with medical image sequences in order to provide an up-to-scale 3-D map of the observed cavity and the endoscope trajectory at frame rate. The algorithm is validated over synthetic data and human in vivo sequences corresponding to 15 laparoscopic hernioplasties where accurate ground-truth distances are available. It can be concluded that the proposed procedure is: 1) noninvasive, because only a standard monocular endoscope and a surgical tool are used; 2) convenient, because only a hand-controlled exploratory motion is needed; 3) fast, because the algorithm provides the 3-D map and the trajectory in real time; 4) accurate, because it has been validated with respect to ground-truth; and 5) robust to inter-patient variability, because it has performed successfully over the validation sequences.
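SLAM map points such as those in the record are ultimately obtained by triangulating features across views. A minimal sketch for the special case of a pure sideways camera translation (an illustrative simplification, not the paper's algorithm) uses the standard disparity relation:

```python
def depth_from_disparity(f_pixels, baseline, x_first, x_second):
    """Depth of a feature observed in two views related by a pure sideways
    translation `baseline` (metres): Z = f * b / d, where the disparity
    d = x_first - x_second is the feature's horizontal shift in pixels."""
    disparity = x_first - x_second
    if disparity <= 0:
        raise ValueError("feature must shift against the camera motion")
    return f_pixels * baseline / disparity

# f = 500 px, a 2 cm sideways endoscope motion, feature shifting from
# x = 260 to x = 250: Z = 500 * 0.02 / 10 = 1.0 m.
z = depth_from_disparity(500.0, 0.02, 260.0, 250.0)
```

This also makes the record's phrase "up-to-scale" concrete: a monocular system does not observe the baseline directly, so without an external reference the recovered Z (and the whole map) is known only up to a common scale factor.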

  11. Amodal completion with background determines depth from monocular gap stereopsis.

    Science.gov (United States)

    Grove, Philip M; Ben Sachtler, W L; Gillam, Barbara J

    2006-10-01

    Grove, Gillam, and Ono [Grove, P. M., Gillam, B. J., & Ono, H. (2002). Content and context of monocular regions determine perceived depth in random dot, unpaired background and phantom stereograms. Vision Research, 42, 1859-1870] reported that perceived depth in monocular gap stereograms [Gillam, B. J., Blackburn, S., & Nakayama, K. (1999). Stereopsis based on monocular gaps: Metrical encoding of depth and slant without matching contours. Vision Research, 39, 493-502] was attenuated when the color/texture in the monocular gap did not match the background. It appears that continuation of the gap with the background constitutes an important component of the stimulus conditions that allow a monocular gap in an otherwise binocular surface to be responded to as a depth step. In this report we tested this view using the conventional monocular gap stimulus of two identical grey rectangles separated by a gap in one eye but abutting to form a solid grey rectangle in the other. We compared depth seen at the gap for this stimulus with stimuli that were identical except for two additional small black squares placed at the ends of the gap. If the squares were placed stereoscopically behind the rectangle/gap configuration (appearing on the background) they interfered with the perceived depth at the gap. However when they were placed in front of the configuration this attenuation disappeared. The gap and the background were able under these conditions to complete amodally.

  12. Research on Solution Geometric Space for Monocular Vision Measurement Method

    Institute of Scientific and Technical Information of China (English)

    秦丽娟

    2013-01-01

    A new solution method is proposed for a vision measurement technique based on spatial straight lines intersecting at two points. The method computes quickly and guarantees a unique solution. Its precondition is that monotonicity be satisfied; the geometric space in which monotonicity holds is identified, ensuring that the vision measurement method converges to the correct solution, and a detailed proof is given. The study of this solution geometric space provides a theoretical basis for applying iterative methods and guides the application of vision measurement algorithms.

  13. The role of the foreshortening cue in the perception of 3D object slant.

    Science.gov (United States)

    Ivanov, Iliya V; Kramer, Daniel J; Mullen, Kathy T

    2014-01-01

    Slant is the degree to which a surface recedes or slopes away from the observer about the horizontal axis. The perception of surface slant may be derived from static monocular cues, including linear perspective and foreshortening, applied to single shapes or to multi-element textures. The extent to which color vision can use these cues to determine slant in the absence of achromatic contrast remains unclear. Although previous demonstrations have shown that some pictures and images may lose their depth when presented at isoluminance, this has not been tested systematically using stimuli within the spatio-temporal passband of color vision. Here we test whether the foreshortening cue from surface compression (a change in the ratio of width to length) can induce slant perception for single shapes in both color and luminance vision. We use radial frequency patterns with narrowband spatio-temporal properties. In the first experiment, both a manual task (lever rotation) and a visual task (line rotation) are used as metrics to measure the perception of slant for achromatic, red-green isoluminant, and S-cone-isolating stimuli. In the second experiment, we measure slant discrimination thresholds as a function of depicted slant in a 2AFC paradigm and find similar thresholds for chromatic and achromatic stimuli. We conclude that both color and luminance vision can use the foreshortening of a single surface to perceive slant, with performance similar to that obtained using other strong cues for slant, such as texture. This has implications for the role of color in monocular 3D vision, and for the cortical organization used in 3D object perception. Copyright © 2013 Elsevier Ltd. All rights reserved.
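
    The foreshortening cue discussed above has a simple geometric core: under orthographic projection, a surface patch slanted by angle s about the horizontal axis is compressed vertically by a factor of cos(s), so the width-to-length compression ratio can be inverted to recover slant. A minimal sketch of that relation (function and parameter names are illustrative, not from the paper):

```python
import math

def slant_from_foreshortening(projected_extent, true_extent):
    """Orthographic foreshortening: a patch slanted by angle s about the
    horizontal axis projects with vertical extent true_extent * cos(s);
    inverting the compression ratio recovers the slant in degrees."""
    ratio = projected_extent / true_extent
    if not 0.0 < ratio <= 1.0:
        raise ValueError("compression ratio must lie in (0, 1]")
    return math.degrees(math.acos(ratio))
```

    For example, a circular patch whose projected height is half its true diameter appears slanted by 60 degrees.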

  14. Monocular transparency generates quantitative depth.

    Science.gov (United States)

    Howard, Ian P; Duke, Philip A

    2003-11-01

    Monocular zones adjacent to depth steps can create an impression of depth in the absence of binocular disparity. However, the magnitude of depth is not specified. We designed a stereogram that provides information about depth magnitude but which has no disparity. The effect depends on transparency rather than occlusion. For most subjects, depth magnitude produced by monocular transparency was similar to that created by a disparity-defined depth probe. Addition of disparity to monocular transparency did not improve the accuracy of depth settings. The magnitude of depth created by monocular occlusion fell short of that created by monocular transparency.

  15. Perception of 3D spatial relations for 3D displays

    Science.gov (United States)

    Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.

    2004-05-01

    We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids), and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching an object in a random direction by 40%. This distortion must eliminate the symmetry of an object. The subject's task is to decide whether or not the presented object is distorted under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', which is a conventional dependent variable in signal detection experiments.
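
    The discriminability d' reported above is the standard signal-detection measure, d' = z(H) - z(FA), where H is the hit rate, FA the false-alarm rate, and z the inverse of the standard normal CDF. A minimal sketch:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection discriminability: d' = z(hits) - z(false alarms),
    where z is the inverse standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)
```

    Equal hit and false-alarm rates give d' = 0, i.e., chance-level discrimination.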

  16. The design of a traffic environment prewarning system based on monocular vision

    Institute of Scientific and Technical Information of China (English)

    邓筠; 沈文超; 徐建闽; 游峰

    2015-01-01

    This paper designs a traffic environment prewarning system based on monocular vision, consisting of three modules: CCD image acquisition, driving environment detection, and danger alerting. The research focuses on new methods for driving environment recognition, including lane line extraction and preceding vehicle detection, and describes the system's hardware architecture, software workflow, and algorithms, which identify the highway driving environment and warn the driver of dangerous situations. Experimental results show that the system detects the lane lines and vehicles ahead accurately, achieving the intended design goals.

  17. Monocular Video Guided Garment Simulation

    Institute of Scientific and Technical Information of China (English)

    Fa-Ming Li; Xiao-Wu Chen∗; Bin Zhou; Fei-Xiang Lu; Kan Guo; Qiang Fu

    2015-01-01

    We present a prototype to generate a garment-shape sequence guided by a monocular video sequence. It is a combination of a physically-based simulation and a boundary-based modification. Given a garment in the video worn on a mannequin, the simulation generates a garment initial shape by exploiting the mannequin shapes estimated from the video. The modification then deforms the simulated 3D shape into such a shape that matches the garment 2D boundary extracted from the video. According to the matching correspondences between the vertices on the shape and the points on the boundary, the modification is implemented by attracting the matched vertices and their neighboring vertices. For best-matching correspondences and efficient performance, three criteria are introduced to select the candidate vertices for matching. Since modifying each garment shape independently may cause inter-frame oscillations, changes by the modification are also propagated from one frame to the next frame. As a result, the generated garment 3D shape sequence is stable and similar to the garment video sequence. We demonstrate the effectiveness of our prototype with a number of examples.

  18. The role of monocularly visible regions in depth and surface perception.

    Science.gov (United States)

    Harris, Julie M; Wilcox, Laurie M

    2009-11-01

    The mainstream of binocular vision research has long been focused on understanding how binocular disparity is used for depth perception. In recent years, researchers have begun to explore how monocular regions in binocularly viewed scenes contribute to our perception of the three-dimensional world. Here we review the field as it currently stands, with a focus on understanding the extent to which the role of monocular regions in depth perception can be understood using extant theories of binocular vision.

  19. Development of three types of multifocus 3D display

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong Wook

    2011-06-01

    Three types of multi-focus (MF) 3D display are developed, and their ability to provide monocular depth cues is tested. "Multi-focus" denotes the ability to provide monocular depth cues at various depth levels. By achieving the multi-focus function, we developed a 3D display system for each eye that can satisfy accommodation to displayed virtual objects within a defined depth range. The first MF 3D display is built via a laser scanning method, the second uses an LED array as the light source, and the third uses a slanted LED array for a full-parallax monocular depth cue. The full-parallax MF 3D display system gives an omnidirectional focus effect. The proposed 3D display systems offer a possible solution to the eye fatigue problem that arises from the mismatch between the accommodation of each eye and the convergence of the two eyes. Monocular accommodation is tested, and proof that full-parallax accommodation is satisfied is given as a result of the proposed full-parallax MF 3D display system. We find that omnidirectional focus adjustment is possible via parallax images.

  20. Monocular Vision Based Motion Estimation of Indoor Micro Air Vehicles and Structure Recovery

    Institute of Scientific and Technical Information of China (English)

    郭力; 昂海松; 郑祥明

    2012-01-01

    Micro air vehicles (MAVs) need reliable attitude and position information in indoor environments, where GPS signals are unavailable and the measurements of onboard inertial measurement unit (IMU) sensors such as gyros and accelerometers accumulate large drift errors. Therefore, a monocular vision based method for indoor MAV motion estimation and structure recovery is presented. First, features are tracked through the image sequence by a biologically inspired matching algorithm, and the camera motion is estimated by the five-point algorithm. In the indoor environment, a planar constraint is used to reduce the feature point dimensions from three to two. These parameters are then refined by a local optimization strategy to improve the accuracy of motion estimation and structure recovery. The measurements of the IMU sensors and the vision module are fused with an extended Kalman filter to estimate the attitude and position of the MAV. Experiments show that the method can reliably estimate the indoor motion of an MAV in real time, and the recovered environment information can be used for MAV navigation.

  1. TTC Calculation and Characteristic Parameters Study Based on Monocular Vision

    Institute of Scientific and Technical Information of China (English)

    许宇能; 朱西产; 李霖; 马志雄

    2014-01-01

    To study the characteristic parameters of rear-end near-crash events, this paper extracts and analyzes data from rear-end near-crash cases collected over the last five years. First, the time to collision (TTC) during each case is calculated using only information from monocular vision. Statistical analysis is then conducted on parameters including the TTC in the normal car-following state, the TTC and velocity at the onset of emergency braking, and the TTC at the most dangerous moment. The results show that 95 percent of rear-end near-crashes occur below 45 km/h; the average braking deceleration is 0.51 g; and the TTC values for normal following, the start of braking, and the most dangerous moment are 2.9 s, 2.0 s, and 1.0 s, respectively.
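
    The abstract computes TTC from monocular information alone. One standard monocular route, shown here as an illustrative sketch rather than the paper's exact formulation, uses the looming (scale-change) rate of the lead vehicle's image width, tau = w / (dw/dt), which needs no metric range or camera calibration:

```python
def ttc_from_looming(width_prev_px, width_curr_px, dt_s):
    """Time-to-collision from image expansion: tau = w / (dw/dt).
    Uses only image widths in two frames and the frame interval."""
    dw_dt = (width_curr_px - width_prev_px) / dt_s
    if dw_dt <= 0.0:          # target not closing in: TTC is undefined
        return float("inf")
    return width_curr_px / dw_dt
```

    For example, a lead vehicle whose image width grows from 100 px to 105 px over 0.1 s yields TTC = 105 / 50 = 2.1 s.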

  2. Method of vehicle distance measurement for following car based on monocular vision

    Institute of Scientific and Technical Information of China (English)

    余厚云; 张为公

    2012-01-01

    To solve the problem of rear-end collision avoidance for a following car on a structured road, the formula for braking distance is first obtained from an analysis of the vehicle braking model, and the safe following distance is then calculated accordingly. Next, from the basic imaging principle of a pinhole camera, a formula for vehicle distance measurement is derived based on the vanishing point of the lane lines in the image. The formula depends only on the actual distance from the camera to a near-field reference point, without calibrating all of the camera parameters, so vehicle distance measurement can be realized with monocular vision. Finally, distance measurement experiments are performed with the preceding vehicle at different positions. Experimental results demonstrate that the relative error of vehicle distance measurement is less than 3%, a precision that meets the requirements of collision avoidance for the following car.
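
    The vanishing-point relation described above takes a well-known form for a pinhole camera viewing a flat road: the distance to a road point is inversely proportional to the offset of its image row from the vanishing-point row, and a single near-field reference point of known distance fixes the proportionality constant. A hedged sketch (names are illustrative, not from the paper):

```python
def road_distance_m(row_px, vp_row_px, ref_row_px, ref_dist_m):
    """Ground-plane pinhole model: D(y) = k / (y - y_vp), with the
    constant k fixed by one near-field reference point of known
    distance. No other camera parameters need to be calibrated."""
    if row_px <= vp_row_px:
        raise ValueError("road points must lie below the vanishing point")
    k = ref_dist_m * (ref_row_px - vp_row_px)
    return k / (row_px - vp_row_px)
```

    With the vanishing point at row 200 and a reference point at row 600 known to lie 5 m ahead, a vehicle base detected at row 300 is estimated at 20 m.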

  3. Integration of Video Images and CAD Wireframes for 3d Object Localization

    Science.gov (United States)

    Persad, R. A.; Armenakis, C.; Sohn, G.

    2012-07-01

    The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of the 2D-to-3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras, whose parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching scheme uses a hypothesize-and-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. The reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). The dependability of the retrieved parameters for 3D localization has also been assessed by comparing the 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters with those computed using the GT parameters.
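
    The hypothesize-and-verify loop underlying LR-RANSAC is that of standard RANSAC; the LR variant adds ranking heuristics that are not reproduced here. A baseline sketch fitting a 2D line illustrates the framework:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Baseline RANSAC for y = a*x + b: repeatedly hypothesize a model
    from a minimal sample (2 points) and verify it by counting inliers;
    keep the model with the largest consensus set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                      # degenerate sample, skip
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

    With enough iterations, the minimal two-point sample is eventually drawn entirely from inliers, at which point the verified consensus set recovers the line despite gross outliers.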

  4. Visual-tracking-based robot vision system

    Science.gov (United States)

    Deng, Keqiang; Wilson, Joseph N.; Ritter, Gerhard X.

    1992-11-01

    There are two kinds of depth perception for robot vision systems: quantitative and qualitative. The first can be used to reconstruct the visible surfaces numerically, while the second describes the visible surfaces qualitatively. In this paper, we present a qualitative vision system suitable for intelligent robots. The goal of such a system is to perceive depth information qualitatively using monocular 2D images. We first establish a set of propositions relating depth information, such as 3D orientation and distance, to the changes of an image region caused by camera motion. We then introduce an approximation-based visual tracking system. Given an object, the tracking system tracks its image while moving the camera in a way dependent upon the particular depth property to be perceived. Checking the data generated by the tracking system against our propositions provides the depth information about the object. The visual tracking system can track image regions in real time even when implemented on a PC AT clone machine, and mobile robots can naturally provide the inputs to our visual tracking system; we are therefore able to construct a real-time, cost-effective, monocular, qualitative, 3D robot vision system. To verify the idea, we present examples of the perception of planar surface orientation, distance, size, dimensionality, and convexity/concavity.

  5. Modeling and display of 3D human body based on monocular vision measurement

    Institute of Scientific and Technical Information of China (English)

    盛光有; 姜寿山; 张欣; 崔芳芳

    2009-01-01

    Using human body data acquired by a 3D body scanning device based on the principle of monocular vision measurement, the body surface is constructed with the triangular facet method, and the body model is saved in a standard model file format, the OBJ file. The model is then rendered in the Visual C++ programming environment using OpenGL (Open Graphics Library) as the graphics interface.

  6. Vision of the Reconstruction of Destructed Monuments of Palmyra (3D) as a Step to Rehabilitate and Preserve the Whole Site

    Directory of Open Access Journals (Sweden)

    A. Arkawi

    2017-08-01

    Syria possesses one of the world's most impressive cultural heritages in terms of the number and historical significance of its monuments. Palmyra lies in the heart of Syria, an oasis in the midst of the arid desert, and can be considered part of this human heritage. In 1980 it was registered on the world and national heritage lists for its great historical importance, and it has been the focus of many studies and research efforts in the field of restoration. Then disaster struck: many monuments were demolished, including the Temple of Ba'al, the Temple of Baalshamin, the Arch of Triumph, and the castle, and lately the Tetrapylon and the stage. Every Syrian was hurt; the whole world was hurt. The destruction of the city left its people homeless, and Palmyra was no longer the oasis we knew. Feeling this pain, we wanted to take a step forward and present work that expresses our love for Palmyra. We organized the Palmyra workshop to provide a vision for the reconstruction and revival of this historically important site, using new ideas and new technology. Palmyra's historical areas constitute a large open-air museum of heritage through history, which is why these areas should be treated as a protected historical precinct. We offer a vision, ideas, and suggestions for the future of Palmyra as a first step toward preserving its historical buildings and archaeological park.

  7. Implementation of 3d Tools and Immersive Experience Interaction for Supporting Learning in a Library-Archive Environment. Visions and Challenges

    Science.gov (United States)

    Angeletaki, A.; Carrozzino, M.; Johansen, S.

    2013-07-01

    In this paper we present an experimental environment of 3D books combined with a game application, developed by a collaboration between the NTNU University Library at the Norwegian University of Science and Technology in Trondheim, Norway, and the Percro laboratory of Santa Anna University in Pisa, Italy. MUBIL is an international research project involving museum, library, and ICT academy partners, aiming to develop a consistent methodology enabling the use of virtual environments as a metaphor to present manuscript content through the paradigms of interaction and immersion, evaluating different possible alternatives. This paper presents the results of the application of two prototypes of books augmented with the use of XVR and IL technology. We explore immersive-reality design strategies in archive and library contexts for attracting new users. Our newly established MUBIL lab has invited school classes to test the books, augmented with 3D models and other multimedia content, in order to investigate whether immersion in such environments can create wider engagement and support learning. The combined metaphor of 3D books and game design allows the digital books to be handled through a tactile experience, substituting for physical browsing. We present some preliminary results on the enrichment of the user experience in such an environment.

  8. Rethinking Robot Vision - Combining Shape and Appearance

    Directory of Open Access Journals (Sweden)

    Matthias J. Schlemmer

    2008-11-01

    Equipping autonomous robots with vision sensors provides a multitude of advantages, but it simultaneously brings up difficulties with regard to differing illumination conditions. Furthermore, especially with service robots, the objects to be handled must somehow be learned for later manipulation. In this paper we summarise work on combining two different vision sensors, namely a laser range scanner and a monocular colour camera, for shape capturing, detection, and tracking of objects in cluttered scenes without the need for intermediate user interaction. The use of different sensor types offers the advantage of separating the shape and the appearance of the object, and therefore overcomes the problem of changing illumination conditions. We describe the framework and its components (visual shape capturing, fast 3D object detection, and robust tracking) as well as examples that show the feasibility of this approach.

  9. Data Extraction from Computer Acquired Images of a Given 3D Environment for Enhanced Computer Vision and its Applications in Kinematic Design of Robots

    Directory of Open Access Journals (Sweden)

    K. Selvaraj

    2010-01-01

    Problem statement: The literature has mainly aimed at the recognition of objects by computer and at making explicit the information that is implicit in the attributes of 3D objects and their relative positioning in the 3D environment (3DE) as seen in 2D images. However, quantitative estimation of the positions of objects in the 3DE, in terms of their x, y, and z coordinates, has not been addressed. This issue assumes an important dimension in areas such as the kinematic design of robots (KDR), where the robot must negotiate the z field, or depth field (DF). Approach: Existing methods, such as the pattern matching used by robots for depth visualization (DV) from a set of external commands, were reviewed in detail. A methodology was developed in this study to enable the robot to quantify depth by itself instead of relying on external commands. Results: The results are presented and discussed, and the major conclusions drawn from them are listed. Conclusion: The major contribution of the present study is the computation of the depth (D1) corresponding to the depth (d) measured from the photographic image of a 3DE. There is excellent agreement between the computed depth D1 and the corresponding actual depth (D): the percent deviation of D1 from D (DP) lies between ±2 over the entire depth field. Through suitable interfacing of the developed equation with the kinematic design of robots, the robot can generate its own commands for DF negotiation.

  10. Reversible monocular cataract simulating amaurosis fugax.

    Science.gov (United States)

    Paylor, R R; Selhorst, J B; Weinberg, R S

    1985-07-01

    In a patient having brittle, juvenile-onset diabetes, transient monocular visual loss occurred repeatedly whenever there were wide fluctuations in serum glucose. Amaurosis fugax was suspected. The visual loss differed, however, in that it persisted over a period of hours to several days. Direct observation eventually revealed that the relatively sudden change in vision of one eye was associated with opacification of the lens and was not accompanied by an afferent pupillary defect. Presumably, a hyperosmotic gradient had developed with the accumulation of glucose and sorbitol within the lens. Water was drawn inward, altering the composition of the lens fibers and thereby lowering the refractive index, forming a reversible cataract. Hypoglycemia is also hypothesized to have played a role in the formation of a higher osmotic gradient. The unilaterality of the cataract is attributed to variation in the permeability of asymmetric posterior subcapsular cataracts.

  11. Development and implementation of product interaction design platform based on 3D virtual vision

    Institute of Scientific and Technical Information of China (English)

    张璐琪

    2016-01-01

    Most current product interaction platforms are modeled with two-dimensional displays, which suffer from low interactivity, excessive workload, and difficulty of updating. Therefore, 3D virtual vision is integrated into the product interaction design platform: product interaction scenes are built with 3D virtual vision technology, product information is simulated by 3D digital modeling, and the development of a product interaction design platform based on 3D virtual vision is thereby realized. The platform includes an interaction design module, a 3D virtual vision display module, and an interactive display module. The realization of the interaction functions is analyzed, covering 3D virtual vision modeling, the animation display process, virtual interaction design, and platform release. The design process of the platform's human-computer interaction interface is given, together with the main code for the 3D virtual vision display of the product's appearance. Experimental results indicate that the designed product interaction platform achieves a high degree of acceptance and usability.

  12. Is binocular vision worth considering in people with low vision?

    Science.gov (United States)

    Uzdrowska, Marta; Crossland, Michael; Broniarczyk-Loba, Anna

    2014-01-01

    In someone with good vision, binocular vision provides benefits which could not be obtained by monocular viewing only. People with visual impairment often have abnormal binocularity. However, they often use both eyes simultaneously in their everyday activities. Much remains to be known about binocular vision in people with visual impairment. As the binocular status of people with low vision strongly influences their treatment and rehabilitation, it should be evaluated and considered before diagnosis and further recommendations.

  13. 3D gesture interactive system based on computer vision

    Institute of Scientific and Technical Information of China (English)

    霍鹏飞

    2016-01-01

    With the widespread development of computers, traditional human-computer interaction methods such as the keyboard and mouse can hardly meet users' needs for natural and convenient interaction, and research on gesture modeling, hand tracking, and gesture interaction systems has become a hot topic. This paper proposes a simplified 2D hand model in which the hand is modeled as a palm point and five fingers, and designs a hand tracking method based on the particle swarm optimization (PSO) algorithm. By modeling the physiological and kinematic constraints of the hand, PSO hand tracking based on the 2D/3D hand model is realized. The framework of this gesture interaction system is applicable and extensible; it fuses semantic and feedback information, improving the robustness of hand tracking and the accuracy of gesture recognition.
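
    Particle swarm optimization, used above to fit the hand model, moves a swarm of candidate solutions toward their personal and global bests. In the paper the cost function would score how well a hand hypothesis matches the image; the sketch below minimizes a generic cost, and all constants (inertia 0.7, acceleration 1.5, search range) are illustrative defaults, not values from the paper:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal PSO: each particle is pulled toward its personal best and
    the swarm's global best; returns the best position and its cost."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g_idx = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g_idx][:], pbest_val[g_idx]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

    The same loop applies unchanged when f measures the image-matching error of a hand-model parameter vector; only the cost function and search ranges differ.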

  14. Novel computer vision algorithm for the reliable analysis of organelle morphology in whole cell 3D images--A pilot study for the quantitative evaluation of mitochondrial fragmentation in amyotrophic lateral sclerosis.

    Science.gov (United States)

    Lautenschläger, Janin; Lautenschläger, Christian; Tadic, Vedrana; Süße, Herbert; Ortmann, Wolfgang; Denzler, Joachim; Stallmach, Andreas; Witte, Otto W; Grosskreutz, Julian

    2015-11-01

    The function of intact organelles, whether mitochondria, Golgi apparatus or endoplasmic reticulum (ER), relies on their proper morphological organization. It is recognized that disturbances of organelle morphology are early events in disease manifestation, but reliable and quantitative detection of organelle morphology is difficult and time-consuming. Here we present a novel computer vision algorithm for the assessment of organelle morphology in whole cell 3D images. The algorithm allows the numerical and quantitative description of organelle structures, including total number and length of segments, cell and nucleus area/volume as well as novel texture parameters like lacunarity and fractal dimension. Applying the algorithm we performed a pilot study in cultured motor neurons from transgenic G93A hSOD1 mice, a model of human familial amyotrophic lateral sclerosis. In the presence of the mutated SOD1 and upon excitotoxic treatment with kainate we demonstrate a clear fragmentation of the mitochondrial network, with an increase in the number of mitochondrial segments and a reduction in the length of mitochondria. Histogram analyses show a reduced number of tubular mitochondria and an increased number of small mitochondrial segments. The computer vision algorithm for the evaluation of organelle morphology allows an objective assessment of disease-related organelle phenotypes with greatly reduced examiner bias and will aid the evaluation of novel therapeutic strategies on a cellular level.
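
    Among the texture parameters mentioned, fractal dimension is commonly estimated by box counting: count the boxes of size s that the structure occupies and fit the slope of log N(s) against log(1/s). The paper works on whole-cell 3D images; the 2D point-set sketch below shows only the principle:

```python
import math

def box_counting_dimension(points, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a 2D point set: count occupied
    boxes N(s) at each box size s, then least-squares fit the slope of
    log N(s) versus log(1/s)."""
    xs, ys = [], []
    for s in sizes:
        occupied = {(int(x // s), int(y // s)) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(occupied)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

    A filled planar region yields a dimension near 2, and a straight contour yields a dimension near 1.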

  15. Enhanced perception of terrain hazards in off-road path choice: stereoscopic 3D versus 2D displays

    Science.gov (United States)

    Merritt, John O.; CuQlock-Knopp, V. Grayson; Myles, Kimberly

    1997-06-01

    Off-road mobility at night is a critical factor in modern military operations. Soldiers traversing off-road terrain, both on foot and in combat vehicles, often use 2D viewing devices (such as a driver's thermal viewer, or biocular or monocular night-vision goggles) for tactical mobility under low-light conditions. Perceptual errors can occur when 2D displays fail to convey adequately the contours of terrain, and some off-road driving accidents have been attributed to inadequate perception of terrain features with 2D displays, which do not provide binocular-parallax cues to depth perception. In this study, photographic images of terrain scenes were presented first in conventional 2D video and then in stereoscopic 3D video. The percentages of correct answers were: 2D pretest, 52%; 3D pretest, 80%; 2D posttest, 48%; 3D posttest, 78%. Other recent studies conducted at the US Army Research Laboratory's Human Research and Engineering Directorate also show that stereoscopic 3D displays can significantly improve visual evaluation of terrain features, and thus may improve the safety and effectiveness of military off-road mobility operations, both on foot and in combat vehicles.

  16. Real-time 3D bare-hand gesture recognition using binocular vision videos

    Institute of Scientific and Technical Information of China (English)

    公衍超; 万帅; 杨楷芳; 陈浩; 李波

    2014-01-01

    Existing bare-hand 3D gesture recognition algorithms generally suffer from low recognition accuracy and are easily disturbed by skin-colored objects. This paper proposes a 3D bare-hand gesture recognition algorithm that uses binocular vision videos. First, a relationship between gesture depth and gesture area is derived from the principle of binocular vision, and fast 3D gesture recognition is realized on this basis. To further reduce complexity, a stereo matching algorithm is proposed that, following the epipolar constraint, computes the matching point only for the gesture's centroid. Experimental results demonstrate that, compared with existing algorithms, the proposed algorithm clearly improves processing speed, recognition accuracy, and robustness. The algorithm is also open: additional 3D gestures can be defined and added as required.
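
The depth-area relationship that the abstract derives follows from the standard rectified-stereo and pinhole-projection equations; a small numeric sketch (the focal length, baseline, and hand area below are made-up values):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified stereo: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def projected_area(real_area_m2, f_px, depth_m):
    """Pinhole projection: image area scales as (f / Z)^2, so a hand of
    fixed real area shrinks quadratically with depth."""
    return real_area_m2 * (f_px / depth_m) ** 2

# Made-up camera: 700 px focal length, 10 cm baseline, hand area 100 cm^2.
z = depth_from_disparity(f_px=700.0, baseline_m=0.1, disparity_px=35.0)
a_near = projected_area(0.01, 700.0, z)
a_far = projected_area(0.01, 700.0, 2 * z)  # doubling depth quarters the area
```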

  17. Monocular and binocular edges enhance the perception of stereoscopic slant.

    Science.gov (United States)

    Wardle, Susan G; Palmisano, Stephen; Gillam, Barbara J

    2014-07-01

    Gradients of absolute binocular disparity across a slanted surface are often considered the basis for stereoscopic slant perception. However, perceived stereo slant around a vertical axis is usually slow and significantly under-estimated for isolated surfaces. Perceived slant is enhanced when surrounding surfaces provide a relative disparity gradient or depth step at the edges of the slanted surface, and also in the presence of monocular occlusion regions (sidebands). Here we investigate how different kinds of depth information at surface edges enhance stereo slant about a vertical axis. In Experiment 1, perceived slant decreased with increasing surface width, suggesting that the relative disparity between the left and right edges was used to judge slant. Adding monocular sidebands increased perceived slant for all surface widths. In Experiment 2, observers matched the slant of surfaces that were isolated or had a context of either monocular or binocular sidebands in the frontal plane. Both types of sidebands significantly increased perceived slant, but the effect was greater with binocular sidebands. These results were replicated in a second paradigm in which observers matched the depth of two probe dots positioned in front of slanted surfaces (Experiment 3). A large bias occurred for the surface without sidebands, yet this bias was reduced when monocular sidebands were present, and was nearly eliminated with binocular sidebands. Our results provide evidence for the importance of edges in stereo slant perception, and show that depth from monocular occlusion geometry and binocular disparity may interact to resolve complex 3D scenes. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. A Bayesian model of binocular perception of 3D mirror symmetrical polyhedra.

    Science.gov (United States)

    Li, Yunfeng; Sawada, Tadamasa; Shi, Yun; Kwon, Taekyu; Pizlo, Zygmunt

    2011-04-19

    In our previous studies, we showed that monocular perception of 3D shapes is based on a priori constraints, such as 3D symmetry and 3D compactness. The present study addresses the nature of perceptual mechanisms underlying binocular perception of 3D shapes. First, we demonstrate that binocular performance is systematically better than monocular performance, and it is close to perfect in the case of three out of four subjects. Veridical shape perception cannot be explained by conventional binocular models, in which shape was derived from depth intervals. In our new model, we use ordinal depth of points in a 3D shape provided by stereoacuity and combine it with monocular shape constraints by means of Bayesian inference. The stereoacuity threshold used by the model was estimated for each subject. This model can account for binocular shape performance of all four subjects. It can also explain the fact that when viewing distance increases, the binocular percept gradually reduces to the monocular one, which implies that monocular percept of a 3D shape is a special case of the binocular percept.

  19. Gestalt-like constraints produce veridical (Euclidean) percepts of 3D indoor scenes.

    Science.gov (United States)

    Kwon, TaeKyu; Li, Yunfeng; Sawada, Tadamasa; Pizlo, Zygmunt

    2016-09-01

    This study, strongly influenced by Gestalt ideas, extends our prior work on the role of a priori constraints in the veridical perception of 3D shapes to the perception of 3D scenes. Our experiments tested how human subjects perceive the layout of a naturally-illuminated indoor scene that contains common symmetrical 3D objects standing on a horizontal floor. In one task, the subject was asked to draw a top view of a scene that was viewed either monocularly or binocularly. The top views the subjects reconstructed were configured accurately except for their overall size. These size errors varied from trial to trial, and were shown most likely to result from the presence of a response bias. There was little, if any, evidence of systematic distortions of the subjects' perceived visual space, the kind of distortions that have been reported in numerous experiments run under very unnatural conditions. Having shown this, we proceeded to use Foley's (Vision Research 12 (1972) 323-332) isosceles right triangle experiment to test the intrinsic geometry of visual space directly. This was done with natural viewing, with the impoverished viewing conditions Foley had used, and with a number of intermediate viewing conditions. Our subjects produced very accurate triangles when the viewing conditions were natural, but their performance deteriorated systematically as the viewing conditions were progressively impoverished: their perception of visual space became more compressed as their natural visual environment was degraded. We then developed a computational model that emulates the most salient features of our psychophysical results. We concluded that human observers see 3D scenes veridically when they view natural 3D objects within natural 3D environments. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Vision Based SLAM in Dynamic Scenes

    Science.gov (United States)

    2012-12-20

    understanding [20], or to improve the system accuracy and robustness, such as 'loop closure' [16], 're-localization' [36], and dense depth map ... to combine the advantages of omnidirectional vision [37] and monocular vision. Castle et al. [5] used multiple cameras distributed freely in a ... T. Drummond. Scalable monocular SLAM. In IEEE Proc. of CVPR, volume 1, pages 469-476, 2006. [13] G. Golub. Numerical methods for solving linear ...

  1. 3D Animation Essentials

    CERN Document Server

    Beane, Andy

    2012-01-01

    The essential fundamentals of 3D animation for aspiring 3D artists. 3D is everywhere: video games, movie and television special effects, mobile devices, etc. Many aspiring artists and animators have grown up with 3D and computers, and naturally gravitate to this field as their area of interest. Bringing a blend of studio and classroom experience to offer you thorough coverage of the 3D animation industry, this must-have book shows you what it takes to create compelling and realistic 3D imagery. Serves as the first step to understanding the language of 3D and computer graphics (CG). Covers 3D anim

  2. A new combination of monocular and stereo cues for dense disparity estimation

    Science.gov (United States)

    Mao, Miao; Qin, Kaihuai

    2013-07-01

    Disparity estimation is a popular and important topic in computer vision and robotics. Stereo vision is commonly used for this task, but most existing methods fail in textureless regions and resort to numerical interpolation there. Monocular features, which may contain helpful depth information, are usually ignored. We propose a novel method combining monocular and stereo cues to compute dense disparities from a pair of images. Image regions are categorized into reliable regions (textured and unoccluded) and unreliable regions (textureless or occluded). Stable and accurate disparities can be obtained in reliable regions. For each unreliable region, we then use k-means to find the most similar reliable regions in terms of monocular cues. Our method is simple and effective. Experiments show that it generates more accurate disparity maps than existing methods on images with large textureless regions, e.g. snow and icebergs.
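
The assignment step for unreliable regions might look like the following nearest-neighbour sketch; it stands in for the paper's k-means clustering, and the two-element feature vectors (mean intensity, texture score) and disparity values are invented for illustration.

```python
def nearest_reliable_disparity(feature, reliable_regions):
    """Give an unreliable region the disparity of the most similar
    reliable region, compared by monocular feature vectors."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(reliable_regions, key=lambda r: sqdist(feature, r["feature"]))
    return best["disparity"]

# Hypothetical monocular features: (mean intensity, texture score).
reliable_regions = [
    {"feature": [0.9, 0.1], "disparity": 12.0},  # bright, smooth region
    {"feature": [0.2, 0.8], "disparity": 40.0},  # dark, highly textured region
]
d = nearest_reliable_disparity([0.85, 0.15], reliable_regions)
```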

  3. 3D motion analysis via energy minimization

    Energy Technology Data Exchange (ETDEWEB)

    Wedel, Andreas

    2009-10-16

    This work deals with 3D motion analysis from stereo image sequences for driver assistance systems. It consists of two parts: the estimation of motion from the image data and the segmentation of moving objects in the input images. The content can be summarized with the technical term machine visual kinesthesia, the sensation or perception and cognition of motion. In the first three chapters, the importance of motion information is discussed for driver assistance systems, for machine vision in general, and for the estimation of ego motion. The next two chapters address motion perception, analyzing the apparent movement of pixels in image sequences for both a monocular and binocular camera setup. Then, the obtained motion information is used to segment moving objects in the input video. Thus, one can clearly identify the thread from analyzing the input images to describing the input images by means of stationary and moving objects. Finally, I present possibilities for future applications based on the contents of this thesis. Previous work in each case is presented in the respective chapters. Although the overarching issue of motion estimation from image sequences is related to practice, there is nothing as practical as a good theory (Kurt Lewin). Several problems in computer vision are formulated as intricate energy minimization problems. In this thesis, motion analysis in image sequences is thoroughly investigated, showing that splitting an original complex problem into simplified sub-problems yields improved accuracy, increased robustness, and a clear and accessible approach to state-of-the-art motion estimation techniques. In Chapter 4, optical flow is considered. Optical flow is commonly estimated by minimizing the combined energy, consisting of a data term and a smoothness term. These two parts are decoupled, yielding a novel and iterative approach to optical flow. The derived Refinement Optical Flow framework is a clear and straightforward approach to
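
The combined energy mentioned here typically takes the following generic form (a common variational optical-flow formulation, not necessarily the thesis's exact notation), with a data term penalizing intensity changes along the flow (u, v) and a smoothness term penalizing flow gradients:

```latex
E(u,v) = \int_{\Omega}
  \underbrace{\bigl| I_1(\mathbf{x} + (u,v)) - I_0(\mathbf{x}) \bigr|^2}_{\text{data term}}
  \;+\; \lambda \,
  \underbrace{\bigl( |\nabla u|^2 + |\nabla v|^2 \bigr)}_{\text{smoothness term}}
  \, d\mathbf{x}
```

Decoupling the two terms, as the abstract describes, means alternating between minimizing the data term pointwise and smoothing the resulting flow field.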

  4. Flash 3D Rendezvous and Docking Sensor Project

    Data.gov (United States)

    National Aeronautics and Space Administration — 3D Flash Ladar is a breakthrough technology for many emerging and existing 3D vision areas, and sensor improvements will have an impact on nearly all these fields....

  5. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... techniques used to create the 3-D effect can confuse or overload the brain, causing some people ... images. That does not mean that vision disorders can be caused by 3-D digital products. However, ...

  6. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    Science.gov (United States)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  7. 3D Medical Electronic Endoscope System Based on Binocular Stereo Vision

    Institute of Scientific and Technical Information of China (English)

    冯大伟; 姜会林; 张光伟

    2012-01-01

    To support more complex minimally invasive surgery, a 3D medical electronic endoscope system was designed, and its working principle and display modes were analyzed. Based on the principle of binocular stereo vision, the system achieves machine stereo vision with dual CCD cameras that simulate the human eyes; the dual-light-path endoscope optics were designed with the aid of Zemax software. After comparing the advantages and disadvantages of several stereoscopic display methods, a passive polarized-glasses display was chosen. It uses FPGA-based time-division stereoscopic display: the left and right images are shown alternately at a 100 Hz frame rate, synchronized with a liquid-crystal modulation screen, so that a viewer wearing polarized glasses observes a clear, flicker-free stereoscopic image. Clinical trials show that the simplified optical system markedly improves endoscope image quality; at the 100 Hz frame rate the images show no flicker, pausing, smearing, distortion, or stalling. The system can be widely used in minimally invasive surgery and can also serve as the display system of a laparoscopic surgical robot.

  8. A Case of Functional (Psychogenic Monocular Hemianopia Analyzed by Measurement of Hemifield Visual Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Tsuyoshi Yoneda

    2013-12-01

    Full Text Available Purpose: Functional monocular hemianopia is an extremely rare condition, for which measurement of hemifield visual evoked potentials (VEPs has not been previously described. Methods: A 14-year-old boy with functional monocular hemianopia was followed up with Goldmann perimetry and measurement of hemifield and full-field VEPs. Results: The patient had a history of monocular temporal hemianopia of the right eye following headache, nausea and ague. There was no relative afferent pupillary defect, and a color perception test was normal. Goldmann perimetry revealed a vertical monocular temporal hemianopia of the right eye; the hemianopia on the right was also detected with a binocular visual field test. Computed tomography, magnetic resonance imaging (MRI and MR angiography of the brain including the optic chiasm as well as orbital MRI revealed no abnormalities. On the basis of these results, we diagnosed the patient's condition as functional monocular hemianopia. Pattern VEPs according to the International Society for Clinical Electrophysiology of Vision (ISCEV standard were within the normal range. The hemifield pattern VEPs for the right eye showed a symmetrical latency and amplitude for nasal and temporal hemifield stimulation. One month later, the visual field defect of the patient spontaneously disappeared. Conclusions: The latency and amplitude of hemifield VEPs for a patient with functional monocular hemianopia were normal. Measurement of hemifield VEPs may thus provide an objective tool for distinguishing functional hemianopia from hemifield loss caused by an organic lesion.

  9. Avoiding monocular artifacts in clinical stereotests presented on column-interleaved digital stereoscopic displays.

    Science.gov (United States)

    Serrano-Pedraza, Ignacio; Vancleef, Kathleen; Read, Jenny C A

    2016-11-01

    New forms of stereoscopic 3-D technology offer vision scientists new opportunities for research, but also come with distinct problems. Here we consider autostereo displays where the two eyes' images are spatially interleaved in alternating columns of pixels and no glasses or special optics are required. Column-interleaved displays produce an excellent stereoscopic effect, but subtle changes in the angle of view can increase cross talk or even interchange the left and right eyes' images. This creates several challenges to the presentation of cyclopean stereograms (containing structure which is only detectable by binocular vision). We discuss the potential artifacts, including one that is unique to column-interleaved displays, whereby scene elements such as dots in a random-dot stereogram appear wider or narrower depending on the sign of their disparity. We derive an algorithm for creating stimuli which are free from this artifact. We show that this and other artifacts can be avoided by (a) using a task which is robust to disparity-sign inversion (for example, a disparity-detection rather than a discrimination task), (b) using our proposed algorithm to ensure that parallax is applied symmetrically on the column-interleaved display, and (c) using a dynamic stimulus to avoid monocular artifacts from motion parallax. In order to test our recommendations, we performed two experiments using a stereoacuity task implemented with a parallax-barrier tablet. Our results confirm that these recommendations eliminate the artifacts. We believe that these recommendations will be useful to vision scientists interested in running stereo psychophysics experiments using parallax-barrier and other column-interleaved digital displays.
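
Column interleaving itself is simple; the sketch below is an illustration of the display format, not the authors' artifact-free algorithm. It shows why column parity matters: each eye sees only every other pixel column, so a dot whose left- and right-eye copies land on different column parities (which depends on the sign of its disparity) can gain or lose visible columns.

```python
def interleave_columns(left, right):
    """Column-interleave two equal-size images (lists of pixel rows):
    even pixel columns come from the left view, odd from the right."""
    return [
        [lp if x % 2 == 0 else rp for x, (lp, rp) in enumerate(zip(lrow, rrow))]
        for lrow, rrow in zip(left, right)
    ]

L = [[10, 10, 10, 10]]   # one-row "left image"
R = [[20, 20, 20, 20]]   # one-row "right image"
frame = interleave_columns(L, R)
```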

  10. Comparison of Subjective Refraction under Binocular and Monocular Conditions in Myopic Subjects.

    Science.gov (United States)

    Kobashi, Hidenaga; Kamiya, Kazutaka; Handa, Tomoya; Ando, Wakako; Kawamorita, Takushi; Igarashi, Akihito; Shimizu, Kimiya

    2015-07-28

    To compare subjective refraction under binocular and monocular conditions, and to investigate the clinical factors affecting the difference in spherical refraction between the two conditions. We examined thirty eyes of 30 healthy subjects. Binocular and monocular refraction without cycloplegia was measured through circular polarizing lenses in both eyes, using the Landolt-C chart of the 3D visual function trainer-ORTe. Stepwise multiple regression analysis was used to assess the relations among several pairs of variables and the difference in spherical refraction in binocular and monocular conditions. Subjective spherical refraction in the monocular condition was significantly more myopic than that in the binocular condition (p refraction (p = 0.99). The explanatory variable relevant to the difference in spherical refraction between binocular and monocular conditions was the binocular spherical refraction (p = 0.032, partial regression coefficient B = 0.029) (adjusted R(2) = 0.230). No significant correlation was seen with other clinical factors. Subjective spherical refraction in the monocular condition was significantly more myopic than that in the binocular condition. Eyes with higher degrees of myopia are more predisposed to show the large difference in spherical refraction between these two conditions.

  11. Enhanced monocular visual odometry integrated with laser distance meter for astronaut navigation.

    Science.gov (United States)

    Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin

    2014-03-11

    Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method.
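
The core of the scale-ambiguity fix can be sketched in a few lines: monocular VO yields a trajectory in arbitrary units, and one absolute laser distance pins down the scale. This is a simplified illustration; the paper corrects scale drift along the sequence rather than applying a single global factor, and the numbers below are made up.

```python
def rescale_trajectory(positions, est_dist, measured_dist):
    """Monocular VO recovers camera positions only up to an unknown
    scale; one absolute distance measurement fixes the global factor."""
    s = measured_dist / est_dist
    return [(s * x, s * y, s * z) for (x, y, z) in positions]

# Up-to-scale trajectory from the VO front end (arbitrary units) ...
traj = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.5, 0.0)]
# ... and a laser reading saying the first step actually spans 2.5 m.
fixed = rescale_trajectory(traj, est_dist=1.0, measured_dist=2.5)
```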

  12. 3D laptop for defense applications

    Science.gov (United States)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  13. New stereo precise matching method for 3D surface measurement with computer vision

    Institute of Scientific and Technical Information of China (English)

    张爱武; 李明哲; 胡少兴

    2001-01-01

    In this paper, a new stereo precise-matching method for 3D surface measurement with computer vision is proposed. Using a projected grating, deformed stripes are created on the surface. Stripe edges are detected with a wavelet transform, cubic B-splines are applied to interpolate the detected stripe edges into smooth curves, and corresponding points are exactly matched under the epipolar constraint. The method successfully solves the matching problem for binocular stereo systems with converging optical axes.
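
The epipolar constraint used for matching states that corresponding homogeneous image points satisfy x_r^T F x_l = 0 for the fundamental matrix F. A minimal numeric check, using the well-known F of a rectified (pure horizontal baseline) stereo pair, where matches must lie on the same image row; the pixel coordinates are made up:

```python
def epipolar_residual(F, x_left, x_right):
    """Residual of the epipolar constraint x_r^T F x_l for homogeneous
    pixel coordinates; it is zero for a geometrically consistent match."""
    Fx = [sum(F[i][j] * x_left[j] for j in range(3)) for i in range(3)]
    return sum(x_right[i] * Fx[i] for i in range(3))

# Fundamental matrix of a rectified pair (pure horizontal baseline):
# epipolar lines are image rows, so matches must share a y-coordinate.
F = [[0.0, 0.0, 0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0, 0.0]]
r_good = epipolar_residual(F, (120.0, 80.0, 1.0), (95.0, 80.0, 1.0))  # same row
r_bad = epipolar_residual(F, (120.0, 80.0, 1.0), (95.0, 86.0, 1.0))   # off-row
```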

  14. CAMERA CALIBRATION METHOD BASED ON VIRTUAL 3D TARGET IN THE VISION SENSOR

    Institute of Scientific and Technical Information of China (English)

    吴斌; 叶声华; 周富强; 杨学友; 盛玉

    2001-01-01

    A new method is proposed for constructing a virtual 3D target for camera calibration by freely moving a 2D coplanar target. Because the motion of the calibration target need not be fixed, all intrinsic parameters can be calibrated after the camera has been mounted, so the calibration state is identical to the camera's working state. The approach greatly reduces the cost of calibration equipment, simplifies the calibration procedure, and increases calibration efficiency. Experiments show that the method is practical for vision inspection applications.

  15. EUROPEANA AND 3D

    Directory of Open Access Journals (Sweden)

    D. Pletinckx

    2012-09-01

    Full Text Available The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  16. Perspectives on Materials Science in 3D

    DEFF Research Database (Denmark)

    Juul Jensen, Dorte

    2016-01-01

    Materials characterization in 3D has opened a new era in materials science, which is discussed in this paper. The original motivations and visions behind the development of one of the new 3D techniques, namely the three-dimensional x-ray diffraction (3DXRD) method, are presented and the route to its implementation is described. The present status of materials science in 3D is illustrated by examples related to recrystallization. Finally, challenges and suggestions for the future success of 3D materials science relating to hardware evolution, data analysis, data exchange and modeling ...

  17. Stereo vision based SLAM using Rao-Blackwellised particle filter

    Institute of Scientific and Technical Information of China (English)

    Er-yong WU; Gong-yan LI; Zhi-yu XIANG; Ji-lin LIU

    2008-01-01

    We present an algorithm that realizes 3D stereo vision simultaneous localization and mapping (SLAM) for a mobile robot in unknown outdoor environments: the 6-DOF motion and a sparse but persistent map of natural landmarks are constructed online with only a stereo camera. We extend FastSLAM 2.0-style stereo vision SLAM in the "pure vision" domain to outdoor environments. Unlike the stochastic motion models popular in conventional monocular vision SLAM, we use ideas from structure from motion (SFM) for initial motion estimation, which is more suitable for a robot moving in large-scale, textured outdoor environments. SIFT features are used as natural landmarks, and their 3D positions are constructed directly through triangulation. Considering computational complexity and memory consumption, a Bkd-tree and the Best-Bin-First (BBF) search strategy are used for SIFT descriptor matching. Results show the high accuracy of our algorithm, even under large translational and rotational movements.
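
Descriptor matching of the kind described can be sketched with exact nearest-neighbour search plus Lowe's ratio test; the Bkd-tree and Best-Bin-First search in the paper approximate the same nearest-neighbour query at lower cost. The descriptors below are tiny made-up 2-D vectors rather than 128-D SIFT:

```python
def match_descriptors(query, database, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test: accept a match
    only if the best candidate is clearly closer than the second best."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    matches = []
    for qi, q in enumerate(query):
        ranked = sorted((sqdist(q, db), di) for di, db in enumerate(database))
        (best_d, best_i), (second_d, _) = ranked[0], ranked[1]
        if best_d < (ratio ** 2) * second_d:   # squared-distance ratio test
            matches.append((qi, best_i))
    return matches

db = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]       # toy "descriptors"
m = match_descriptors([[0.1, 0.0], [3.0, 3.0]], db)
# [3, 3] is ambiguous (equidistant from two entries) and gets rejected
```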

  18. 2D-to-3D conversion by using visual attention analysis

    Science.gov (United States)

    Kim, Jiwon; Baik, Aron; Jung, Yong Ju; Park, Dusik

    2010-02-01

    This paper proposes a novel 2D-to-3D conversion system based on visual attention analysis. The system generates stereoscopic video from monocular video in a robust manner with no human intervention. According to our experiments, visual attention information can provide a rich 3D experience even when the depth cues available in a monocular view are insufficient. Using the algorithm introduced in the paper, 3D display users can watch 2D media in 3D. In addition, the algorithm can be embedded into 3D displays to deliver a more immersive viewing experience. To our knowledge, this is the first work to use visual attention information to produce a 3D effect.
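
One common way to synthesize the second view from a per-pixel depth (or attention-derived) map is depth-image-based rendering. The scanline sketch below is a generic illustration, not the paper's algorithm; the shift model and hole filling are deliberately naive, and the pixel values are made up.

```python
def render_right_view(row, depth, max_shift=2):
    """One scanline of naive depth-image-based rendering: shift each
    pixel left in proportion to its depth value (0 = far, 1 = near),
    then fill disocclusion holes from the left neighbour."""
    out = [None] * len(row)
    for x, (pix, d) in enumerate(zip(row, depth)):
        nx = x - int(round(d * max_shift))
        if 0 <= nx < len(row):
            out[nx] = pix
    for x in range(len(out)):
        if out[x] is None:
            out[x] = out[x - 1] if x > 0 else row[x]
    return out

line = render_right_view([1, 2, 3, 4], [0.0, 0.0, 1.0, 0.0], max_shift=1)
```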

  19. Teleoperation of a Team of Robots with Vision

    Science.gov (United States)

    2010-11-01

    of five to fifty monocular mobile robots that are jointly controlled by a single user with a joystick. Each robot communicates with nearby robots ... effort, we focused on the image sensing opportunities provided by such a team of monocular mobile robots and the computer vision capabilities required to

  20. Multi-view passive 3D face acquisition device

    NARCIS (Netherlands)

    Spreeuwers, L.J.

    2008-01-01

    Approaches to acquisition of 3D facial data include laser scanners, structured light devices and (passive) stereo vision. The laser scanner and structured light methods allow accurate reconstruction of the 3D surface but strong light is projected on the faces of subjects. Passive stereo vision based

  1. IZDELAVA TISKALNIKA 3D

    OpenAIRE

    Brdnik, Lovro

    2015-01-01

    This diploma thesis analyzes the current state of the 3D-printer market. The development and operating principles of 3D printers are presented, along with the types of 3D printers and their advantages and disadvantages. The structure and operation of stepper motors are described in more detail, and measurements of stepper motors are carried out. The software for operating 3D printers and the components required to build one are described. The thesis addresses the question of whether building a 3D printer is more economical than investing in ...

  2. Pentingnya Pengetahuan Anatomi untuk 3D Artist

    Directory of Open Access Journals (Sweden)

    Anton Sugito Kurniawan

    2011-03-01

    Full Text Available No matter how far technology advances, anatomical knowledge will still be needed as a basis for good character design. Understanding anatomy helps with the placement and articulation of muscles and joints, so that more realistic 3D characters can be modeled in both form and movement. For a 3D character artist, anatomy should inform every aspect of the work. Every 3D/CG (computer graphics) artist needs to know how to use software applications, but what differentiates a 3D artist from a computer operator is artistic vision and an understanding of the basic shape of the human body. Artistic vision cannot easily be taught, but a CG artist may study it independently, and many reference sources can help deepen their knowledge of anatomy.

  4. 3D and Education

    Science.gov (United States)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation built on the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to the 1990s, the holographic concept is spreading into all scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do the seen and the unseen interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the invisible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualization, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subjects? For whom?

  5. 3D Reconstruction of NMR Images

    Directory of Open Access Journals (Sweden)

    Peter Izak

    2007-01-01

    Full Text Available This paper introduces an experiment in 3D reconstruction of NMR images scanned from a magnetic resonance device. Methods that can be used for 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. The task was implemented with the Vision Assistant tool, which is part of LabVIEW.

  6. TEHNOLOGIJE 3D TISKALNIKOV (3D Printer Technologies)

    OpenAIRE

    Kolar, Nataša

    2016-01-01

    This thesis presents the development of printing over time. 3D printers that use various 3D printing technologies are described in detail. The different 3D printing technologies are presented, together with their uses and the prototypes or final products made with them. The thesis describes the entire process, from the idea, through preparation of the data and the printer, to the production of a prototype or final product.

  7. Virtual 3D camera's modeling and roaming control

    Institute of Scientific and Technical Information of China (English)

    闫志远; 吴冬梅; 鲍义东; 杜志江

    2013-01-01

    A monocular virtual camera model was described using the vector description method. Based on this, the parameters of a 3D binocular virtual camera and their constraining relations were determined according to the binocular vision theorem, and then a 3D virtual camera model which contains two monocular virtual cameras was proposed. According to the 3D virtual camera model, a roaming control method for binocular virtual cameras was proposed based on robot kinematics to solve problems of viewing angle limitations and 3D imaging distortion caused by the fact that 3D observation parameters are difficult to change in common virtual reality. The experimental results show that the 3D virtual camera model and the roaming control model can be effectively used to observe virtual reality environments and make the parameters adjusted in real time based upon observation demand, which means that the shift from passive virtual 3D to interactive virtual 3D is of great significance to the improvement of 3D observation in virtual reality.
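    The binocular constraint described above, two monocular cameras converging on a common fixation point, can be sketched as follows. This is a minimal top-down toy with names and parameterization chosen by us, not the paper's actual model:

```python
import math

def binocular_rig(midpoint, fixation, interocular):
    """Derive left/right virtual camera placements from one monocular pose.

    Top-down (2D) sketch: the two cameras sit half the interocular
    distance to either side of the monocular camera position,
    perpendicular to the viewing direction, and each is toed in
    toward the fixation point.
    """
    mx, mz = midpoint
    fx, fz = fixation
    dx, dz = fx - mx, fz - mz
    dist = math.hypot(dx, dz)
    dx, dz = dx / dist, dz / dist          # unit viewing direction
    px, pz = dz, -dx                       # baseline direction (perpendicular)
    half = interocular / 2.0
    left = (mx - px * half, mz - pz * half)
    right = (mx + px * half, mz + pz * half)
    toe_in = math.atan2(half, dist)        # convergence angle per camera
    return left, right, toe_in
```

    The baseline offsets and the toe-in angle are exactly the parameters a roaming controller would have to readjust in real time as the fixation point moves.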

  8. Moving target geolocation for micro air vehicles based on monocular vision

    Institute of Scientific and Technical Information of China (English)

    郭力; 昂海松; 郑祥明

    2012-01-01

    Aiming at targets moving over terrain of unknown altitude, a monocular-camera-based target geolocation method for micro air vehicles (MAV) is presented. Firstly, an optical flow histogram algorithm extracts background features in the target's local region. Secondly, these features are clustered into two classes, aided-plane features and non-aided-plane features, by the expectation maximization algorithm, which uses the homography relationship between the MAV's flight status, measured by onboard micro electro mechanical systems (MEMS)/global positioning system (GPS) sensors, and the planar scene. Meanwhile, the normal vector of the aided plane and the distance between the camera and the plane are estimated, so the aided-plane equation can be established. Finally, the moving target is geolocated by calculating the intersection of the target's sight line and the aided plane in the inertial frame. Experimental results show that this method can instantaneously geolocate the moving target from a single operator click, with an error of less than 15 m when the MAV flies at an altitude of 100 m.
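    The final step, intersecting the target's sight line with the estimated aided plane, is a standard ray/plane intersection. A minimal sketch with illustrative names (the paper's plane estimation from clustered background features is not reproduced):

```python
def geolocate(cam_pos, sight_dir, plane_normal, plane_dist):
    """Intersect a sight line with the plane dot(n, x) = plane_dist."""
    denom = sum(n * d for n, d in zip(plane_normal, sight_dir))
    if abs(denom) < 1e-12:
        return None                        # sight line parallel to the plane
    t = (plane_dist - sum(n * p for n, p in zip(plane_normal, cam_pos))) / denom
    if t < 0:
        return None                        # intersection behind the camera
    return tuple(p + t * d for p, d in zip(cam_pos, sight_dir))

# MAV at 100 m altitude looking slightly forward and down at flat ground z = 0.
print(geolocate((0.0, 0.0, 100.0), (0.1, 0.0, -1.0), (0.0, 0.0, 1.0), 0.0))
```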

  9. Loop Closure Detection Algorithm Based on Monocular Vision Using Visual Dictionary

    Institute of Scientific and Technical Information of China (English)

    梁志伟; 陈燕燕; 朱松豪; 高翔; 徐国政

    2013-01-01

    Aiming at the problem of loop closure detection in monocular simultaneous localization and mapping for mobile robots, a detection algorithm based on a visual dictionary (VD) is presented. Firstly, feature extraction is performed for each acquired image using SURF. Subsequently, a fuzzy K-means algorithm is employed to cluster these visual feature vectors into the visual words of a VD which is constructed online. To precisely represent the similarities between each visual word and the corresponding local visual features, a Gaussian mixture model is used to learn a probability model for every visual word in the bag of visual words. Consequently, every image can be denoted as a probabilistic vector over the VD, and the similarity between any two images can be computed as a vector inner product. To guarantee the continuity of loop closure detection, a Bayesian filter fuses historical loop-closure detections with the obtained similarities to calculate the posterior probability distribution of the loop-closure hypothesis. Furthermore, two memory management mechanisms, shallow memory and deep memory, are introduced to improve the processing speed of the proposed algorithm. The experimental results demonstrate the validity of the proposed approach.
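    The similarity and Bayesian-fusion steps can be sketched as follows. The likelihood model below is an illustrative placeholder (the similarity score used directly as the evidence), not the paper's learned Gaussian-mixture model:

```python
def similarity(v1, v2):
    """Inner product between two probability vectors over the dictionary."""
    return sum(a * b for a, b in zip(v1, v2))

def loop_posterior(prior, sim):
    """One Bayesian fusion step for the loop-closure hypothesis.

    Treats the similarity score as the likelihood of a loop and fuses
    it with the prior carried over from earlier detections.
    """
    num = sim * prior
    den = num + (1.0 - sim) * (1.0 - prior)
    return num / den
```

    Repeated high similarities across consecutive frames drive the posterior toward 1, which is what gives the filtered detection its continuity.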

  10. 3D virtuel udstilling (3D Virtual Exhibition)

    DEFF Research Database (Denmark)

    Tournay, Bruno; Rüdiger, Bjarne

    2006-01-01

    3D digital model of the School of Architecture's courtyard with a virtual exhibition of graduation projects from the summer 2006 class. 10 pp.

  11. Simulation Platform for Vision Aided Inertial Navigation

    Science.gov (United States)

    2014-09-18

    canyons, indoors or underground. It is also possible for a GPS signal to be jammed. This weakness motivates the development of alternate navigation ... Johnson, E. N., Magree, D., Wu, A., & Shein, A. (2013). "GPS-Denied Indoor and Outdoor Monocular Vision Aided Navigation and Control of Unmanned..." Simulation Platform for Vision Aided Inertial Navigation, thesis, September 2014, Jason Gek

  12. Depth scaling in phantom and monocular gap stereograms using absolute distance information.

    Science.gov (United States)

    Kuroki, Daiichiro; Nakamizo, Sachio

    2006-11-01

    The present study aimed to investigate whether the visual system scales apparent depth from binocularly unmatched features by using absolute distance information. In Experiment 1 we examined the effect of convergence on perceived depth in phantom stereograms [Gillam, B., & Nakayama, K. (1999). Quantitative depth for a phantom surface can be based on cyclopean occlusion cues alone. Vision Research, 39, 109-112.], monocular gap stereograms [Pianta, M. J., & Gillam, B. J. (2003a). Monocular gap stereopsis: manipulation of the outer edge disparity and the shape of the gap. Vision Research, 43, 1937-1950.] and random dot stereograms. In Experiments 2 and 3 we examined the effective range of viewing distances for scaling the apparent depths in these stereograms. The results showed that: (a) the magnitudes of perceived depths increased in all stereograms as the estimate of the viewing distance increased while keeping proximal and/or distal sizes of the stimuli constant, and (b) the effective range of viewing distances was significantly shorter in monocular gap stereograms. The first result indicates that the visual system scales apparent depth from unmatched features as well as that from horizontal disparity, while the second suggests that, at far distances, the strength of the depth signal from an unmatched feature in monocular gap stereograms is relatively weaker than that from horizontal disparity.
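    The scaling these experiments probe follows from standard disparity geometry: under the small-angle approximation, the depth interval signalled by a fixed retinal disparity grows with the square of the registered viewing distance, so the visual system must rescale the same disparity as its distance estimate changes. A minimal sketch (the function name and default interocular distance are our assumptions):

```python
def predicted_depth(disparity_rad, viewing_distance_m, interocular_m=0.065):
    """Small-angle prediction: depth ~ disparity * D**2 / I.

    A fixed disparity (in radians) maps to a depth interval that grows
    quadratically with viewing distance D for interocular separation I.
    """
    return disparity_rad * viewing_distance_m ** 2 / interocular_m
```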

  13. P2-1: Visual Short-Term Memory Lacks Sensitivity to Stereoscopic Depth Changes but Is Highly Sensitive to Monocular Depth Changes

    Directory of Open Access Journals (Sweden)

    Hae-In Kang

    2012-10-01

    Full Text Available Depth from both binocular disparity and monocular depth cues is presumably one of the most salient features characterizing the variety of visual objects in our daily life. It is therefore plausible to expect that human vision should be good at perceiving objects' depth changes arising from binocular disparities and monocular pictorial cues. However, what if the estimated depth needs to be remembered in visual short-term memory (VSTM) rather than just perceived? In a series of experiments, we asked participants to remember the depth of items in an array at the beginning of each trial. A set of test items followed the memory array, and the participants were asked to report whether one of the items in the test array had changed its depth from the remembered items. The items differed from each other in three depth conditions: (1) stereoscopic depth under binocular disparity manipulations, (2) monocular depth under pictorial cue manipulations, and (3) both stereoscopic and monocular depth. The accuracy of detecting depth change was substantially higher in the monocular condition than in the binocular condition, and the accuracy in the both-depth condition was moderately improved compared to the monocular condition. These results indicate that VSTM benefits more from monocular depth than stereoscopic depth, and further suggest that storage of depth information into VSTM requires both binocular and monocular information for optimal memory performance.

  14. Blender 3D cookbook

    CERN Document Server

    Valenza, Enrico

    2015-01-01

    This book is aimed at professionals who already have good 3D CGI experience with commercial packages and have now decided to try the open source Blender, wanting to experiment with something more complex than the average tutorials on the web. However, it is also aimed at intermediate Blender users who simply want to go a few steps further. It is taken for granted that you already know how to move inside the Blender interface, that you already have 3D modeling knowledge, and that you know basic 3D modeling and rendering concepts, for example, edge-loops, n-gons, or samples. In any case, it'

  15. Monocular Road Detection Using Structured Random Forest

    Directory of Open Access Journals (Sweden)

    Liang Xiao

    2016-05-01

    Full Text Available Road detection is a key task for autonomous land vehicles. Monocular vision-based road detection algorithms are mostly based on machine learning approaches and are usually cast as classification problems. However, the pixel-wise classifiers are faced with the ambiguity caused by changes in road appearance, illumination and weather. An effective way to reduce the ambiguity is to model the contextual information with structured learning and prediction. Currently, the widely used structured prediction model in road detection is the Markov random field or conditional random field. However, the random field-based methods require additional complex optimization after pixel-wise classification, making them unsuitable for real-time applications. In this paper, we present a structured random forest-based road-detection algorithm which is capable of modelling the contextual information efficiently. By mapping the structured label space to a discrete label space, the test function of each split node can be trained in a similar way to that of the classical random forests. Structured random forests make use of the contextual information of image patches as well as the structural information of the labels to get more consistent results. Besides this benefit, by predicting a batch of pixels in a single classification, the structured random forest-based road detection can be much more efficient than the conventional pixel-wise random forest. Experimental results tested on the KITTI-ROAD dataset and data collected in typical unstructured environments show that structured random forest-based road detection outperforms the classical pixel-wise random forest both in accuracy and efficiency.
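    The batch-prediction idea can be illustrated with the fusion step alone: each structured leaf emits a small patch of labels, and overlapping patches vote per pixel. A toy sketch with invented names, showing only how structured outputs are merged into a consistent per-pixel labelling, not the forest training itself:

```python
def aggregate_patch_votes(patch_predictions, image_shape):
    """Fuse overlapping structured (patch) label predictions by voting.

    Each prediction is ((row, col), patch): a small 2D list of 0/1 road
    labels anchored at pixel (row, col). Overlapping patches vote per
    pixel; the majority label wins.
    """
    h, w = image_shape
    votes = [[[0, 0] for _ in range(w)] for _ in range(h)]
    for (r0, c0), patch in patch_predictions:
        for dr, patch_row in enumerate(patch):
            for dc, label in enumerate(patch_row):
                r, c = r0 + dr, c0 + dc
                if 0 <= r < h and 0 <= c < w:
                    votes[r][c][label] += 1
    # Majority vote per pixel (ties and unvoted pixels default to non-road).
    return [[1 if v[1] > v[0] else 0 for v in row] for row in votes]
```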

  16. 3D Digital Modelling

    DEFF Research Database (Denmark)

    Hundebøl, Jesper

    wave of new building information modelling tools demands further investigation, not least because of industry representatives' somewhat coarse parlance: Now the word is spreading - 3D digital modelling is nothing less than a revolution, a shift of paradigm, a new alphabet... Research questions: based on empirical probes (interviews, observations, written inscriptions) within the Danish construction industry, this paper explores the organizational and managerial dynamics of 3D Digital Modelling. The paper intends to illustrate how the network of (non-)human actors engaged in the promotion (and arrest) of 3D Modelling (in Denmark) stabilizes, and to examine how 3D Modelling manifests itself in the early design phases of a construction project with a view to discussing the effects hereof for i.a. the management of the building process. Structure: the paper introduces a few, basic methodological concepts...

  17. DELTA 3D PRINTER

    Directory of Open Access Journals (Sweden)

    ȘOVĂILĂ Florin

    2016-07-01

    Full Text Available 3D printing is a widely used process in industry, under the generic name "rapid prototyping". The essential advantage of a 3D printer is that it allows designers to produce a prototype in a very short time, which is then tested and quickly remodeled, considerably reducing the time required to get from the prototype phase to the final product. At the same time, this technique can produce components with very precise forms, complex pieces that classical methods could have accomplished only in a large amount of time. This paper presents the stages of executing a 3D model, as well as the physical construction of a Delta 3D printer following that model.

  18. Validation of Data Association for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-01-01

    Full Text Available Simultaneous Localization and Mapping (SLAM) is a multidisciplinary problem with ramifications within several fields. One of the key aspects of its popularity and success is the data fusion produced by SLAM techniques, which provides strong and robust sensory systems even with simple devices, such as the webcams used in monocular SLAM. This work studies a novel batch validation algorithm, the highest order hypothesis compatibility test (HOHCT), against one of the most popular approaches, JCCB. The HOHCT approach was developed to improve the performance of delayed inverse-depth initialization monocular SLAM, a previously developed monocular SLAM algorithm based on parallax estimation. Both HOHCT and JCCB are extensively tested and compared within a delayed inverse-depth initialization monocular SLAM framework, showing the strengths and costs of this proposal.
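    As a concrete anchor for what such validation tests decide, the per-pairing individual compatibility gate is sketched below. Batch validators of this kind operate on top of gates like this one; the 2D measurement model is our simplifying assumption, and 5.991 is the standard chi-square threshold at 95% with 2 degrees of freedom:

```python
def mahalanobis_sq(innovation, cov):
    """Squared Mahalanobis distance for a 2D innovation with 2x2 covariance."""
    (a, b), (c, d) = cov
    x, y = innovation
    det = a * d - b * c
    # x^T * inv(cov) * x, with the 2x2 inverse written out explicitly.
    return (d * x * x - (b + c) * x * y + a * y * y) / det

def individually_compatible(innovation, cov, gate=5.991):
    """Chi-square gate: accept a measurement/feature pairing if the
    normalized innovation is small enough."""
    return mahalanobis_sq(innovation, cov) < gate
```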

  19. Professional Papervision3D

    CERN Document Server

    Lively, Michael

    2010-01-01

    Professional Papervision3D describes how Papervision3D works and how real world applications are built, with a clear look at essential topics such as building websites and games, creating virtual tours, and Adobe's Flash 10. Readers learn important techniques through hands-on applications, and build on those skills as the book progresses. The companion website contains all code examples, video step-by-step explanations, and a collada repository.

  20. AE3D

    Energy Technology Data Exchange (ETDEWEB)

    2016-06-20

    AE3D solves for the shear Alfven eigenmodes and eigenfrequencies in a toroidal magnetic fusion confinement device. The configuration can be either 2D (e.g. tokamak, reversed field pinch) or 3D (e.g. stellarator, helical reversed field pinch, tokamak with ripple). The equations solved are based on a reduced MHD model; sound wave coupling effects are not currently included.

  1. Hierarchical online appearance-based tracking for 3D head pose, eyebrows, lips, eyelids, and irises

    NARCIS (Netherlands)

    Orozco, Javier; Rudovic, Ognjen; Gonzalez Garcia, Jordi; Pantic, Maja

    2013-01-01

    In this paper, we propose an On-line Appearance-Based Tracker (OABT) for simultaneous tracking of 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previously proposed tracking approaches, which deal with face and gaze tracking separately, our OABT can als

  2. A stabilized adaptive appearance changes model for 3D head tracking

    NARCIS (Netherlands)

    Zivkovic, Zoran; Heijden, van der Ferdinand; Williams, A.Denise

    2001-01-01

    A simple method is presented for 3D head pose estimation and tracking in monocular image sequences. A generic geometric model is used. The initialization consists of aligning the perspective projection of the geometric model with the subjects head in the initial image. After the initialization, the

  3. A survey of monocular simultaneous localization and mapping

    Institute of Scientific and Technical Information of China (English)

    顾照鹏; 刘宏

    2015-01-01

    With the development of computer vision technology, monocular simultaneous localization and mapping (monocular SLAM) has gradually become one of the hot issues in the field of computer vision. This paper introduces a classification of monocular SLAM methods and reviews the present status of research from several aspects, including visual feature detection and matching, optimization of data association, depth acquisition of feature points, and map scale control. Monocular SLAM methods combined with other sensors are also reviewed, and significant issues needing further study are discussed.

  4. X3D2POV: Translator of X3D to POV-Ray

    Directory of Open Access Journals (Sweden)

    Andrea Castellanos Mendoza

    2011-01-01

    Full Text Available High-quality and low-quality interactive graphics represent two different approaches to computer graphics' 3D object representation. The former is mainly used to produce movie animation at high computational cost. The latter is used for producing interactive scenes as part of virtual-reality environments. Many file format specifications have appeared to satisfy the underlying models' needs; POV-Ray (Persistence of Vision) is an open source specification for rendering photorealistic images with the ray-tracing algorithm, and X3D (eXtensible 3D) is the VRML successor standard for producing web virtual-reality environments written in XML. X3D2POV has been introduced to render high-quality images from an X3D scene specification; it is a grammar translator tool from X3D code to POV-Ray code.
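    The kind of mapping such a grammar translator performs can be illustrated with a deliberately tiny fragment handler, covering only Transform translations and Sphere radii (the real X3D2POV handles the full grammar; this sketch is ours):

```python
import xml.etree.ElementTree as ET

def x3d_sphere_to_pov(x3d_fragment):
    """Translate Sphere nodes of a tiny X3D fragment into POV-Ray code."""
    root = ET.fromstring(x3d_fragment)
    out = []
    for tr in root.iter("Transform"):
        tx, ty, tz = (tr.get("translation") or "0 0 0").split()
        for sp in tr.iter("Sphere"):
            r = sp.get("radius") or "1"
            # POV-Ray sphere syntax: sphere { <center>, radius }
            out.append(f"sphere {{ <{tx}, {ty}, {tz}>, {r} }}")
    return "\n".join(out)

print(x3d_sphere_to_pov(
    "<Scene><Transform translation='1 2 3'><Shape><Sphere radius='0.5'/>"
    "</Shape></Transform></Scene>"))
```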

  5. The Conformal Camera in Modeling Active Binocular Vision

    Directory of Open Access Journals (Sweden)

    Jacek Turski

    2016-08-01

    Full Text Available Primate vision is an active process that constructs a stable internal representation of the 3D world based on 2D sensory inputs that are inherently unstable due to incessant eye movements. We present here a mathematical framework for processing visual information for a biologically-mediated active vision stereo system with asymmetric conformal cameras. This model utilizes the geometric analysis on the Riemann sphere developed in the group-theoretic framework of the conformal camera, thus far only applicable in modeling monocular vision. The asymmetric conformal camera model constructed here includes the fovea’s asymmetric displacement on the retina and the eye’s natural crystalline lens tilt and decentration, as observed in ophthalmological diagnostics. We extend the group-theoretic framework underlying the conformal camera to the stereo system with asymmetric conformal cameras. Our numerical simulation shows that the theoretical horopter curves in this stereo system are conics that well approximate the empirical longitudinal horopters of the primate vision system.
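    A small piece of the geometry involved is the lift of an image-plane point onto the Riemann sphere by inverse stereographic projection. This sketch is ours and stands in for only one step of the conformal-camera framework (function name assumed):

```python
def lift_to_sphere(x, y):
    """Inverse stereographic projection onto the unit (Riemann) sphere.

    The planar point (x, y) is lifted to the unit sphere through the
    north pole (0, 0, 1); the origin maps to the south pole.
    """
    s = x * x + y * y
    return (2 * x / (s + 1), 2 * y / (s + 1), (s - 1) / (s + 1))
```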

  6. Human Pose Estimation from Monocular Images: A Comprehensive Survey

    Directory of Open Access Journals (Sweden)

    Wenjuan Gong

    2016-11-01

    Full Text Available Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing. Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used.

  7. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
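    The stereo-vision route recovers each point's depth by triangulation over the camera baseline. A minimal sketch of the standard rectified-stereo relation, with illustrative numbers:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: camera separation in
    metres; disparity_px: horizontal pixel shift between the two views.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 12 cm baseline, 14 px disparity -> 6 m away.
print(stereo_depth(700.0, 0.12, 14.0))
```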

  8. 3DSEM: A 3D microscopy dataset

    Directory of Open Access Journals (Sweden)

    Ahmad P. Tafti

    2016-03-01

    Full Text Available The Scanning Electron Microscope (SEM as a 2D imaging instrument has been widely used in many scientific disciplines including biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However the SEM micrographs still remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide anatomic shape of micro-samples which allows for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples.

  9. 3D Projection Installations

    DEFF Research Database (Denmark)

    Halskov, Kim; Johansen, Stine Liv; Bach Mikkelsen, Michelle

    2014-01-01

    Three-dimensional projection installations are particular kinds of augmented spaces in which a digital 3-D model is projected onto a physical three-dimensional object, thereby fusing the digital content and the physical object. Based on interaction design research and media studies, this article contributes to the understanding of the distinctive characteristics of such a new medium, and identifies three strategies for designing 3-D projection installations: establishing space; interplay between the digital and the physical; and transformation of materiality. The principal empirical case, From Fingerplan to Loop City, is a 3-D projection installation presenting the history and future of city planning for the Copenhagen area in Denmark. The installation was presented as part of the 12th Architecture Biennale in Venice in 2010.

  10. 3D Spectroscopic Instrumentation

    CERN Document Server

    Bershady, Matthew A

    2009-01-01

    In this Chapter we review the challenges of, and opportunities for, 3D spectroscopy, and how these have led to new and different approaches to sampling astronomical information. We describe and categorize existing instruments on 4m and 10m telescopes. Our primary focus is on grating-dispersed spectrographs. We discuss how to optimize dispersive elements, such as VPH gratings, to achieve adequate spectral resolution, high throughput, and efficient data packing to maximize spatial sampling for 3D spectroscopy. We review and compare the various coupling methods that make these spectrographs "3D", including fibers, lenslets, slicers, and filtered multi-slits. We also describe Fabry-Perot and spatial-heterodyne interferometers, pointing out their advantages as field-widened systems relative to conventional, grating-dispersed spectrographs. We explore the parameter space all these instruments sample, highlighting regimes open for exploitation. Present instruments provide a foil for future development. We give an...

  11. Radiochromic 3D Detectors

    Science.gov (United States)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently, as films of higher sensitivity have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  12. Interaktiv 3D design (Interactive 3D Design)

    DEFF Research Database (Denmark)

    Villaume, René Domine; Ørstrup, Finn Rude

    2002-01-01

    The project investigates the potential of interactive 3D design via the Internet. Architect Jørn Utzon's project for Espansiva was developed as a building system with the aim of enabling manifold plan possibilities and manifold facade and spatial configurations. The system's building components have been digitized as 3D elements and made available. Via the Internet it is now possible to combine and test an endless range of the building types that the system was conceived and developed for.

  13. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... 6 years from prolonged viewing of the device's digital images, in order to avoid possible damage to ... clearly see the images when using 3-D digital products, this may indicate a vision or eye ...

  14. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... create the 3-D effect can confuse or overload the brain, causing some people discomfort even if ...

  15. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... function in children, nor are there persuasive, conclusive theories on how 3-D digital products could cause ...

  16. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... viewer has a problem with focusing or depth perception. Also, the techniques used to create the 3- ... or other conditions that persistently inhibit focusing, depth perception or normal 3-D vision, would have difficulty ...

  17. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... some people discomfort even if they have normal vision.  Taking a break from viewing usually relieves the discomfort. More on computer use and your eyes . Children and 3-D ...

  19. 3D Wire 2015

    DEFF Research Database (Denmark)

    Jordi, Moréton; F, Escribano; J. L., Farias

    This document is a general report on the implementation of gamification in the 3D Wire 2015 event. As the second gamification experience at this event, we have delved more deeply into the previous objectives (attracting the public to exhibition areas less frequented in previous years, and enhancing networking) and ha..., improves socialization and networking, improves media impact, improves the fun factor and improves encouragement of the production team...

  20. Shaping 3-D boxes

    DEFF Research Database (Denmark)

    Stenholt, Rasmus; Madsen, Claus B.

    2011-01-01

    Enabling users to shape 3-D boxes in immersive virtual environments is a non-trivial problem. In this paper, a new family of techniques for creating rectangular boxes of arbitrary position, orientation, and size is presented and evaluated. These new techniques are based solely on position data...

  1. Tangible 3D Modelling

    DEFF Research Database (Denmark)

    Hejlesen, Aske K.; Ovesen, Nis

    2012-01-01

    This paper presents an experimental approach to teaching 3D modelling techniques in an Industrial Design programme. The approach includes the use of tangible free form models as tools for improving the overall learning. The paper is based on lecturer and student experiences obtained through facil...

  2. 3D photoacoustic imaging

    Science.gov (United States)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of...
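The contrast between algebraic and l1-norm reconstruction described in this abstract can be illustrated on a toy underdetermined system. This is only a sketch: the operator, dimensions, target positions, and regularisation weight below are invented for illustration and are not the authors' actual 15-element imaging operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy underdetermined imaging problem: 15 measurements of a 64-voxel
# scene containing two sparse point targets (all values invented).
m, n = 15, 64
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[5, 40]] = [1.0, 0.5]
y = A @ x_true

# Algebraic reconstruction (Kaczmarz): cyclically project onto each row.
x_art = np.zeros(n)
for _ in range(200):
    for i in range(m):
        a = A[i]
        x_art += (y[i] - a @ x_art) / (a @ a) * a

# l1-norm reconstruction via ISTA (proximal gradient for the lasso).
lam = 0.01
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x_l1 = np.zeros(n)
for _ in range(5000):
    g = x_l1 - A.T @ (A @ x_l1 - y) / L
    x_l1 = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

err_art = np.linalg.norm(x_art - x_true)
err_l1 = np.linalg.norm(x_l1 - x_true)
print(f"ART error: {err_art:.3f}  l1 error: {err_l1:.3f}")
```

For a sparse scene the l1 solution should land much closer to the truth than the minimum-norm solution that Kaczmarz converges to, mirroring the abstract's observation that l1-norm reconstruction is superior for sparse objects.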

  3. A low cost PSD-based monocular motion capture system

    Science.gov (United States)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor to employ with commercial video game systems such as Microsoft's XBOX and Sony's Playstation II. The system is compact, low-cost, and only requires a one-time calibration at the factory. The system includes a PSD (Position Sensitive Detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. The micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments were performed to evaluate the performance of our prototype system. The experimental results show that the proposed system offers the advantages of compact size, low cost, easy installation, and frame rates high enough for high-speed motion tracking in games.
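The abstract's core idea, recovering a 3D marker position from a 2D PSD spot plus measured intensity, can be sketched minimally as follows. The pinhole model, the inverse-square falloff assumption, and all calibration constants below are our assumptions for illustration, not the paper's actual factory calibration.

```python
import numpy as np

# Hypothetical calibration constants (fixed once, e.g. at the factory).
f = 4.0          # lens focal length, mm (assumed)
I0 = 2.0e4       # intensity reading of the IR marker at 1 m range (assumed)

def marker_3d(u, v, intensity):
    """Recover a marker's 3D position from its PSD spot (u, v) in mm
    and its measured IR intensity, assuming inverse-square falloff."""
    r = np.sqrt(I0 / intensity)           # range along the ray, metres
    d = np.array([u / f, v / f, 1.0])     # ray direction from pinhole model
    d /= np.linalg.norm(d)
    return r * d

p = marker_3d(0.4, -0.2, 5.0e3)
print(np.round(p, 3))
```

The intensity fixes the distance along the viewing ray while the PSD spot fixes the ray itself, which is why a single sensor suffices.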

  4. Stereopsis has the edge in 3-D displays

    Science.gov (United States)

    Piantanida, T. P.

    The results of studies conducted at SRI International to explore differences in image requirements for depth and form perception with 3-D displays are presented. Monocular and binocular stabilization of retinal images was used to separate form and depth perception and to eliminate the retinal disparity input to stereopsis. Results suggest that depth perception is dependent upon illumination edges in the retinal image that may be invisible to form perception, and that the perception of motion-in-depth may be inhibited by form perception, and may be influenced by subjective factors such as ocular dominance and learning.

  5. Unoriented 3d TFTs

    CERN Document Server

    Bhardwaj, Lakshya

    2016-01-01

    This paper generalizes two facts about oriented 3d TFTs to the unoriented case. On one hand, it is known that oriented 3d TFTs having a topological boundary condition admit a state-sum construction known as the Turaev-Viro construction. This is related to the string-net construction of fermionic phases of matter. We show how Turaev-Viro construction can be generalized to unoriented 3d TFTs. On the other hand, it is known that the "fermionic" versions of oriented TFTs, known as Spin-TFTs, can be constructed in terms of "shadow" TFTs which are ordinary oriented TFTs with an anomalous Z_2 1-form symmetry. We generalize this correspondence to Pin+ TFTs by showing that they can be constructed in terms of ordinary unoriented TFTs with anomalous Z_2 1-form symmetry having a mixed anomaly with time-reversal symmetry. The corresponding Pin+ TFT does not have any anomaly for time-reversal symmetry however and hence it can be unambiguously defined on a non-orientable manifold. In case a Pin+ TFT admits a topological bou...

  6. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... 3-D digital products on eye and visual development, health, or function in children, nor are there persuasive, conclusive theories on how ... cause damage in children with healthy eyes. The development of normal 3-D vision in children is stimulated as they use their eyes in ...

  7. [3D in laparoscopy: state of the art].

    Science.gov (United States)

    Kunert, W; Storz, P; Müller, S; Axt, S; Kirschniak, A

    2013-03-01

    High-definition stereoscopic (3D) vision has been introduced into the operating theatre. This review presents the optical and physiological background as well as the state of the art of 3D in laparoscopy. The distinguishing features of 3D laparoscopes and monitors are listed, and characteristics of stereoscopy such as comfort zones and ghosting are explained. Suggestions for practical use in the clinical routine should help to extract the greatest possible benefit from the new technology.

  8. 3D and beyond

    Science.gov (United States)

    Fung, Y. C.

    1995-05-01

    This conference on physiology and function covers a wide range of subjects, including the vasculature and blood flow, the flow of gas, water, and blood in the lung, the neurological structure and function, the modeling, and the motion and mechanics of organs. Many technologies are discussed. I believe that the list would include a robotic photographer, to hold the optical equipment in a precisely controlled way to obtain the images for the user. Why are 3D images needed? They are to achieve certain objectives through measurements of some objects. For example, in order to improve performance in sports or beauty of a person, we measure the form, dimensions, appearance, and movements.

  9. 3D-model building of the jaw impression

    Science.gov (United States)

    Ahmed, Moumen T.; Yamany, Sameh M.; Hemayed, Elsayed E.; Farag, Aly A.

    1997-03-01

    A novel approach is proposed to obtain a record of the patient's occlusion using computer vision. Data acquisition is obtained using intra-oral video cameras. The technique utilizes shape from shading to extract 3D information from 2D views of the jaw, and a novel technique for 3D data registration using genetic algorithms. The resulting 3D model can be used for diagnosis, treatment planning, and implant purposes. The overall purpose of this research is to develop a model-based vision system for orthodontics to replace traditional approaches. This system will be flexible, accurate, and will reduce the cost of orthodontic treatments.

  10. 3D Motion Parameters Determination Based on Binocular Sequence Images

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Exactly capturing three dimensional (3D) motion information of an object is an essential and important task in computer vision, and is also one of the most difficult problems. In this paper, a binocular vision system and a method for determining 3D motion parameters of an object from binocular sequence images are introduced. The main steps include camera calibration, the matching of motion and stereo images, 3D feature point correspondences and resolving the motion parameters. Finally, the experimental results of acquiring the motion parameters of the objects with uniform velocity and acceleration in the straight line based on the real binocular sequence images by the mentioned method are presented.
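The 3D feature-point correspondence step mentioned above is commonly resolved by linear (DLT) triangulation of matched points from the two calibrated cameras; per-frame 3D points then yield the motion parameters. The camera matrices and point below are assumed for illustration, not the paper's calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its pixel
    projections x1, x2 in two calibrated cameras P1, P2 (3x4)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Two hypothetical calibrated cameras with a 0.2 m horizontal baseline.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])

X_true = np.array([0.1, -0.05, 2.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X_hat = triangulate(P1, P2, x1, x2)
print(np.round(X_hat, 6))
```

Differencing the triangulated positions of the same feature across successive frames gives the velocity estimates the abstract refers to.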

  11. 3D Surgical Simulation

    Science.gov (United States)

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  12. TOWARDS: 3D INTERNET

    Directory of Open Access Journals (Sweden)

    Ms. Swapnali R. Ghadge

    2013-08-01

    Full Text Available In today’s ever-shifting media landscape, it can be a complex task to find effective ways to reach your desired audience. As traditional media such as television continue to lose audience share, one venue in particular stands out for its ability to attract highly motivated audiences and for its tremendous growth potential: the 3D Internet. The concept of '3D Internet' has recently come into the spotlight in the R&D arena, catching the attention of many people and leading to a lot of discussion. Basically, one can look into this matter from a few different perspectives: visualization and representation of information, and creation and transportation of information, among others. All of them still constitute research challenges, as no products or services are yet available or foreseen for the near future. Nevertheless, one can try to envisage the directions that can be taken towards achieving this goal. People who take part in virtual worlds stay online longer with a heightened level of interest. To take advantage of that interest, diverse businesses and organizations have claimed an early stake in this fast-growing market. They include technology leaders such as IBM, Microsoft, and Cisco, companies such as BMW, Toyota, Circuit City, Coca Cola, and Calvin Klein, and scores of universities, including Harvard, Stanford and Penn State.

  13. Fiducial-based monocular 3D displacement measurement of breakwater armour unit models.

    CSIR Research Space (South Africa)

    Vieira, R

    2008-11-01

    Full Text Available This paper presents a fiducial-based approach to monitoring the movement of breakwater armour units in a model hall environment. Target symbols with known dimensions are attached to the physical models, allowing the recovery of three...

  14. Effects of lens distortion calibration patterns on the accuracy of monocular 3D measurements

    CSIR Research Space (South Africa)

    De Villiers, J

    2011-11-01

    Full Text Available A variety of lens distortion modelling techniques exist. Since they make use of different calibration metrics it is difficult to select one over the others. This work aims to compare lens distortion modelling techniques and calibration patterns in a...

  15. Large-scale monocular FastSLAM2.0 acceleration on an embedded heterogeneous architecture

    Science.gov (United States)

    Abouzahir, Mohamed; Elouardi, Abdelhafid; Bouaziz, Samir; Latif, Rachid; Tajer, Abdelouahed

    2016-12-01

    Simultaneous localization and mapping (SLAM) is widely used in many robotic applications and autonomous navigation. This paper presents a study of FastSLAM2.0 computational complexity based on a monocular vision system. The algorithm is intended to operate with many particles in a large-scale environment. FastSLAM2.0 was partitioned into functional blocks, allowing a hardware-software matching on a CPU-GPGPU-based SoC architecture. Performance in terms of processing time and localization accuracy was evaluated using a real indoor dataset. Results demonstrate that an optimized and efficient CPU-GPGPU partitioning yields accurate localization and high-speed execution of a monocular FastSLAM2.0-based embedded system operating under real-time constraints.

  16. Disparity biasing in depth from monocular occlusions.

    Science.gov (United States)

    Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S

    2011-07-15

    Monocular occlusions have been shown to play an important role in stereopsis. Among other contributions to binocular depth perception, monocular occlusions can create percepts of illusory occluding surfaces. It has been argued that the precise location in depth of these illusory occluders is based on the constraints imposed by occlusion geometry. Tsirlin et al. (2010) proposed that when these constraints are weak, the depth of the illusory occluder can be biased by a neighboring disparity-defined feature. In the present work we test this hypothesis using a variety of stimuli. We show that when monocular occlusions provide only partial constraints on the magnitude of depth of the illusory occluders, the perceived depth of the occluders can be biased by disparity-defined features in the direction unrestricted by the occlusion geometry. Using this disparity bias phenomenon we also show that in illusory occluder stimuli where disparity information is present, but weak, most observers rely on disparity while some use occlusion information instead to specify the depth of the illusory occluder. Taken together our experiments demonstrate that in binocular depth perception disparity and monocular occlusion cues interact in complex ways to resolve perceptual ambiguity. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, J.J.; Albertazzi, L.; Doorn, A.J. van; Ee, R. van; Grind, W.A. van de; Kappers, A.M.L.; Lappin, J.S.; Norman, J.F.; Oomes, A.H.J.; Pas, S.F. te; Phillips, F.; Pont, S.C.; Richards, W.A.; Todd, J.T.; Verstraten, F.A.J.; Vries, S.C. de

    2010-01-01

    The issue of the existence of planes—understood as the carriers of a nexus of straight lines—in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  18. 3D-kompositointi

    OpenAIRE

    Piirainen, Jere

    2015-01-01

    The thesis reviews the most common techniques involved in 3D compositing, as well as the programs and plug-ins used for compositing. The work also traces the roots of compositing from the end of the 19th century up to modern digital compositing. At its simplest, compositing is the seamless joining of several images into a single convincing whole. Although the process demands a visual eye, it also requires a great deal of technical skill. In addition, a basic understanding of camera...

  19. Shaping 3-D boxes

    DEFF Research Database (Denmark)

    Stenholt, Rasmus; Madsen, Claus B.

    2011-01-01

    Enabling users to shape 3-D boxes in immersive virtual environments is a non-trivial problem. In this paper, a new family of techniques for creating rectangular boxes of arbitrary position, orientation, and size is presented and evaluated. These new techniques are based solely on position data..., making them different from typical, existing box shaping techniques. The basis of the proposed techniques is a new algorithm for constructing a full box from just three of its corners. The evaluation of the new techniques compares their precision and completion times in a 9 degree-of-freedom (DoF) docking experiment against an existing technique, which requires the user to perform the rotation and scaling of the box explicitly. The precision of the users' box construction is evaluated by a novel error metric measuring the difference between two boxes. The results of the experiment strongly indicate...

  20. Recovery of neurofilament following early monocular deprivation

    Directory of Open Access Journals (Sweden)

    Timothy P O'Leary

    2012-04-01

    Full Text Available A brief period of monocular deprivation in early postnatal life can alter the structure of neurons within deprived-eye-receiving layers of the dorsal lateral geniculate nucleus. The modification of structure is accompanied by a marked reduction in labeling for neurofilament, a protein that composes the stable cytoskeleton and that supports neuron structure. This study examined the extent of neurofilament recovery in monocularly deprived cats that either had their deprived eye opened (binocular recovery) or had the deprivation reversed to the fellow eye (reverse occlusion). The degree to which recovery was dependent on visually-driven activity was examined by placing monocularly deprived animals in complete darkness (dark rearing). The loss of neurofilament and the reduction of soma size caused by monocular deprivation were both ameliorated equally following either binocular recovery or reverse occlusion for 8 days. Though monocularly deprived animals placed in complete darkness showed recovery of soma size, there was a generalized loss of neurofilament labeling that extended to originally non-deprived layers. Overall, these results indicate that recovery of soma size is achieved by removal of the competitive disadvantage of the deprived eye, and occurred even in the absence of visually-driven activity. Recovery of neurofilament occurred when the competitive disadvantage of the deprived eye was removed, but unlike the recovery of soma size, was dependent upon visually-driven activity. The role of neurofilament in providing stable neural structure raises the intriguing possibility that dark rearing, which reduced overall neurofilament levels, could be used to reset the deprived visual system so as to make it more amenable to treatment by experiential manipulations.

  1. Efficient 3D scene modeling and mosaicing

    CERN Document Server

    Nicosevici, Tudor

    2013-01-01

    This book proposes a complete pipeline for monocular (single camera) based 3D mapping of terrestrial and underwater environments. The aim is to provide a solution to large-scale scene modeling that is both accurate and efficient. To this end, we have developed a novel Structure from Motion algorithm that increases mapping accuracy by registering camera views directly with the maps. The camera registration uses a dual approach that adapts to the type of environment being mapped.   In order to further increase the accuracy of the resulting maps, a new method is presented, allowing detection of images corresponding to the same scene region (crossovers). Crossovers are then used in conjunction with global alignment methods in order to greatly reduce estimation errors, especially when mapping large areas. Our method is based on the Visual Bag of Words paradigm (BoW), offering a more efficient and simpler solution by eliminating the training stage generally required by state-of-the-art BoW algorithms.   Also, towards dev...
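The Bag of Words crossover-detection idea can be sketched as comparing tf-idf weighted visual-word histograms with cosine similarity. The vocabulary size, frame contents, and pre-quantized descriptor ids below are invented for illustration; the book's actual descriptor extraction and quantization are not reproduced here.

```python
import numpy as np

def bow_histograms(quantized_frames, vocab_size):
    """Visual-word histogram per frame from pre-quantized descriptor ids."""
    H = np.zeros((len(quantized_frames), vocab_size))
    for i, words in enumerate(quantized_frames):
        for w in words:
            H[i, w] += 1
    return H

def tfidf_cosine(H):
    """Pairwise cosine similarity between tf-idf weighted histograms."""
    n = H.shape[0]
    df = (H > 0).sum(axis=0)                   # document frequency per word
    idf = np.log(n / np.maximum(df, 1))
    W = H * idf
    W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-12
    return W @ W.T

# Hypothetical quantized descriptors: frames 0 and 2 revisit the same region.
frames = [[1, 1, 4, 7], [2, 3, 3, 5], [1, 4, 4, 7]]
S = tfidf_cosine(bow_histograms(frames, 8))
print(np.round(S, 2))
```

High off-diagonal similarity flags candidate crossovers, which global alignment can then exploit to reduce accumulated drift.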

  2. A new method of 3D scene recognition from still images

    Science.gov (United States)

    Zheng, Li-ming; Wang, Xing-song

    2014-04-01

    Most methods of monocular visual three-dimensional (3D) scene recognition involve supervised machine learning. However, these methods often rely on prior knowledge: specifically, they learn the image scene as part of a training dataset. For this reason, when the sampling equipment or scene is changed, monocular visual 3D scene recognition may fail. To cope with this problem, a new method of unsupervised learning for monocular visual 3D scene recognition is here proposed. First, the image undergoes superpixel segmentation based on the CIELAB color space values L, a, and b and on the coordinate values x and y of pixels, forming a superpixel image with a specific density. Second, a spectral clustering algorithm based on the superpixels' color characteristics and neighboring relationships is used to reduce the dimensions of the superpixel image. Third, the fuzzy distribution density functions representing sky, ground, and façade are multiplied with the segment pixels, and the expectations of these segments are obtained; a preliminary classification of sky, ground, and façade is generated in this way. Fourth, the most accurate classification images of sky, ground, and façade are extracted through tier-1 wavelet sampling and the Manhattan direction feature. Finally, a depth perception map is generated based on the pinhole imaging model and the linear perspective information of the ground surface. Here, 400 images of Make3D Image data from the Cornell University website were used to test the algorithm. The experimental results showed that this unsupervised learning method provides a more effective monocular visual 3D scene recognition model than other methods.
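The spectral-clustering step in the second stage can be sketched minimally: build a Gaussian affinity over superpixel feature vectors and split on the sign of the Fiedler vector of the normalized graph Laplacian. The feature vectors below stand in for superpixel mean colours and are invented; the paper's actual features and cluster count are not reproduced.

```python
import numpy as np

def spectral_bipartition(features, sigma=0.5):
    """Two-way spectral partition of feature vectors using the sign of
    the Fiedler vector of the normalized graph Laplacian."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))                 # Gaussian affinity
    D = W.sum(axis=1)
    L = np.eye(len(W)) - W / np.sqrt(np.outer(D, D))   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                # eigenvector of 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)

# Hypothetical features of six superpixels: three sky-like, three ground-like.
feats = np.array([[0.9, 0.1], [0.85, 0.15], [0.95, 0.05],
                  [0.1, 0.8], [0.15, 0.85], [0.05, 0.9]])
labels = spectral_bipartition(feats)
print(labels)
```

In the paper's pipeline the resulting clusters reduce the superpixel graph before the fuzzy sky/ground/façade classification is applied.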

  3. Multi-view and 3D deformable part models.

    Science.gov (United States)

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they have been modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer, by building an object detector which leverages the expressive power of 3D object representations while at the same time can be robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]).

  4. 3D medical collaboration technology to enhance emergency healthcare

    DEFF Research Database (Denmark)

    Welch, Gregory F; Sonnenwald, Diane H.; Fuchs, Henry

    2009-01-01

    ... these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays ... of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare.

  5. Super deep 3D images from a 3D omnifocus video camera.

    Science.gov (United States)

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.
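The perspective shift the abstract discusses scales inversely with depth (d = f·B/Z for focal length f, baseline B, and depth Z), so an all-in-focus scene that keeps very near objects sharp also presents the largest disparities for binocular fusion. The numbers below are illustrative assumptions, not the omnifocus camera's actual parameters.

```python
# Disparity model d = f*B/Z with assumed, illustrative parameters.
f_px = 1400.0   # focal length in pixels (assumed)
B = 0.065       # stereo baseline in metres (assumed, roughly eye separation)

def disparity(Z):
    """Horizontal perspective shift in pixels for an object at depth Z (m)."""
    return f_px * B / Z

near, far = 0.5, 50.0
shift_range = disparity(near) - disparity(far)
print(f"disparity at {near} m: {disparity(near):.1f} px, "
      f"at {far} m: {disparity(far):.1f} px, range: {shift_range:.1f} px")
```

The wide disparity range between the sharp near and far content is what makes greater demands on binocular fusion, motivating the shift-reduction attachment described above.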

  6. Intraoral 3D scanner

    Science.gov (United States)

    Kühmstedt, Peter; Bräuer-Burchardt, Christian; Munkelt, Christoph; Heinze, Matthias; Palme, Martin; Schmidt, Ingo; Hintersehr, Josef; Notni, Gunther

    2007-09-01

    Here a new set-up of a 3D scanning system for CAD/CAM in the dental industry is proposed. The system is designed for direct scanning of dental preparations within the mouth. The measuring process is based on a phase correlation technique in combination with fast fringe projection in a stereo arrangement. The novelty of the approach is characterized by the following features: a phase correlation between the phase values of the images of two cameras is used for the co-ordinate calculation. This works contrary to the usage of only phase values (phasogrammetry) or classical triangulation (phase values and camera image co-ordinate values) for the determination of the co-ordinates. The main advantage of the method is that the absolute value of the phase at each point does not directly determine the co-ordinate, so errors in the determination of the co-ordinates are prevented. Furthermore, using the epipolar geometry of the stereo-like arrangement, the phase unwrapping problem of fringe analysis can be solved. The endoscope-like measurement system contains one projection and two camera channels for illumination and observation of the object, respectively. The new system has a measurement field of nearly 25 mm × 15 mm. The user can measure two or three teeth at one time, so the system can be used for scanning anything from a single tooth up to bridge preparations. In the paper the first realization of the intraoral scanner is described.
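Fringe-projection systems like the one above recover a wrapped phase value per pixel before any correlation between cameras; a standard four-step phase-shifting sketch is shown below. This is a textbook formula for illustration; the paper's actual phase-retrieval details are not specified in the abstract.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four fringe images shifted by 90 degrees each:
    I_k = A + B*cos(phi + (k-1)*pi/2)."""
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic check: rebuild the four shifted images from a known phase ramp.
phi = np.linspace(-np.pi + 0.01, np.pi - 0.01, 50)
A, B = 0.5, 0.4
I = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
phi_rec = four_step_phase(*I)
print(np.abs(phi_rec - phi).max())
```

Correlating such per-pixel phase values between the two cameras, as the abstract describes, avoids relying on the absolute phase at any single point.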

  7. Martian terrain - 3D

    Science.gov (United States)

    1997-01-01

    This area of terrain near the Sagan Memorial Station was taken on Sol 3 by the Imager for Mars Pathfinder (IMP). 3D glasses are necessary to identify surface detail. The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters. Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  8. 3D Printing and 3D Bioprinting in Pediatrics.

    Science.gov (United States)

    Vijayavenkataraman, Sanjairaj; Fuh, Jerry Y H; Lu, Wen Feng

    2017-07-13

    Additive manufacturing, commonly referred to as 3D printing, is a technology that builds three-dimensional structures and components layer by layer. Bioprinting is the use of 3D printing technology to fabricate tissue constructs for regenerative medicine from cell-laden bio-inks. 3D printing and bioprinting have huge potential in revolutionizing the field of tissue engineering and regenerative medicine. This paper reviews the application of 3D printing and bioprinting in the field of pediatrics.

  9. 3D printing for dummies

    CERN Document Server

    Hausman, Kalani Kirk

    2014-01-01

    Get started printing out 3D objects quickly and inexpensively! 3D printing is no longer just a figment of your imagination. This remarkable technology is coming to the masses with the growing availability of 3D printers. 3D printers create 3-dimensional layered models and they allow users to create prototypes that use multiple materials and colors.  This friendly-but-straightforward guide examines each type of 3D printing technology available today and gives artists, entrepreneurs, engineers, and hobbyists insight into the amazing things 3D printing has to offer. You'll discover methods for

  10. Monocular and binocular depth discrimination thresholds.

    Science.gov (United States)

    Kaye, S B; Siddiqui, A; Ward, A; Noonan, C; Fisher, A C; Green, J R; Brown, M C; Wareing, P A; Watt, P

    1999-11-01

    Measurement of stereoacuity at varying distances, by real or simulated depth stereoacuity tests, is helpful in the evaluation of patients with binocular imbalance or strabismus. Although the cue of binocular disparity underpins stereoacuity tests, there may be variable amounts of other binocular and monocular cues inherent in a stereoacuity test. In such circumstances, a combined monocular and binocular threshold of depth discrimination may be measured--stereoacuity conventionally referring to the situation where binocular disparity giving rise to retinal disparity is the only cue present. A child-friendly variable distance stereoacuity test (VDS) was developed, with a method for determining the binocular depth threshold (BT) from the combined monocular and binocular threshold of depth discrimination (CT). Subjects with normal binocular function, reduced binocular function, and apparently absent binocularity were included. To measure the threshold of depth discrimination, subjects were required by means of a hand control to align two electronically controlled spheres at viewing distances of 1, 3, and 6 m. Stereoacuity was also measured using the TNO, Frisby, and Titmus stereoacuity tests. BTs were calculated according to the function BT = arctan[(1/tan αC - 1/tan αM)^(-1)], where αC and αM are the angles subtended at the nodal points by objects situated at the monocular threshold (αM) and the combined monocular-binocular threshold (αC) of discrimination. In subjects with good binocularity, BTs were similar to their combined thresholds, whereas subjects with reduced and apparently absent binocularity had binocular thresholds 4 and 10 times higher than their combined thresholds (CT). The VDS binocular thresholds showed significantly higher correlation and agreement with the TNO test and the binocular thresholds of the Frisby and Titmus tests, than the corresponding combined thresholds (p = 0.0019). 
The VDS was found to be an easy to use real depth
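The threshold function BT = arctan[(1/tan αC - 1/tan αM)^(-1)] used in this record can be sketched numerically; the angle values in the example below are illustrative inputs in radians, not data from the study.

```python
import math

def binocular_threshold(alpha_c: float, alpha_m: float) -> float:
    """Binocular threshold BT = arctan[(1/tan(alpha_c) - 1/tan(alpha_m))^-1].

    alpha_c: angle subtended at the nodal points by an object at the
             combined monocular-binocular threshold (radians).
    alpha_m: angle for an object at the purely monocular threshold (radians).
    """
    inverse_difference = 1.0 / math.tan(alpha_c) - 1.0 / math.tan(alpha_m)
    return math.atan(1.0 / inverse_difference)
```

Note that as αC approaches αM (binocular viewing adds nothing over monocular), the bracketed difference shrinks and BT grows toward π/2, consistent with the report that weak binocularity yields much higher binocular thresholds.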

  11. 3D game environments create professional 3D game worlds

    CERN Document Server

    Ahearn, Luke

    2008-01-01

    The ultimate resource to help you create triple-A quality art for a variety of game worlds; 3D Game Environments offers detailed tutorials on creating 3D models, applying 2D art to 3D models, and clear concise advice on issues of efficiency and optimization for a 3D game engine. Using Photoshop and 3ds Max as his primary tools, Luke Ahearn explains how to create realistic textures from photo source and uses a variety of techniques to portray dynamic and believable game worlds.From a modern city to a steamy jungle, learn about the planning and technological considerations for 3D modelin

  12. Geopressure and Trap Integrity Predictions from 3-D Seismic Data: Case Study of the Greater Ughelli Depobelt, Niger Delta

    Directory of Open Access Journals (Sweden)

    Opara A.I.

    2012-05-01

    Full Text Available The deep drilling campaign in the Niger Delta has demonstrated the need for a detailed geopressure and trap integrity (drilling margin) analysis as an integral and required step in prospect appraisal. Pre-drill pore pressure prediction from 3-D seismic data was carried out in the Greater Ughelli depobelt, Niger Delta basin, to predict subsurface pressure regimes, and was further applied in the determination of hydrocarbon column height, reservoir continuity, fault seal and trap integrity. Results revealed that geopressured sedimentary formations are common within the more prolific deeper hydrocarbon reserves in the Niger Delta basin. The depth to top of mild geopressure (0.60 psi/ft) ranges from about 10 000 ftss to over 30 000 ftss. The distribution of geopressures shows a well-defined trend, with depth to top of geopressures increasing towards the central part of the basin. This variation in the depth of top of geopressures in the area is believed to be related to faulting and shale diapirism, with the top of geopressures becoming shallow with shale diapirism and deep with sedimentation. Post-depositional faulting is believed to have controlled the configuration of the geopressure surface and to have played later roles in modifying the present-day depth to top of geopressures. In general, geopressure in this area is often associated with simple rollover structures bounded by growth faults, especially at the hanging walls, while hydrostatic pressures were observed in areas with k-faults and collapsed-crest structures.

  13. 3D Printing an Octohedron

    OpenAIRE

    Aboufadel, Edward F.

    2014-01-01

    The purpose of this short paper is to describe a project to manufacture a regular octahedron on a 3D printer. We assume that the reader is familiar with the basics of 3D printing. In the project, we use fundamental ideas to calculate the vertices and faces of an octahedron. Then, we utilize the OpenSCAD program to create a virtual 3D model and a STereoLithography (.stl) file that can be used by a 3D printer.
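The vertex-and-face computation the abstract describes can be sketched in plain Python rather than OpenSCAD; the minimal ASCII STL writer below is an illustrative sketch, not the paper's own code.

```python
# Vertices of a regular octahedron centered at the origin: the six
# unit points along the coordinate axes.
VERTICES = [(1, 0, 0), (-1, 0, 0),
            (0, 1, 0), (0, -1, 0),
            (0, 0, 1), (0, 0, -1)]

# Each triangular face joins one +-x vertex, one +-y vertex, and one
# +-z vertex, giving the eight faces of the octahedron.
FACES = [(x, y, z)
         for x in (0, 1) for y in (2, 3) for z in (4, 5)]

def to_ascii_stl(name="octahedron"):
    """Serialize the octahedron as a minimal ASCII STL string."""
    lines = [f"solid {name}"]
    for a, b, c in FACES:
        # Normals are left as zero; most slicers recompute them.
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for idx in (a, b, c):
            vx, vy, vz = VERTICES[idx]
            lines.append(f"      vertex {vx} {vy} {vz}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)
```

Writing the returned string to a `.stl` file yields a model a slicer can ingest directly.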

  14. Salient Local 3D Features for 3D Shape Retrieval

    CERN Document Server

    Godil, Afzal

    2011-01-01

    In this paper we describe a new formulation for the 3D salient local features based on the voxel grid inspired by the Scale Invariant Feature Transform (SIFT). We use it to identify the salient keypoints (invariant points) on a 3D voxelized model and calculate invariant 3D local feature descriptors at these keypoints. We then use the bag of words approach on the 3D local features to represent the 3D models for shape retrieval. The advantages of the method are that it can be applied to rigid as well as to articulated and deformable 3D models. Finally, this approach is applied for 3D Shape Retrieval on the McGill articulated shape benchmark and then the retrieval results are presented and compared to other methods.
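The bag-of-words step can be sketched as follows, assuming local 3D descriptors have already been computed at the salient keypoints; the codebook and descriptors in the example are hypothetical, not from the benchmark.

```python
import math

def nearest_word(descriptor, codebook):
    """Index of the codebook entry closest to the descriptor (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(codebook)), key=lambda i: dist(descriptor, codebook[i]))

def bag_of_words(descriptors, codebook):
    """Normalized histogram of visual-word assignments for one 3D model."""
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]
```

Retrieval then reduces to comparing these fixed-length histograms (e.g., by histogram intersection or cosine similarity), regardless of how many keypoints each model produced.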

  15. Machine Vision Perception of the Human Body: 3D Behavior Recognition Algorithms

    Institute of Scientific and Technical Information of China (English)

    韩雪; 齐园

    2013-01-01

    This paper studies the accurate recognition of human behavior in machine vision perception. The information acquired by machine vision is mostly two-dimensional planar information; when synthesizing three-dimensional perceptual information, the traditional factorization method uses a fixed number of shape bases, which makes complex behavior features difficult to express, introduces deviations into the extracted features, and keeps recognition accuracy low. To avoid these defects, a new algorithm for recognizing three-dimensional human behavior in machine vision perception is proposed. Human behavior images are acquired and their contour regions detected; the detection intervals are initially partitioned, and ambiguity is eliminated by mapping the indeterminate three-dimensional features into 3D space, providing the data basis for recognition. The disambiguated three-dimensional behavior features are then used as feature components to recognize three-dimensional human behavior. Experimental results show that the algorithm recognizes human behavior accurately and greatly improves recognition accuracy.

  16. Encoding of 3D Structure in the Visual Scene: A New Conceptualization

    Science.gov (United States)

    2013-03-01

    retinal disparity ... monocular depth cues derived from the retinal images. It is not until this depth ... with plausible neural components that could reside in a locus of 3D reconstruction in the

  17. New approach to navigation: matching sequential images to 3D terrain maps

    Science.gov (United States)

    Zhang, Tianxu; Hu, Bo; Li, Wei

    1998-03-01

    In this paper an efficient image matching algorithm is presented for use in aircraft navigation. A sequence of images, in which each two successive images partially overlap, is sensed by a monocular optical system. 3D undulation features are recovered from the image pairs and then matched against a reference undulation feature map. Finally, the aircraft position is estimated by minimizing a Hausdorff distance measure. A simulation experiment using real terrain data is reported.
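The Hausdorff-distance criterion minimized in the matching step can be sketched in pure Python; the 2D point sets below stand in for the recovered undulation features and are purely illustrative.

```python
import math

def directed_hausdorff(A, B):
    """max over a in A of (min over b in B of ||a - b||)."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two non-empty point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))
```

In a matching loop, the candidate map position whose feature set minimizes this distance to the recovered features would be taken as the position estimate.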

  18. Quantitative perceived depth from sequential monocular decamouflage.

    Science.gov (United States)

    Brooks, K R; Gillam, B J

    2006-03-01

    We present a novel binocular stimulus without conventional disparity cues whose presence and depth are revealed by sequential monocular stimulation (delay > or = 80 ms). Vertical white lines were occluded as they passed behind an otherwise camouflaged black rectangular target. The location (and instant) of the occlusion event, decamouflaging the target's edges, differed in the two eyes. Probe settings to match the depth of the black rectangular target showed a monotonic increase with simulated depth. Control tests discounted the possibility of subjects integrating retinal disparities over an extended temporal window or using temporal disparity. Sequential monocular decamouflage was found to be as precise and accurate as conventional simultaneous stereopsis with equivalent depths and exposure durations.

  19. Monocular depth effects on perceptual fading.

    Science.gov (United States)

    Hsu, Li-Chuan; Kramer, Peter; Yeh, Su-Ling

    2010-08-06

    After prolonged viewing, a static target among moving non-targets is perceived to repeatedly disappear and reappear. An uncrossed stereoscopic disparity of the target facilitates this Motion-Induced Blindness (MIB). Here we test whether monocular depth cues can affect MIB too, and whether they can also affect perceptual fading in static displays. Experiment 1 reveals an effect of interposition: more MIB when the target appears partially covered by, than when it appears to cover, its surroundings. Experiment 2 shows that the effect is indeed due to interposition and not to the target's contours. Experiment 3 induces depth with the watercolor illusion and replicates Experiment 1. Experiments 4 and 5 replicate Experiments 1 and 3 without the use of motion. Since almost any stimulus contains a monocular depth cue, we conclude that perceived depth affects perceptual fading in almost any stimulus, whether dynamic or static. Copyright 2010 Elsevier Ltd. All rights reserved.

  20. Monocular discs in the occlusion zones of binocular surfaces do not have quantitative depth--a comparison with Panum's limiting case.

    Science.gov (United States)

    Gillam, Barbara; Cook, Michael; Blackburn, Shane

    2003-01-01

    Da Vinci stereopsis is defined as apparent depth seen in a monocular object laterally adjacent to a binocular surface in a position consistent with its occlusion by the other eye. It is widely regarded as a new form of quantitative stereopsis because the depth seen is quantitatively related to the lateral separation of the monocular element and the binocular surface (Nakayama and Shimojo 1990 Vision Research 30 1811-1825). This can be predicted on the basis that the more separated the monocular element is from the surface the greater its minimum depth behind the surface would have to be to account for its monocular occlusion. Supporting evidence, however, has used narrow bars as the monocular elements, raising the possibility that quantitative depth as a function of separation could be attributable to Panum's limiting case (double fusion) rather than to a new form of stereopsis. We compared the depth performance of monocular objects fusible with the edge of the surface in the contralateral eye (lines) and non-fusible objects (disks) and found that, although the fusible objects showed highly quantitative depth, the disks did not, appearing behind the surface to the same degree at all separations from it. These findings indicate that, although there is a crude sense of depth for discrete monocular objects placed in a valid position for uniocular occlusion, depth is not quantitative. They also indicate that Panum's limiting case is not, as has sometimes been claimed, itself a case of da Vinci stereopsis since fusibility is a critical factor for seeing quantitative depth in discrete monocular objects relative to a binocular surface.

  1. Using Ignorance in 3D Scene Understanding

    Directory of Open Access Journals (Sweden)

    Bogdan Harasymowicz-Boggio

    2014-01-01

    Full Text Available Awareness of its own limitations is a fundamental feature of the human sight, which has been almost completely omitted in computer vision systems. In this paper we present a method of explicitly using information about perceptual limitations of a 3D vision system, such as occluded areas, limited field of view, loss of precision along with distance increase, and imperfect segmentation for a better understanding of the observed scene. The proposed mechanism integrates metric and semantic inference using Dempster-Shafer theory, which makes it possible to handle observations that have different degrees and kinds of uncertainty. The system has been implemented and tested in a real indoor environment, showing the benefits of the proposed approach.
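The Dempster-Shafer integration the abstract mentions rests on Dempster's rule of combination, which can be sketched as follows; the mass functions in the example are hypothetical, not taken from the paper.

```python
def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset hypotheses to masses summing to 1.
    Mass assigned to the whole frame (e.g. frozenset({'a', 'b'}))
    expresses ignorance. Returns the conflict-renormalized combination.
    """
    combined = {}
    conflict = 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2  # mass on contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully contradict")
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}
```

Explicit mass on the full frame is what lets such a system represent "ignorance" (occlusion, limited field of view) rather than forcing a probability onto unobserved hypotheses.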

  2. Comparative evaluation of monocular augmented-reality display for surgical microscopes.

    Science.gov (United States)

    Rodriguez Palma, Santiago; Becker, Brian C; Lobes, Louis A; Riviere, Cameron N

    2012-01-01

    Medical augmented reality has undergone much development recently. However, there is a lack of studies quantitatively comparing the different display options available. This paper compares the effects of different graphical overlay systems in a simple micromanipulation task with "soft" visual servoing. We compared positioning accuracy in a real-time visually-guided task using Micron, an active handheld tremor-canceling microsurgical instrument, using three different displays: 2D screen, 3D screen, and microscope with monocular image injection. Tested with novices and an experienced vitreoretinal surgeon, display of virtual cues in the microscope via an augmented reality injection system significantly decreased 3D error (p < 0.05) compared to the 2D and 3D monitors when confounding factors such as magnification level were normalized.

  3. Sensing and compressing 3-D models

    Energy Technology Data Exchange (ETDEWEB)

    Krumm, J. [Sandia National Labs., Albuquerque, NM (United States). Intelligent System Sensors and Controls Dept.

    1998-02-01

    The goal of this research project was to create a passive and robust computer vision system for producing 3-D computer models of arbitrary scenes. Although the authors were unsuccessful in achieving the overall goal, several components of this research have shown significant potential. Of particular interest is the application of parametric eigenspace methods for planar pose measurement of partially occluded objects in gray-level images. The techniques presented provide a simple, accurate, and robust solution to the planar pose measurement problem. In addition, the representational efficiency of eigenspace methods used with gray-level features were successfully extended to binary features, which are less sensitive to illumination changes. The results of this research are presented in two papers that were written during the course of this project. The papers are included in sections 2 and 3. The first section of this report summarizes the 3-D modeling efforts.

  4. Monocular alignment in different depth planes.

    Science.gov (United States)

    Shimono, Koichi; Wade, Nicholas J

    2002-04-01

    We examined (a) whether vertical lines at different physical horizontal positions in the same eye can appear to be aligned, and (b), if so, whether the difference between the horizontal positions of the aligned vertical lines can vary with the perceived depth between them. In two experiments, each of two vertical monocular lines was presented (in its respective rectangular area) in one field of a random-dot stereopair with binocular disparity. In Experiment 1, 15 observers were asked to align a line in an upper area with a line in a lower area. The results indicated that when the lines appeared aligned, their horizontal physical positions could differ and the direction of the difference coincided with the type of disparity of the rectangular areas; this is not consistent with the law of the visual direction of monocular stimuli. In Experiment 2, 11 observers were asked to report relative depth between the two lines and to align them. The results indicated that the difference of the horizontal position did not covary with their perceived relative depth, suggesting that the visual direction and perceived depth of the monocular line are mediated via different mechanisms.

  5. An Analytical Measuring Rectification Algorithm of Monocular Systems in Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Deshi Li

    2016-01-01

    Full Text Available Range estimation is crucial for maintaining a safe distance, in particular for vision navigation and localization. Monocular autonomous vehicles are appropriate for outdoor environments due to their mobility and operability. However, accurate range estimation using a vision system is challenging because of the nonholonomic dynamics and susceptibility of vehicles. In this paper, a measuring rectification algorithm for range estimation under shaking conditions is designed. The proposed method focuses on how to estimate range using monocular vision when a shake occurs, and the algorithm only requires the pose variations of the camera to be acquired. Simultaneously, it solves the problem of how to assimilate results from different kinds of sensors. To eliminate measuring errors caused by shakes, we establish a pose-range variation model. Afterwards, the algebraic relation between the distance increment and the camera's pose variation is formulated. The pose variations are presented in the form of roll, pitch, and yaw angle changes to evaluate the pixel coordinate increment. To demonstrate the superiority of our proposed algorithm, the approach is validated in a laboratory environment using Pioneer 3-DX robots. The experimental results demonstrate that the proposed approach improves range accuracy significantly.

  6. Monocular blur alters the tuning characteristics of stereopsis for spatial frequency and size.

    Science.gov (United States)

    Li, Roger W; So, Kayee; Wu, Thomas H; Craven, Ashley P; Tran, Truyet T; Gustafson, Kevin M; Levi, Dennis M

    2016-09-01

    Our sense of depth perception is mediated by spatial filters at different scales in the visual brain; low spatial frequency channels provide the basis for coarse stereopsis, whereas high spatial frequency channels provide for fine stereopsis. It is well established that monocular blurring of vision results in decreased stereoacuity. However, previous studies have used tests that are broadband in their spatial frequency content. It is not yet entirely clear how the processing of stereopsis in different spatial frequency channels is altered in response to binocular input imbalance. Here, we applied a new stereoacuity test based on narrow-band Gabor stimuli. By manipulating the carrier spatial frequency, we were able to reveal the spatial frequency tuning of stereopsis, spanning from coarse to fine, under blurred conditions. Our findings show that increasing monocular blur elevates stereoacuity thresholds 'selectively' at high spatial frequencies, gradually shifting the optimum frequency to lower spatial frequencies. Surprisingly, stereopsis for low frequency targets was only mildly affected even with an acuity difference of eight lines on a standard letter chart. Furthermore, we examined the effect of monocular blur on the size tuning function of stereopsis. The clinical implications of these findings are discussed.
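A narrow-band Gabor target of the kind this stereoacuity test uses (a sinusoidal carrier under a Gaussian envelope) can be sketched with NumPy; all parameter choices below are illustrative, not the study's stimulus values.

```python
import numpy as np

def gabor_patch(size, spatial_freq, sigma, phase=0.0, orientation=0.0):
    """Luminance profile of a Gabor patch.

    size:         patch width/height in pixels.
    spatial_freq: carrier frequency in cycles per image width.
    sigma:        Gaussian envelope standard deviation in pixels.
    Returns values in [-1, 1] around a mean of 0; scale by contrast and
    add a mean luminance for display.
    """
    half = size / 2.0
    ys, xs = np.mgrid[0:size, 0:size] - half
    # Rotate coordinates so the carrier runs along `orientation` (radians).
    xr = xs * np.cos(orientation) + ys * np.sin(orientation)
    carrier = np.sin(2 * np.pi * spatial_freq * xr / size + phase)
    envelope = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
    return carrier * envelope
```

Because the envelope limits spatial extent, the stimulus stays narrow-band around `spatial_freq`, which is what allows the carrier frequency to probe coarse-to-fine stereo channels separately.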

  7. Effect of ophthalmic filter thickness on predicted monocular dichromatic luminance and chromaticity discrimination.

    Science.gov (United States)

    Richer, S P; Little, A C; Adams, A J

    1984-11-01

    The majority of ophthalmic filters, whether they be in the form of spectacles or contact lenses, are absorbance type filters. Although color vision researchers routinely provide spectrophotometric transmission profiles of filters, filter thickness is rarely specified. In this paper, colorimetric tools and volume color theory are used to show that the color of a filter as well as its physical properties are altered dramatically by changes in thickness. The effect of changes in X-Chrom filter thickness on predicted monocular dichromatic luminance and chromaticity discrimination is presented.
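The thickness dependence follows Beer-Lambert scaling of internal transmittance, which volume color theory applies per wavelength; a minimal sketch, assuming a known transmittance at a reference thickness (values illustrative):

```python
def transmittance_at_thickness(t_ref, d_ref, d):
    """Beer-Lambert scaling of internal transmittance with thickness:
    T(d) = T(d_ref) ** (d / d_ref).

    t_ref: internal transmittance measured at reference thickness d_ref.
    d:     new thickness (same units as d_ref).
    """
    if not 0.0 < t_ref <= 1.0:
        raise ValueError("transmittance must be in (0, 1]")
    return t_ref ** (d / d_ref)
```

Applying this wavelength by wavelength to a filter's transmission profile shows why doubling thickness does not merely darken a filter but reshapes its spectral (and hence chromatic) character.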

  8. Random-Profiles-Based 3D Face Recognition System

    Directory of Open Access Journals (Sweden)

    Joongrock Kim

    2014-03-01

    Full Text Available In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a developed nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consists of more than 50 k 3D point clouds and a reliable recognition rate against pose variation.

  9. Holography of 3d-3d correspondence at Large N

    OpenAIRE

    Gang, Dongmin; Kim, Nakwoo; Lee, Sangmin

    2014-01-01

    We study the physics of multiple M5-branes compactified on a hyperbolic 3-manifold. On the one hand, it leads to the 3d-3d correspondence, which maps an N = 2 superconformal field theory to a pure Chern-Simons theory on the 3-manifold. On the other hand, it leads to a warped AdS4 geometry in M-theory, holographically dual to the superconformal field theory. Combining the holographic duality and the 3d-3d correspondence, we propose a conjecture for the large N limit of the p...

  10. Optical 3D shape measurement for dynamic process

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    3D shape measurement of dynamic processes is essential to the study of machine vision, hydromechanics, high-speed rotation, material deformation, stress analysis, deformation in impact, explosion processes, and biomedicine. In this paper, the results of our research in recent years, including theoretical analysis, some feasible methods, and relevant verifying experimental results, are compendiously reported. At present, these results have been used in our instruments for 3D shape measurement of dynamic processes.

  11. Global Value Chains from a 3D Printing Perspective

    DEFF Research Database (Denmark)

    Laplume, André O; Petersen, Bent; Pearce, Joshua M.

    2016-01-01

    This article outlines the evolution of additive manufacturing technology, culminating in 3D printing and presents a vision of how this evolution is affecting existing global value chains (GVCs) in production. In particular, we bring up questions about how this new technology can affect...... of whether in some industries diffusion of 3D printing technologies may change the role of multinational enterprises as coordinators of GVCs by inducing the engagement of a wider variety of firms, even households....

  13. 3D Reconstruction of NMR Images by LabVIEW

    Directory of Open Access Journals (Sweden)

    Peter IZAK

    2007-01-01

    Full Text Available This paper introduces an experiment in 3D reconstruction of NMR images via virtual instrumentation - LabVIEW. The main idea is based on the marching cubes algorithm and image processing implemented with the Vision Assistant module. The two-dimensional images acquired by the magnetic resonance device provide information about the surface properties of the human body. An algorithm is implemented that can be used for 3D reconstruction of magnetic resonance images in biomedical applications.

  14. Insect stereopsis demonstrated using a 3D insect cinema

    OpenAIRE

    Vivek Nityananda; Ghaith Tarawneh; Ronny Rosner; Judith Nicolas; Stuart Crichton; Jenny Read

    2016-01-01

    Stereopsis - 3D vision - has become widely used as a model of perception. However, all our knowledge of possible underlying mechanisms comes almost exclusively from vertebrates. While stereopsis has been demonstrated for one invertebrate, the praying mantis, a lack of techniques to probe invertebrate stereopsis has prevented any further progress for three decades. We therefore developed a stereoscopic display system for insects, using miniature 3D glasses to present separate images to each ey...

  15. Monocular SLAM for Visual Odometry: A Full Approach to the Delayed Inverse-Depth Feature Initialization Method

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2012-01-01

    Full Text Available This paper describes in a detailed manner a method to implement a simultaneous localization and mapping (SLAM system based on monocular vision for applications of visual odometry, appearance-based sensing, and emulation of range-bearing measurements. SLAM techniques are required to operate mobile robots in a priori unknown environments using only on-board sensors to simultaneously build a map of their surroundings; this map will be needed for the robot to track its position. In this context, the 6-DOF (degree of freedom monocular camera case (monocular SLAM possibly represents the harder variant of SLAM. In monocular SLAM, a single camera, which is freely moving through its environment, represents the sole sensory input to the system. The method proposed in this paper is based on a technique called delayed inverse-depth feature initialization, which is intended to initialize new visual features on the system. In this work, detailed formulation, extended discussions, and experiments with real data are presented in order to validate and to show the performance of the proposal.
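The inverse-depth parameterization behind delayed inverse-depth initialization stores a feature as the camera's optical centre at first observation, a ray direction, and an inverse depth. One common convention (a sketch, not necessarily the exact formulation of this paper) converts such a feature to a Euclidean 3D point:

```python
import numpy as np

def inverse_depth_to_point(x0, theta, phi, rho):
    """Convert an inverse-depth feature to a Euclidean 3D point.

    x0:    3-vector, camera optical centre at first observation.
    theta: azimuth of the observation ray (radians).
    phi:   elevation of the observation ray (radians).
    rho:   inverse depth along the ray (1/metres).
    The point is x0 + m(theta, phi) / rho, with m a unit ray direction.
    """
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return np.asarray(x0, dtype=float) + m / rho
```

Parameterizing by inverse depth keeps distant features (rho near 0) numerically well-behaved in the filter, which is why monocular SLAM systems favor it over a raw XYZ state.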

  16. A New Method of Automatic Measurement for 3D Surfaces Using Structured Light and a Single CCD Camera

    Institute of Scientific and Technical Information of China (English)

    牟元英; 徐春斌

    2001-01-01

    Non-contact automatic measurement of 3D surfaces is one of the central tasks of computer vision. Addressing this task, a new method of automatic 3D surface measurement using structured light, a single CCD camera, and a double-orientation technique is proposed in this paper. The 3D coordinates of a surface can be obtained from a sequence of monocular images without determining the relative position between the camera (CCD) and the structured-light projector.

  17. Monocular Elevation Deficiency - Double Elevator Palsy

    Science.gov (United States)


  18. 3D Spectroscopy in Astronomy

    Science.gov (United States)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  19. Spherical 3D isotropic wavelets

    Science.gov (United States)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2012-04-01

    Context. Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. Aims: The aim of this paper is to present a new formalism for a spherical 3D isotropic wavelet, i.e. one based on the SFB decomposition of a 3D field and accompany the formalism with a public code to perform wavelet transforms. Methods: We describe a new 3D isotropic spherical wavelet decomposition based on the undecimated wavelet transform (UWT) described in Starck et al. (2006). We also present a new fast discrete spherical Fourier-Bessel transform (DSFBT) based on both a discrete Bessel transform and the HEALPIX angular pixelisation scheme. We test the 3D wavelet transform and as a toy-application, apply a denoising algorithm in wavelet space to the Virgo large box cosmological simulations and find we can successfully remove noise without much loss to the large scale structure. Results: We have described a new spherical 3D isotropic wavelet transform, ideally suited to analyse and denoise future 3D spherical cosmological surveys, which uses a novel DSFBT. We illustrate its potential use for denoising using a toy model. All the algorithms presented in this paper are available for download as a public code called MRS3D at http://jstarck.free.fr/mrs3d.html

  20. Surgical outcome in monocular elevation deficit: A retrospective interventional study

    Directory of Open Access Journals (Sweden)

    Bandyopadhyay Rakhi

    2008-01-01

    Full Text Available Background and Aim: Monocular elevation deficiency (MED) is characterized by a unilateral defect in elevation, caused by a paretic, restrictive or combined etiology. Treatment of this multifactorial entity is therefore varied. In this study, we performed different surgical procedures in patients with MED and evaluated their outcome, based on ocular alignment, improvement in elevation and binocular functions. Study Design: Retrospective interventional study. Materials and Methods: Twenty-eight patients were included in this study, from June 2003 to August 2006. Five patients underwent the Knapp procedure, with or without horizontal squint surgery, 17 patients had inferior rectus recession, with or without horizontal squint surgery, three patients had combined inferior rectus recession and Knapp procedure, and three patients had inferior rectus recession combined with contralateral superior rectus or inferior oblique surgery. The choice of procedure was based on the results of the forced duction test (FDT). Results: The forced duction test was positive in 23 cases (82%). Twenty-four of 28 patients (86%) were aligned to within 10 prism diopters. Elevation improved in 10 patients (36%) from no elevation above primary position (-4) to only slight limitation of elevation (-1). Five patients had preoperative binocular vision and none gained it postoperatively. No significant postoperative complications or duction abnormalities were observed during the follow-up period. Conclusion: Management of MED depends upon selection of the correct surgical technique, based on the results of the FDT, for a satisfactory outcome.

  1. 3D IBFV : Hardware-Accelerated 3D Flow Visualization

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique for 2D flow visualization in two main directions. First, we decompose the 3D flow visualization problem in a

  2. 3D Elevation Program—Virtual USA in 3D

    Science.gov (United States)

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  3. A 3-D Contextual Classifier

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    1997-01-01

    ... This includes the specification of a Gaussian distribution for the pixel values as well as a prior distribution for the configuration of class variables within the cross that is made of a pixel and its four nearest neighbours. We will extend this algorithm to 3-D, i.e. we will specify a simultaneous Gaussian distribution for a pixel and its 6 nearest 3-D neighbours, and generalise the class variable configuration distribution within the 3-D cross. The algorithm is tested on a synthetic 3-D multivariate dataset.

  4. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.
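
    As a rough illustration of one ingredient of such a classifier, the sketch below (assuming NumPy; the helper names and toy data are mine, not from the paper) builds the 7-dimensional feature vector of a voxel and its 6 nearest 3-D neighbours and classifies by per-class multivariate Gaussian likelihood. It deliberately omits the prior over class-variable configurations that makes the published classifier contextual:

```python
import numpy as np

def cross_features(vol):
    """Feature vector for each interior voxel: itself plus its 6 face
    neighbours (the 3-D 'cross'), flattened to an (n, 7) array."""
    core = vol[1:-1, 1:-1, 1:-1]
    feats = np.stack([
        core,
        vol[:-2, 1:-1, 1:-1], vol[2:, 1:-1, 1:-1],   # +/- neighbours, axis 0
        vol[1:-1, :-2, 1:-1], vol[1:-1, 2:, 1:-1],   # +/- neighbours, axis 1
        vol[1:-1, 1:-1, :-2], vol[1:-1, 1:-1, 2:],   # +/- neighbours, axis 2
    ], axis=-1)
    return feats.reshape(-1, 7)

def gaussian_loglik(X, mu, cov):
    """Log-likelihood of each row of X under N(mu, cov), up to a constant."""
    d = X - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (np.einsum('ni,ij,nj->n', d, inv, d) + logdet)

# toy volume: two classes separated by mean intensity
rng = np.random.default_rng(1)
vol = rng.normal(0.0, 1.0, size=(10, 10, 10))
vol[:, :, 5:] += 4.0                       # "class 1" occupies half the volume
X = cross_features(vol)
y = (np.indices((8, 8, 8))[2].reshape(-1) >= 4).astype(int)  # true class per voxel

# fit one Gaussian per class (here from known labels) and classify by likelihood
params = [(X[y == c].mean(0), np.cov(X[y == c].T) + 1e-6 * np.eye(7)) for c in (0, 1)]
ll = np.stack([gaussian_loglik(X, mu, cov) for mu, cov in params], axis=1)
acc = (ll.argmax(axis=1) == y).mean()
```

    On this toy volume the non-contextual Gaussian likelihood alone classifies nearly all voxels correctly; the contextual prior in the paper matters most when class intensities overlap.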

  5. Using 3D in Visualization

    DEFF Research Database (Denmark)

    Wood, Jo; Kirschenbauer, Sabine; Döllner, Jürgen

    2005-01-01

    to display 3D imagery. The extra cartographic degree of freedom offered by using 3D is explored and offered as a motivation for employing 3D in visualization. The use of VR and the construction of virtual environments exploit navigational and behavioral realism, but become most useful when combined... with abstracted representations embedded in a 3D space. The interactions between development of geovisualization, the technology used to implement it and the theory surrounding cartographic representation are explored. The dominance of computing technologies, driven particularly by the gaming industry...

  6. Interactive 3D multimedia content

    CERN Document Server

    Cellary, Wojciech

    2012-01-01

    The book describes recent research results in the areas of modelling, creation, management and presentation of interactive 3D multimedia content. The book describes the current state of the art in the field and identifies the most important research and design issues. Consecutive chapters address these issues. These are: database modelling of 3D content, security in 3D environments, describing interactivity of content, searching content, visualization of search results, modelling mixed reality content, and efficient creation of interactive 3D content. Each chapter is illustrated with example a

  7. 3D for Graphic Designers

    CERN Document Server

    Connell, Ellery

    2011-01-01

    Helping graphic designers expand their 2D skills into the 3D space The trend in graphic design is towards 3D, with the demand for motion graphics, animation, photorealism, and interactivity rapidly increasing. And with the meteoric rise of iPads, smartphones, and other interactive devices, the design landscape is changing faster than ever. 2D digital artists who need a quick and efficient way to join this brave new world will want 3D for Graphic Designers. Readers get hands-on basic training in working in the 3D space, including product design, industrial design and visualization, modeling, ani...

  8. 3-D printers for libraries

    CERN Document Server

    Griffey, Jason

    2014-01-01

    As the maker movement continues to grow and 3-D printers become more affordable, an expanding group of hobbyists is keen to explore this new technology. In the time-honored tradition of introducing new technologies, many libraries are considering purchasing a 3-D printer. Jason Griffey, an early enthusiast of 3-D printing, has researched the marketplace and seen several systems first hand at the Consumer Electronics Show. In this report he introduces readers to the 3-D printing marketplace, covering such topics as how fused deposition modeling (FDM) printing works, and basic terminology such as build...

  9. Patterns of non-embolic transient monocular visual field loss.

    Science.gov (United States)

    Petzold, Axel; Islam, Niaz; Plant, G T

    2013-07-01

    The aim of this study was to systematically describe the semiology of non-embolic transient monocular visual field loss (neTMVL). We conducted a retrospective case note analysis of patients from Moorfields Eye Hospital (1995-2007). The variables analysed were age, age of onset, gender, past medical history or family history of migraine, eye affected, onset, duration and offset, perception (pattern, positive and negative symptoms), associated headache and autonomic symptoms, attack frequency, and treatment response to nifedipine. We identified 77 patients (28 male and 49 female). Mean age of onset was 37 years (range 14-77 years). The neTMVL was limited to the right eye in 36 %, to the left in 47 %, and occurred independently in either eye in 5 % of cases. A past medical history of migraine was present in 12 % and a family history in 8 %. Headache followed neTMVL in 14 % and was associated with autonomic features in 3 %. The neTMVL was perceived as grey in 35 %, white in 21 %, black in 16 % and as phosphenes in 9 %. The most frequent pattern was patchy loss (20 %). Recovery of vision frequently resembled attack onset in reverse. In 3 patients without associated headache the loss of vision was permanent. Treatment with nifedipine was initiated in 13 patients with an attack frequency of more than one per week and reduced the attack frequency in all. In conclusion, this large series of patients with neTMVL permits classification into five types of reversible visual field loss (grey, white, black, phosphenes, patchy). The treatment response to nifedipine suggests some attacks are caused by vasospasm.

  10. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and one static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model and parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications which will benefit from PSSV.

  11. Parallel Processor for 3D Recovery from Optical Flow

    Directory of Open Access Journals (Sweden)

    Jose Hugo Barron-Zambrano

    2009-01-01

    Full Text Available 3D recovery from motion has received a major effort in computer vision systems in the recent years. The main problem lies in the number of operations and memory accesses to be performed by the majority of the existing techniques when translated to hardware or software implementations. This paper proposes a parallel processor for 3D recovery from optical flow. Its main feature is the maximum reuse of data and the low number of clock cycles to calculate the optical flow, along with the precision with which 3D recovery is achieved. The results of the proposed architecture as well as those from processor synthesis are presented.
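
    The geometric relation that makes 3D recovery from optical flow possible can be illustrated for the simplest case, a purely translating camera with no rotation. This is a hedged toy example, not the paper's architecture, and image-coordinate sign conventions vary:

```python
import numpy as np

# For a camera translating along its x axis with speed Tx (no rotation),
# a scene point at depth Z produces horizontal optical flow of magnitude
# u = f * Tx / Z (f = focal length in pixels), so depth is recovered as
# Z = f * Tx / u. The numbers below are illustrative.
f, Tx = 500.0, 0.1                  # focal length (px), translation (m/frame)
u = np.array([5.0, 2.5, 1.0])       # measured horizontal flow (px/frame)
Z = f * Tx / u                      # recovered depths (m)
print(Z)   # → [10. 20. 50.]
```

    The general case adds a rotational flow component that is independent of depth and must be subtracted first, which is where most of the per-pixel arithmetic the paper parallelizes comes from.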

  12. 3D Printing: Print the future of ophthalmology.

    Science.gov (United States)

    Huang, Wenbin; Zhang, Xiulan

    2014-08-26

    The three-dimensional (3D) printer is a new technology that creates physical objects from digital files. Recent technological advances in 3D printing have resulted in increased use of this technology in the medical field, where it is beginning to revolutionize medical and surgical possibilities. It is already providing medicine with powerful tools that facilitate education, surgical planning, and organ transplantation research. A good understanding of this technology will be beneficial to ophthalmologists. The potential applications of 3D printing in ophthalmology, both current and future, are explored in this article. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  13. Industrial vision

    DEFF Research Database (Denmark)

    Knudsen, Ole

    1998-01-01

    of an implementation in real production environments. The theory for projection of world points into images is concentrated upon the direct linear transformation (DLT), also called the Extended Pinhole model, and the stability of this method. A complete list of formulas for calculating all parameters in the model... is presented, and the variability of the parameters is examined and described. The concept of using CAD together with vision information is based on the fact that all items processed at OSS have an associated complete 3D CAD model that is accessible at all production states. This concept gives numerous... possibilities for using vision in applications which otherwise would be very difficult to automate. The requirement for low tolerances in production is, despite the huge dimensions of the items involved, extreme. This fact makes great demands on the ability to do robust sub-pixel estimation. A new method based...

  14. Human skeleton proportions from monocular data

    Institute of Scientific and Technical Information of China (English)

    PENG En; LI Ling

    2006-01-01

    This paper introduces a novel method for estimating the skeleton proportions of a human figure from monocular data. The proposed system will first automatically extract the key frames and recover the perspective camera model from the 2D data. The human skeleton proportions are then estimated from the key frames using the recovered camera model, without posture reconstruction. The proposed method is tested to be simple and fast, and to produce satisfactory results for the input data. The human model with estimated proportions can be used in future research involving human body modeling or human motion reconstruction.

  15. The effect of a monocular helmet-mounted display on aircrew health: a 10-year prospective cohort study of Apache AH MK 1 pilots: study midpoint update

    Science.gov (United States)

    Hiatt, Keith L.; Rash, Clarence E.; Watters, Raymond W.; Adams, Mark S.

    2009-05-01

    A collaborative occupational health study has been undertaken by Headquarters Army Aviation, Middle Wallop, UK, and the U.S. Army Aeromedical Research Laboratory, Fort Rucker, Alabama, to determine if the use of the Integrated Helmet and Display Sighting System (IHADSS) monocular helmet-mounted display (HMD) in the Apache AH Mk 1 attack helicopter has any long-term (10-year) effect on visual performance. The test methodology consists primarily of a detailed questionnaire and an annual battery of vision tests selected to capture changes in visual performance of Apache aviators over their flight career (with an emphasis on binocular visual function). Pilots using binocular night vision goggles serve as controls and undergo the same methodology. Currently, at the midpoint of the study, with the exception of a possible colour discrimination effect, there are no data indicating that the long-term use of the IHADSS monocular HMD results in negative effects on vision.

  16. More clinical observations on migraine associated with monocular visual symptoms in an Indian population

    Directory of Open Access Journals (Sweden)

    Vishal Jogi

    2016-01-01

    Full Text Available Context: Retinal migraine (RM) is considered one of the rare causes of transient monocular visual loss (TMVL) and has not been studied in an Indian population. Objectives: The study aims to analyze the clinical and investigational profile of patients with RM. Materials and Methods: This is an observational prospective analysis of 12 cases of TMVL fulfilling the International Classification of Headache Disorders-2nd edition (ICHD-II) criteria of RM, examined in the Neurology and Ophthalmology Outpatient Department (OPD) of the Postgraduate Institute of Medical Education and Research (PGIMER), Chandigarh, from July 2011 to October 2012. Results: Most patients presented in the 3rd and 4th decade, with equal sex distribution. Seventy-five percent had antecedent migraine without aura (MoA) and 25% had migraine with aura (MA). Headache was ipsilateral to visual symptoms in 67% and bilateral in 33%. TMVL preceded headache onset in 58% and occurred during the headache episode in 42%. Visual symptoms were predominantly negative, except in one patient who had positive followed by negative symptoms. The duration of visual symptoms was variable, ranging from 30 s to 45 min. None of the patients had permanent monocular vision loss. Three patients had episodes of TMVL without headache in addition to the symptom constellation defining RM. Most of the tests done to rule out alternative causes were normal. Magnetic resonance imaging (MRI) of the brain showed nonspecific white matter changes in one patient. Visual-evoked potentials (VEP) showed prolonged P100 latencies in two cases. A patent foramen ovale was detected in one patient. Conclusions: RM is a definite subtype of migraine and should remain in the ICHD classification. It should be kept as one of the differential diagnoses of transient monocular vision loss. We propose the existence of "acephalgic RM", which may respond to migraine prophylaxis.

  17. 3D Printing for Bricks

    OpenAIRE

    ECT Team, Purdue

    2015-01-01

    Building Bytes, by Brian Peters, is a project that uses desktop 3D printers to print bricks for architecture. Instead of using an expensive custom-made printer, it uses a normal standard 3D printer which is available for everyone and makes it more accessible and also easier for fabrication.

  18. Market study: 3-D eyetracker

    Science.gov (United States)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  19. Spherical 3D Isotropic Wavelets

    CERN Document Server

    Lanusse, F; Starck, J -L

    2011-01-01

    Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D Spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. The aim of this paper is to present a new formalism for a spherical 3D isotropic wavelet, i.e. one based on the Fourier-Bessel decomposition of a 3D field, and accompany the formalism with a public code to perform wavelet transforms. We describe a new 3D isotropic spherical wavelet decomposition based on the undecimated wavelet transform (UWT) described in Starck et al. 2006. We also present a new fast Discrete Spherical Fourier-Bessel Transform (DSFBT) based on both a discrete Bessel Transform and the HEALPIX angular pixelisation scheme. We test the 3D wavelet transform and as a toy-application, apply a denoising algorithm in wavelet space to the Virgo large...

  20. Improvement of 3D Scanner

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The disadvantages remaining in 3D scanning systems and their causes are discussed. A new host-and-slave structure with a high-speed image acquisition and processing system is proposed to speed up image processing and improve the performance of the 3D scanning system.

  1. 3-D video techniques in endoscopic surgery.

    Science.gov (United States)

    Becker, H; Melzer, A; Schurr, M O; Buess, G

    1993-02-01

    Three-dimensional visualisation of the operative field is an important requisite for precise and fast handling of open surgical operations. Up to now it has only been possible to display a two-dimensional image on the monitor during endoscopic procedures. The increasing complexity of minimal invasive interventions requires endoscopic suturing and ligatures of larger vessels which are difficult to perform without the impression of space. Three-dimensional vision therefore may decrease the operative risk, accelerate interventions and widen the operative spectrum. In April 1992 a 3-D video system developed at the Nuclear Research Center Karlsruhe, Germany (IAI Institute) was applied in various animal experimental procedures and clinically in laparoscopic cholecystectomy. The system works with a single monitor and active high-speed shutter glasses. Our first trials with this new 3-D imaging system clearly showed a facilitation of complex surgical manoeuvres like mobilisation of organs, preparation in the deep space and suture techniques. The 3-D-system introduced in this article will enter the market in 1993 (Opticon Co., Karlsruhe, Germany.

  2. Advanced 3-D Ultrasound Imaging

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available for the medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any slice desirable after the scan has... been completed. This allows for precise measurements of organ dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using... to produce high quality 3-D images. Because of the large matrix transducers with integrated custom electronics, these systems are extremely expensive. The relatively low price of ultrasound scanners is one of the factors for the widespread use of ultrasound imaging. The high price tag on the high quality 3-D...

  3. Using 3D in Visualization

    DEFF Research Database (Denmark)

    Wood, Jo; Kirschenbauer, Sabine; Döllner, Jürgen

    2005-01-01

    The notion of three-dimensionality is applied to five stages of the visualization pipeline. While 3D visualization is most often associated with the visual mapping and representation of data, this chapter also identifies its role in the management and assembly of data, and in the media used... to display 3D imagery. The extra cartographic degree of freedom offered by using 3D is explored and offered as a motivation for employing 3D in visualization. The use of VR and the construction of virtual environments exploit navigational and behavioral realism, but become most useful when combined... with abstracted representations embedded in a 3D space. The interactions between development of geovisualization, the technology used to implement it and the theory surrounding cartographic representation are explored. The dominance of computing technologies, driven particularly by the gaming industry...

  4. 3D printing in dentistry.

    Science.gov (United States)

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.

  5. PLOT3D user's manual

    Science.gov (United States)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  6. An Automatic Registration Algorithm for 3D Maxillofacial Model

    Science.gov (United States)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, which has been widely used in computer vision, pattern recognition and computer assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for 3D maxillofacial model registration, including facial surface and skull models. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
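
    The final refinement step of the pipeline described above, point-to-point ICP, can be sketched as follows (a minimal NumPy illustration, assuming brute-force nearest neighbours; a real pipeline would use a k-d tree and start from the coarse SAC-IA alignment):

```python
import numpy as np

def icp_rigid(src, dst, iters=20):
    """Minimal point-to-point ICP. src, dst: (N, 3) point arrays.
    Returns R, t such that src @ R.T + t approximates dst."""
    R, t = np.eye(3), np.zeros(3)
    cur = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]               # closest dst point per src point
        mu_s, mu_d = cur.mean(0), nn.mean(0)
        H = (cur - mu_s).T @ (nn - mu_d)          # cross-covariance
        U, _, Vt = np.linalg.svd(H)               # Kabsch: closed-form best rotation
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step    # compose with previous estimate
    return R, t

# toy check: align a cloud to a rotated + translated copy of itself
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
a = 0.1
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
moved = pts @ Rz.T + np.array([0.5, -0.2, 0.1])
R, t = icp_rigid(pts, moved)
aligned = pts @ R.T + t
err0 = np.linalg.norm(pts - moved, axis=1).mean()
err1 = np.linalg.norm(aligned - moved, axis=1).mean()
```

    ICP only converges locally, which is why the paper precedes it with feature-based coarse alignment (3D-SIFT/FPFH matched with SAC-IA).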

  7. Monocular tool control, eye dominance, and laterality in New Caledonian crows.

    Science.gov (United States)

    Martinho, Antone; Burns, Zackory T; von Bayern, Auguste M P; Kacelnik, Alex

    2014-12-15

    Tool use, though rare, is taxonomically widespread, but morphological adaptations for tool use are virtually unknown. We focus on the New Caledonian crow (NCC, Corvus moneduloides), which displays some of the most innovative tool-related behavior among nonhumans. One of their major food sources is larvae extracted from burrows with sticks held diagonally in the bill, oriented with individual, but not species-wide, laterality. Among possible behavioral and anatomical adaptations for tool use, NCCs possess unusually wide binocular visual fields (up to 60°), suggesting that extreme binocular vision may facilitate tool use. Here, we establish that during natural extractions, tool tips can only be viewed by the contralateral eye. Thus, maintaining binocular view of tool tips is unlikely to have selected for wide binocular fields; the selective factor is more likely to have been to allow each eye to see far enough across the midsagittal line to view the tool's tip monocularly. Consequently, we tested the hypothesis that tool side preference follows eye preference and found that eye dominance does predict tool laterality across individuals. This contrasts with humans' species-wide motor laterality and uncorrelated motor-visual laterality, possibly because bill-held tools are viewed monocularly and move in concert with eyes, whereas hand-held tools are visible to both eyes and allow independent combinations of eye preference and handedness. This difference may affect other models of coordination between vision and mechanical control, not necessarily involving tools. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Virtual Reality, 3D Stereo Visualization, and Applications in Robotics

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2006-01-01

    The use of 3D stereoscopic visualization may provide a user with higher comprehension of remote environments in tele-operation when compared to 2D viewing. Works in the literature have demonstrated how stereo vision contributes to improve perception of some depth cues often for abstract tasks...

  9. An Algorithm for Classification of 3-D Spherical Spatial Points

    Institute of Scientific and Technical Information of China (English)

    ZHU Qing-xin; Mudur SP; LIU Chang; PENG Bo; WU Jia

    2003-01-01

    This paper presents a highly efficient algorithm for classifying 3D points sampled from a large number of spheres, using the neighboring relations of spatial points to construct a neighbor graph from the point cloud. This algorithm can be used in object recognition, computer vision, CAD model building, etc.

  10. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... digital products could cause damage in children with healthy eyes. The development of normal 3-D vision in children is stimulated as they use their eyes in day-to-day social and natural environments, and this development is largely complete by age ...

  11. 3D Data Acquisition Platform for Human Activity Understanding

    Science.gov (United States)

    2016-03-02

    In this project, we incorporated motion capture devices, 3D vision sensors, and EMG sensors to cross validate... Outstanding Self-Financed Students Abroad. Student Sheng Li received the 2015 NEU Outstanding Graduate Student Award (topmost student award in NEU).

  12. ADT-3D Tumor Detection Assistant in 3D

    Directory of Open Access Journals (Sweden)

    Jaime Lazcano Bello

    2008-12-01

    Full Text Available The present document describes ADT-3D (Three-Dimensional Tumor Detector Assistant), a prototype application developed to assist doctors in diagnosing, detecting and locating tumors in the brain using CT scans. The reader may find in this document an introduction to tumor detection; ADT-3D's main goals; development details; a description of the product; the motivation for its development; a study of the results; and areas of applicability.

  13. Unassisted 3D camera calibration

    Science.gov (United States)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
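
    The frame-screening idea described here (discard frames with sparse keypoint constellations or excessive vertical disparity) might be sketched as below; the function name and threshold values are illustrative assumptions, not values from the paper:

```python
import numpy as np

def frame_ok(kp_left, kp_right, max_vdisp_px=8.0, min_matches=12):
    """Screen a matched stereo frame pair before pose estimation.

    kp_left / kp_right: (N, 2) arrays of matched keypoints (x, y) in pixels.
    A frame is rejected if the keypoint constellation is too sparse or if
    the median vertical disparity exceeds a tolerable margin (thresholds
    here are illustrative)."""
    kp_left = np.asarray(kp_left, float)
    kp_right = np.asarray(kp_right, float)
    if len(kp_left) < min_matches:
        return False
    vdisp = np.abs(kp_left[:, 1] - kp_right[:, 1])   # per-match vertical disparity
    return bool(np.median(vdisp) <= max_vdisp_px)

# toy usage: well-aligned vs. vertically misaligned right frames
rng = np.random.default_rng(2)
pts = rng.uniform(0, 1000, size=(50, 2))
good_right = pts + np.array([30.0, 1.0])    # mostly horizontal disparity
bad_right = pts + np.array([30.0, 25.0])    # displaced optical axis
print(frame_ok(pts, good_right), frame_ok(pts, bad_right))
```

    Using the median rather than the mean makes the screen robust to a few erroneous matches, in the spirit of the paper's discarding of frames with outlier correspondences.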

  14. Bioprinting of 3D hydrogels.

    Science.gov (United States)

    Stanton, M M; Samitier, J; Sánchez, S

    2015-08-07

    Three-dimensional (3D) bioprinting has recently emerged as an extension of 3D material printing, by using biocompatible or cellular components to build structures in an additive, layer-by-layer methodology for encapsulation and culture of cells. These 3D systems allow for cell culture in a suspension for formation of highly organized tissue or controlled spatial orientation of cell environments. The in vitro 3D cellular environments simulate the complexity of an in vivo environment and natural extracellular matrices (ECM). This paper will focus on bioprinting utilizing hydrogels as 3D scaffolds. Hydrogels are advantageous for cell culture as they are highly permeable to cell culture media, nutrients, and waste products generated during metabolic cell processes. They have the ability to be fabricated in customized shapes with various material properties with dimensions at the micron scale. 3D hydrogels are a reliable method for biocompatible 3D printing and have applications in tissue engineering, drug screening, and organ on a chip models.

  15. Multimodal Registration and Fusion for 3D Thermal Imaging

    Directory of Open Access Journals (Sweden)

    Moulay A. Akhloufi

    2015-01-01

    Full Text Available 3D vision is an area of computer vision that has attracted a lot of research interest and has been widely studied. In recent years we have witnessed an increasing interest from the industrial community. This interest is driven by the recent advances in 3D technologies, which enable high-precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced inspections and metrology analyses of manufactured parts. However, we are not able to detect subsurface defects. This kind of detection is achieved by other techniques, like infrared thermography. In this work, we present a new registration framework for 3D and thermal infrared multimodal fusion. The resulting fused data can be used for advanced 3D inspection in Nondestructive Testing and Evaluation (NDT&E) applications. The fusion permits visible-surface and subsurface inspections to be conducted simultaneously in the same process. Experimental tests were conducted with different materials. The obtained results are promising and show how these new techniques can be used efficiently in a combined NDT&E-metrology analysis of manufactured parts, in areas such as aerospace and automotive.

  16. Tuotekehitysprojekti: 3D-tulostin

    OpenAIRE

    Pihlajamäki, Janne

    2011-01-01

    This thesis explored 3D printing technology and walked through a product development project for a 3D printer. In addition, the product development process and possible methods for protecting its results were presented at a general level. The goal of this work was to develop the home-printer-grade 3D device technology already available on the market closer to a professional-level solution. This was pursued by focusing on improving the printing accuracy and speed achievable with the device...

  17. Color 3D Reverse Engineering

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper presents a principle and a method of color 3D laser scanning measurement. Based on the fundamental monochrome 3D measurement study, color information capture, color texture mapping, coordinate computation and other techniques are performed to achieve color 3D measurement. The system is designed and composed of a line laser light emitter, one color CCD camera, a motor-driven rotary filter, a circuit card and a computer. Two steps in capturing the object's images in the measurement process: Firs...

  18. Exploration of 3D Printing

    OpenAIRE

    Lin, Zeyu

    2014-01-01

    3D printing technology is introduced and defined in this thesis. Some methods of 3D printing are illustrated and their principles are explained with pictures. Most of the essential parts are presented with pictures and their effects within the whole system are explained. Problems on the Up! Plus 3D printer are solved and a DIY product is made with this machine. The processes of making the product are recorded, and the items which need to be noticed during the process are the highlight of this th...

  19. Handbook of 3D integration

    CERN Document Server

    Garrou , Philip; Ramm , Peter

    2014-01-01

    Edited by key figures in 3D integration and written by top authors from high-tech companies and renowned research institutions, this book covers the intricate details of 3D process technology. As such, the main focus is on silicon via formation, bonding and debonding, thinning, via reveal and backside processing, both from a technological and a materials science perspective. The last part of the book is concerned with assessing and enhancing the reliability of the 3D integrated devices, which is a prerequisite for the large-scale implementation of this emerging technology. Invaluable reading fo

  20. Implementation of vision based 2-DOF underwater Manipulator

    Directory of Open Access Journals (Sweden)

    Geng Jinpeng

    2015-01-01

    Full Text Available A manipulator is of vital importance to a remotely operated vehicle (ROV), especially when it works in a nuclear reactor pool. A two-degrees-of-freedom (2-DOF) underwater manipulator is designed for the ROV, which is composed of a control cabinet, buoyancy module, propellers, depth gauge, sonar, a monocular camera and other attitude sensors. The manipulator can be used to salvage small parts like bolts and nuts to accelerate the progress of an overhaul. It can move in the vertical direction alone through the control of the second joint, and can grab objects using its uniquely designed gripper. A monocular-vision-based localization algorithm is applied to help the manipulator work independently and intelligently. Finally, a field experiment was conducted in a swimming pool to verify the effectiveness of the manipulator and the monocular-vision-based algorithm.
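A monocular localization step of the kind mentioned above can be sketched as a pinhole back-projection at a known working depth. This is a generic illustration, not the paper's algorithm; the intrinsic parameters and pixel coordinates below are hypothetical.

```python
# Minimal monocular localization sketch: with a calibrated pinhole camera and a
# known working depth Z (e.g. the distance from the camera down to the pool
# floor), a pixel (u, v) back-projects to a 3D point in the camera frame.

def pixel_to_camera_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) at a known depth to camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical intrinsics: 800 px focal length, principal point at image center.
point = pixel_to_camera_point(u=720, v=400, depth=2.0, fx=800, fy=800, cx=640, cy=360)
print(point)  # → (0.2, 0.1, 2.0)
```

With a single camera the depth must come from elsewhere (a known plane, object size, or a depth gauge, as in this ROV); monocular vision alone only fixes the ray.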

  1. Binocular vision in amblyopia: structure, suppression and plasticity.

    Science.gov (United States)

    Hess, Robert F; Thompson, Benjamin; Baker, Daniel H

    2014-03-01

    The amblyopic visual system was once considered to be structurally monocular. However, it is now evident that the capacity for binocular vision is present in many observers with amblyopia. This has led to new techniques for quantifying suppression that have provided insights into the relationship between suppression and the monocular and binocular visual deficits experienced by amblyopes. Furthermore, new treatments are emerging that directly target suppressive interactions within the visual cortex and, on the basis of initial data, appear to improve both binocular and monocular visual function, even in adults with amblyopia. The aim of this review is to provide an overview of recent studies that have investigated the structure, measurement and treatment of binocular vision in observers with strabismic, anisometropic and mixed amblyopia.

  2. Conducting polymer 3D microelectrodes

    DEFF Research Database (Denmark)

    Sasso, Luigi; Vazquez, Patricia; Vedarethinam, Indumathi

    2010-01-01

    Conducting polymer 3D microelectrodes have been fabricated for possible future neurological applications. A combination of micro-fabrication techniques and chemical polymerization methods has been used to create pillar electrodes in polyaniline and polypyrrole. The thin polymer films obtained...

  3. Accepting the T3D

    Energy Technology Data Exchange (ETDEWEB)

    Rich, D.O.; Pope, S.C.; DeLapp, J.G.

    1994-10-01

    In April, a 128 PE Cray T3D was installed at Los Alamos National Laboratory's Advanced Computing Laboratory as part of the DOE's High-Performance Parallel Processor Program (H4P). In conjunction with CRI, the authors implemented a 30-day acceptance test. The test was constructed in part to help them understand the strengths and weaknesses of the T3D. In this paper, they briefly describe the H4P and its goals. They discuss the design and implementation of the T3D acceptance test and detail issues that arose during the test. They conclude with a set of system requirements that must be addressed as the T3D system evolves.

  4. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

    For the last decade, the field of ultrasonic vector flow imaging has received increasing attention, as the technique offers a variety of new applications for screening and diagnostics of cardiovascular pathologies. The main purpose of this PhD project was therefore to advance the field of 3-D...... ultrasonic vector flow estimation and bring it a step closer to a clinical application. A method for high-frame-rate 3-D vector flow estimation in a plane using the transverse oscillation method combined with a 1024-channel 2-D matrix array is presented. The proposed method is validated both through phantom......, if this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges...

  5. A hand-held 3D laser scanning with global positioning system of subvoxel precision

    Energy Technology Data Exchange (ETDEWEB)

    Arias, Nestor [GOM, Departamento de Fisica y Geologia, Universidad de Pamplona (Colombia); Meneses, Nestor; Meneses, Jaime [GOTS-CENM, Escuela de Fisica, UIS, Bucaramanga (Colombia); Gharbi, Tijani, E-mail: nesariher@unipamplona.edu.co [Departement D' Optique, FEMTO-ST, 16 Route de Gray, 25030 Besancon (France)

    2011-01-01

    In this paper we propose a hand-held 3D laser scanner composed of an optical head device to extract 3D local surface information and a stereo vision system with subvoxel precision to measure the position and orientation of the 3D optical head. The optical head is manually scanned over the object surface by the operator. The orientation and position of the 3D optical head are determined by a phase-sensitive method using a 2D regular intensity pattern. This phase reference pattern is rigidly fixed to the optical head and allows its 3D localization with subvoxel precision in the observation field of the stereo vision system. The 3D resolution achieved by the stereo vision system is about 33 microns at 1.8 m with an observation field of 60 cm x 60 cm.
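The stereo ranging underlying such a system reduces, for a rectified camera pair, to the classic disparity relation Z = f·B/d. A minimal sketch with hypothetical rig parameters (not the paper's actual setup):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth (m) of a point from its horizontal disparity in a rectified
    stereo pair: Z = f * B / d, with focal length f in pixels and baseline
    B in meters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# Hypothetical rig: 1400 px focal length, 12 cm baseline, 84 px disparity.
z = depth_from_disparity(f_px=1400, baseline_m=0.12, disparity_px=84)
print(z)  # depth in meters
```

Because Z varies as 1/d, the depth uncertainty grows roughly as Z²/(f·B) per pixel of disparity error, which is why subpixel (here "subvoxel") localization of the reference pattern matters for precision at 1.8 m range.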

  6. 3D Face Appearance Model

    DEFF Research Database (Denmark)

    Lading, Brian; Larsen, Rasmus; Astrom, K

    2006-01-01

    We build a 3D face shape model, including inter- and intra-shape variations, derive the analytical Jacobian of its resulting 2D rendered image, and show examples of its fitting performance with light, pose, id, expression and texture variations......We build a 3D face shape model, including inter- and intra-shape variations, derive the analytical Jacobian of its resulting 2D rendered image, and show examples of its fitting performance with light, pose, id, expression and texture variations...

  7. 3D Face Appearance Model

    DEFF Research Database (Denmark)

    Lading, Brian; Larsen, Rasmus; Åström, Kalle

    2006-01-01

    We build a 3d face shape model, including inter- and intra-shape variations, derive the analytical jacobian of its resulting 2d rendered image, and show examples of its fitting performance with light, pose, id, expression and texture variations......We build a 3d face shape model, including inter- and intra-shape variations, derive the analytical jacobian of its resulting 2d rendered image, and show examples of its fitting performance with light, pose, id, expression and texture variations...

  8. Main: TATCCAYMOTIFOSRAMY3D [PLACE

    Lifescience Database Archive (English)

    Full Text Available TATCCAYMOTIFOSRAMY3D S000256 01-August-2006 (last modified) kehi TATCCAY motif found in rice (O.s.) RAmy3D alpha-amylase gene promoter; Y=T/C; a GATA motif as its antisense sequence; TATCCAY motif and G motif (see S000130) are responsible for sugar repression (Toyofuku et al. 1998); GATA; amylase; sugar; repression; rice (Oryza sativa) TATCCAY ...

  9. Low Vision

    Science.gov (United States)

    Low Vision Defined: Low Vision is defined as the ... 2010 U.S. Age-Specific Prevalence Rates for Low Vision by Age and Race/Ethnicity ...

  10. 3-D Imaging Systems for Agricultural Applications—A Review

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  11. 3-D Imaging Systems for Agricultural Applications-A Review.

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-04-29

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  12. 3-D Imaging Systems for Agricultural Applications—A Review

    Directory of Open Access Journals (Sweden)

    Manuel Vázquez-Arellano

    2016-04-01

    Full Text Available Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  13. Stereo Vision and 3D Reconstruction on a Processor Network

    NARCIS (Netherlands)

    Paar, G.; Kuijpers, N.H.L.; Gasser, C.

    1996-01-01

    Surface measurements during outdoor construction processes are very costly whenever the measurement process interferes with the construction activities, since machine and manpower resources are idle during the data acquisition procedure. Using frame cameras as sensors to provide a measurement data

  14. Why Stereo Vision is Not Always about 3D Reconstruction

    Science.gov (United States)

    1993-07-01

    ...it has been assumed that measuring the disparity is trivial, and that solving for the distance... ...matching features, then using trigonometry to convert... ...a simple version of a fixation mechanism, in which the trigger feature is foveated and... ...explored in the literature, primarily for obstacle avoidance...

  15. Photogrammetric 3D reconstruction using mobile imaging

    Science.gov (United States)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android Application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets likewise. The photos are taken with mobile devices, and can thereafter directly be calibrated using standard calibration algorithms of photogrammetry and computer vision, on that device. Due to still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the sever to run AndroidSfM for the pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented bunch of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
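The triangulation step at the heart of a Structure-from-Motion pipeline like the one described can be sketched with standard linear (DLT) triangulation from two posed views. This is the generic textbook method, not AndroidSfM's actual code; the camera matrices and point below are synthetic.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2 are 3x4 projection matrices; uv1, uv2 are pixel coordinates.
    The homogeneous solution is the null vector of the stacked constraints."""
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic check: identity camera and a second camera shifted along x.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X_true = np.array([0.3, -0.2, 4.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)
```

In a full pipeline the poses P1, P2 come from the Structure-from-Motion stage and this triangulation is applied to every matched feature to seed the dense point cloud.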

  16. MPML3D: Scripting Agents for the 3D Internet.

    Science.gov (United States)

    Prendinger, Helmut; Ullrich, Sebastian; Nakasone, Arturo; Ishizuka, Mitsuru

    2011-05-01

    The aim of this paper is two-fold. First, it describes a scripting language for specifying communicative behavior and interaction of computer-controlled agents ("bots") in the popular three-dimensional (3D) multiuser online world of "Second Life" and the emerging "OpenSimulator" project. While tools for designing avatars and in-world objects in Second Life exist, technology for nonprogrammer content creators of scenarios involving scripted agents is currently missing. Therefore, we have implemented new client software that controls bots based on the Multimodal Presentation Markup Language 3D (MPML3D), a highly expressive XML-based scripting language for controlling the verbal and nonverbal behavior of interacting animated agents. Second, the paper compares Second Life and OpenSimulator platforms and discusses the merits and limitations of each from the perspective of agent control. Here, we also conducted a small study that compares the network performance of both platforms.

  17. 3D-mallinnus ja 3D-animaatiot biovoimalaitoksesta

    OpenAIRE

    Hiltula, Tytti

    2014-01-01

    In this thesis, a 3D model and animations of a biopower plant were produced from its drawings. The purpose of the work was to produce image and video materials intended for marketing for Recwell Oy. The work covered the basic knowledge and starting points of 3D modelling as well as the creation of animations. The work was carried out entirely with AutoCAD, and during the work the program's user guides were also studied carefully. Major deficiencies in the dimensioning of the drawings were already noticed at an early stage, ...

  18. From 3D view to 3D print

    Science.gov (United States)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, ranging from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing which allows a solid object to be obtained from a 3D model realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it simple to realize very complex shapes, which would be quite difficult to produce with dedicated conventional facilities. Because a 3D print is obtained by superposing one layer on the others, it doesn't need any particular workflow: it is sufficient to simply draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, like pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small ESA space mission CHEOPS (CHaracterising ExOPlanets Satellite), which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture, and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions, respectively), it was necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers
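The splitting constraint mentioned in this abstract is simple arithmetic: a part larger than the printable volume must be cut into at least ⌈part size / build size⌉ pieces along each axis. A small sketch, where only the 10×10×12 inch build volume comes from the text and the part dimensions are hypothetical:

```python
import math

def segments_needed(part_size, build_volume):
    """Minimum number of pieces per axis when a part exceeds the printable
    volume, assuming axis-aligned cuts."""
    return tuple(math.ceil(p / b) for p, b in zip(part_size, build_volume))

# Build volume 10 x 10 x 12 inches; a hypothetical part 11.8 in wide and
# 28 in tall would need 2 x 2 x 3 = 12 pieces to be reassembled afterwards.
print(segments_needed((11.8, 11.8, 28.0), (10.0, 10.0, 12.0)))  # → (2, 2, 3)
```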

  19. Materialedreven 3d digital formgivning

    DEFF Research Database (Denmark)

    Hansen, Flemming Tvede

    2010-01-01

    The purpose of the research project is, first, to support the ceramicist in working experimentally with digital form-giving and, second, to contribute to an interdisciplinary discourse about the use of digital form-giving. The research project focuses on 3D form-giving and thereby on 3D digital...... form-giving and Rapid Prototyping (RP). RP is a collective term for a number of techniques that make it possible to transfer a digital form into 3D physical form. The research project concentrates on two overall research questions. The first concerns how knowledge and experience within the ceramic...... field can be exploited in relation to 3D digital form-giving. The second concerns what such an approach can contribute, and how it can be exploited in a dynamic interplay with the ceramic material in the form-giving of 3D ceramic artefacts. Material-driven form-giving is characterized by a...

  20. Novel 3D media technologies

    CERN Document Server

    Dagiuklas, Tasos

    2015-01-01

    This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D Media, and quality of user experience (QoE). The contributions are based on the results of the FP7 European Project ROMEO, which focuses on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the future Internet. The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user’s context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of consistent video quality to fixed and mobile users. ROMEO will present hybrid networking solutions that combine the DVB-T2 and DVB-NGH broadcas...

  1. 3D future internet media

    CERN Document Server

    Dagiuklas, Tasos

    2014-01-01

    This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D Media, and quality of user experience (QoE). The main contributions are based on the results of the FP7 European Project ROMEO, which focuses on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the Future Internet (www.ict-romeo.eu). The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user’s context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of constant video quality to both fixed and mobile users. ROMEO will design and develop hybrid-networking solutions that co...

  2. Speaking Volumes About 3-D

    Science.gov (United States)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  3. Real-Time 3D Visualization

    Science.gov (United States)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end- to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  4. Automated rose cutting in greenhouses with 3D vision and robotics : analysis of 3D vision techniques for stem detection

    NARCIS (Netherlands)

    Noordam, J.C.; Hemming, J.; Heerde, van C.J.E.; Golbach, F.B.T.F.; Soest, van R.; Wekking, E.

    2005-01-01

    The reduction of labour cost is the major motivation to develop a system for robot harvesting of roses in greenhouses that at least can compete with manual harvesting. Due to overlapping leaves, one of the most complicated tasks in robotic rose cutting is to locate the stem and trace the stem down t

  5. Automated rose cutting in greenhouses with 3D vision and robotics : analysis of 3D vision techniques for stem detection

    NARCIS (Netherlands)

    Noordam, J.C.; Hemming, J.; Heerde, van C.J.E.; Golbach, F.B.T.F.; Soest, van R.; Wekking, E.

    2005-01-01

    The reduction of labour cost is the major motivation to develop a system for robot harvesting of roses in greenhouses that at least can compete with manual harvesting. Due to overlapping leaves, one of the most complicated tasks in robotic rose cutting is to locate the stem and trace the stem down

  6. Modification of 3D milling machine to 3D printer

    OpenAIRE

    Halamíček, Lukáš

    2015-01-01

    This thesis deals with the conversion of an engraving milling machine into a 3D printer. The first part of the thesis covers possible 3D printing technologies and the possibilities of using them in the conversion. Suitable components for the conversion are then described and selected. In the next part, control of the bed heating, the nozzle, and the filament feed is implemented using TwinCAT software from Beckhoff on an industrial PC. The result of the work should be a working 3D printer. This thesis deals with rebuilding of engraving machine to 3D pri...

  7. Aspects of defects in 3d-3d correspondence

    Energy Technology Data Exchange (ETDEWEB)

    Gang, Dongmin [Kavli Institute for the Physics and Mathematics of the Universe (WPI), University of Tokyo,Chiba 277-8583 (Japan); Kim, Nakwoo [Department of Physics and Research Institute of Basic Science, Kyung Hee University,Seoul 02447 (Korea, Republic of); School of Physics, Korea Institute for Advanced Study,Seoul 02455 (Korea, Republic of); Romo, Mauricio; Yamazaki, Masahito [Kavli Institute for the Physics and Mathematics of the Universe (WPI), University of Tokyo,Chiba 277-8583 (Japan); School of Natural Sciences, Institute for Advanced Study,Princeton, NJ 08540 (United States)

    2016-10-12

    In this paper we study supersymmetric co-dimension 2 and 4 defects in the compactification of the 6d (2,0) theory of type A{sub N−1} on a 3-manifold M. The so-called 3d-3d correspondence is a relation between complexified Chern-Simons theory (with gauge group SL(N,ℂ)) on M and a 3d N=2 theory T{sub N}[M]. We study this correspondence in the presence of supersymmetric defects, which are knots/links inside the 3-manifold. Our study employs a number of different methods: state-integral models for complex Chern-Simons theory, cluster algebra techniques, domain wall theory T[SU(N)], 5d N=2 SYM, and also supergravity analysis through holography. These methods are complementary and we find agreement between them. In some cases the results lead to highly non-trivial predictions on the partition function. Our discussion includes a general expression for the cluster partition function, which can be used to compute in the presence of maximal and certain class of non-maximal punctures when N>2. We also highlight the non-Abelian description of the 3d N=2T{sub N}[M] theory with defect included, when such a description is available. This paper is a companion to our shorter paper http://dx.doi.org/10.1088/1751-8113/49/30/30LT02, which summarizes our main results.

  8. Aspects of defects in 3d-3d correspondence

    Science.gov (United States)

    Gang, Dongmin; Kim, Nakwoo; Romo, Mauricio; Yamazaki, Masahito

    2016-10-01

    In this paper we study supersymmetric co-dimension 2 and 4 defects in the compactification of the 6d (2, 0) theory of type A N -1 on a 3-manifold M . The so-called 3d-3d correspondence is a relation between complexified Chern-Simons theory (with gauge group SL(N,C) ) on M and a 3d N=2 theory T N [ M ]. We study this correspondence in the presence of supersymmetric defects, which are knots/links inside the 3-manifold. Our study employs a number of different methods: state-integral models for complex Chern-Simons theory, cluster algebra techniques, domain wall theory T [SU( N )], 5d N=2 SYM, and also supergravity analysis through holography. These methods are complementary and we find agreement between them. In some cases the results lead to highly non-trivial predictions on the partition function. Our discussion includes a general expression for the cluster partition function, which can be used to compute in the presence of maximal and certain class of non-maximal punctures when N > 2. We also highlight the non-Abelian description of the 3d N=2 T N [ M ] theory with defect included, when such a description is available. This paper is a companion to our shorter paper [1], which summarizes our main results.

  9. Holography of 3d-3d correspondence at large N

    Energy Technology Data Exchange (ETDEWEB)

    Gang, Dongmin [School of Physics, Korea Institute for Advanced Study,85 Hoegiro, Dongdaemun-gu, Seoul, 130-722 (Korea, Republic of); Kim, Nakwoo [Department of Physics and Research Institute of Basic Science, Kyung Hee University,26 Kyungheedaero, Dongdaemun-gu, Seoul, 130-701 (Korea, Republic of); Lee, Sangmin [School of Physics, Korea Institute for Advanced Study,85 Hoegiro, Dongdaemun-gu, Seoul, 130-722 (Korea, Republic of); Center for Theoretical Physics, Department of Physics and Astronomy, College of Liberal Studies,Seoul National University, 1 Gwanakro, Gwanak-gu, Seoul, 151-742 (Korea, Republic of)

    2015-04-20

    We study the physics of multiple M5-branes compactified on a hyperbolic 3-manifold. On the one hand, it leads to the 3d-3d correspondence which maps an N=2 superconformal field theory to a pure Chern-Simons theory on the 3-manifold. On the other hand, it leads to a warped AdS{sub 4} geometry in M-theory holographically dual to the superconformal field theory. Combining the holographic duality and the 3d-3d correspondence, we propose a conjecture for the large N limit of the perturbative free energy of a Chern-Simons theory on hyperbolic 3-manifold. The conjecture claims that the tree, one-loop and two-loop terms all share the same N{sup 3} scaling behavior and are proportional to the volume of the 3-manifold, while the three-loop and higher terms are suppressed at large N. Under mild assumptions, we prove the tree and one-loop parts of the conjecture. For the two-loop part, we test the conjecture numerically in a number of examples and find precise agreement. We also confirm the suppression of higher loop terms in a few examples.

  10. Anvendt 3D modellering og parametrisk formgivning

    DEFF Research Database (Denmark)

    Hermund, Anders

    2011-01-01

    help identify problems and advantages, and focus on the importance of being able to influence the development of modern 3D technologies and systems in a plausible direction for the quality of future architectural projects. The research question is: how can a diagrammatic method ensure...... creativity in the parametric system? This PhD thesis seeks to establish a theoretical framework in order to identify and clarify new potentials for applied 3D modelling and parametric design practice. Having gained this clarity, it is necessary to discuss the application and ethics of the new...... means of communication and, through interviews and practice-based research, to establish a usable foundation from these experiences. The digital development should be seen as a whole that takes part in the interplay between both a historical tradition and a long-term vision. A tool, and a method, that with the possibilities...

  11. Insect stereopsis demonstrated using a 3D insect cinema.

    Science.gov (United States)

    Nityananda, Vivek; Tarawneh, Ghaith; Rosner, Ronny; Nicolas, Judith; Crichton, Stuart; Read, Jenny

    2016-01-07

    Stereopsis - 3D vision - has become widely used as a model of perception. However, all our knowledge of possible underlying mechanisms comes almost exclusively from vertebrates. While stereopsis has been demonstrated for one invertebrate, the praying mantis, a lack of techniques to probe invertebrate stereopsis has prevented any further progress for three decades. We therefore developed a stereoscopic display system for insects, using miniature 3D glasses to present separate images to each eye, and tested our ability to deliver stereoscopic illusions to praying mantises. We find that while filtering by circular polarization failed due to excessive crosstalk, "anaglyph" filtering by spectral content clearly succeeded in giving the mantis the illusion of 3D depth. We thus definitively demonstrate stereopsis in mantises and also demonstrate that the anaglyph technique can be effectively used to deliver virtual 3D stimuli to insects. This method opens up broad avenues of research into the parallel evolution of stereoscopic computations and possible new algorithms for depth perception.
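
    The anaglyph compositing used to deliver separate views to each eye can be sketched in a few lines. This is an illustrative sketch only, not the authors' stimulus code: one eye's view is placed in the red channel and the other in the green and blue channels, so spectral (color) filters pass a different image to each eye.

```python
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine two grayscale views into a red/cyan anaglyph image.

    The left view fills the red channel; the right view fills the green
    and blue (cyan) channels, so matching color filters deliver a
    separate image to each eye.
    """
    if left.shape != right.shape:
        raise ValueError("views must have the same shape")
    h, w = left.shape
    rgb = np.zeros((h, w, 3), dtype=left.dtype)
    rgb[..., 0] = left    # red channel  -> left-eye filter
    rgb[..., 1] = right   # green channel -> right-eye (cyan) filter
    rgb[..., 2] = right   # blue channel  -> right-eye (cyan) filter
    return rgb

# Two tiny 2x2 views, purely for illustration
left = np.array([[10, 20], [30, 40]], dtype=np.uint8)
right = np.array([[50, 60], [70, 80]], dtype=np.uint8)
ana = make_anaglyph(left, right)
```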

  12. 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet...... conventional methods can estimate only the axial component. Several approaches for 3D vector velocity estimation have been suggested, but none of these methods have so far produced convincing in vivo results nor have they been adopted by commercial manufacturers. The basis for this project is the Transverse...... on the TO fields are suggested. They can be used to optimize the TO method. In the third part, a TO method for 3D vector velocity estimation is proposed. It employs a 2D phased array transducer and decouples the velocity estimation into three velocity components, which are estimated simultaneously based on 5...

  13. Markerless 3D Face Tracking

    DEFF Research Database (Denmark)

    Walder, Christian; Breidt, Martin; Bulthoff, Heinrich

    2009-01-01

    We present a novel algorithm for the markerless tracking of deforming surfaces such as faces. We acquire a sequence of 3D scans along with color images at 40Hz. The data is then represented by implicit surface and color functions, using a novel partition-of-unity type method of efficiently...... combining local regressors using nearest neighbor searches. Both these functions act on the 4D space of 3D plus time, and use temporal information to handle the noise in individual scans. After interactive registration of a template mesh to the first frame, it is then automatically deformed to track...... the scanned surface, using the variation of both shape and color as features in a dynamic energy minimization problem. Our prototype system yields high-quality animated 3D models in correspondence, at a rate of approximately twenty seconds per timestep. Tracking results for faces and other objects...

  14. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

    studies and in vivo. Phantom measurements are compared with their corresponding reference value, whereas the in vivo measurement is validated against the current golden standard for non-invasive blood velocity estimates, based on magnetic resonance imaging (MRI). The study concludes, that a high precision......, if this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges......For the last decade, the field of ultrasonic vector flow imaging has gotten an increasingly attention, as the technique offers a variety of new applications for screening and diagnostics of cardiovascular pathologies. The main purpose of this PhD project was therefore to advance the field of 3-D...

  15. Microfluidic 3D Helix Mixers

    Directory of Open Access Journals (Sweden)

    Georgette B. Salieb-Beugelaar

    2016-10-01

    Full Text Available Polymeric microfluidic systems are well suited for miniaturized devices with complex functionality, and rapid prototyping methods for 3D microfluidic structures are increasingly used. Mixing at the microscale and performing chemical reactions at the microscale are important applications of such systems and we therefore explored feasibility, mixing characteristics and the ability to control a chemical reaction in helical 3D channels produced by the emerging thread template method. Mixing at the microscale is challenging because channel size reduction for improving solute diffusion comes at the price of a reduced Reynolds number that induces a strictly laminar flow regime and abolishes turbulence that would be desired for improved mixing. Microfluidic 3D helix mixers were rapidly prototyped in polydimethylsiloxane (PDMS) using low-surface energy polymeric threads, twisted to form 2-channel and 3-channel helices. Structure and flow characteristics were assessed experimentally by microscopy, hydraulic measurements and chromogenic reaction, and were modeled by computational fluid dynamics. We found that helical 3D microfluidic systems produced by thread templating allow rapid prototyping, can be used for mixing and for controlled chemical reaction with two or three reaction partners at the microscale. Compared to the conventional T-shaped microfluidic system used as a control device, enhanced mixing and faster chemical reaction was found to occur due to the combination of diffusive mixing in small channels and flow folding due to the 3D helix shape. Thus, microfluidic 3D helix mixers can be rapidly prototyped using the thread template method and are an attractive and competitive method for fluid mixing and chemical reactions at the microscale.
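
    The laminar-flow argument above rests on the Reynolds number Re = ρvD_h/μ. A minimal sketch with illustrative values (not taken from the paper) shows why microchannels sit deep in the laminar regime:

```python
def reynolds_number(density, velocity, hydraulic_diameter, viscosity):
    """Re = rho * v * D_h / mu for a channel of hydraulic diameter D_h.

    Re << 2000 implies strictly laminar flow, so microfluidic mixing
    must rely on diffusion and flow folding rather than turbulence.
    """
    return density * velocity * hydraulic_diameter / viscosity

# Water (rho = 1000 kg/m^3, mu = 1e-3 Pa.s) at 1 mm/s in a
# 100-micron channel -- illustrative numbers, not the paper's.
re = reynolds_number(density=1000.0, velocity=1e-3,
                     hydraulic_diameter=100e-6, viscosity=1e-3)
```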

  16. 3D Printed Bionic Nanodevices.

    Science.gov (United States)

    Kong, Yong Lin; Gupta, Maneesh K; Johnson, Blake N; McAlpine, Michael C

    2016-06-01

    The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and 'living' platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with the

  17. 3D Printed Bionic Nanodevices

    Science.gov (United States)

    Kong, Yong Lin; Gupta, Maneesh K.; Johnson, Blake N.; McAlpine, Michael C.

    2016-01-01

    Summary The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and ‘living’ platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with

  18. Novel applications of hyperstereo vision

    Science.gov (United States)

    Watkins, Wendell R.; Jordan, Jay B.; Trivedi, Mohan M.

    1997-09-01

    Recent stereo vision experiments show potential in enhancing vehicular navigation, target acquisition, and optical turbulence mitigation. The experiments involved the use of stereo vision headsets connected to visible and 8-12 micrometer IR imagers. The imagers were separated by up to 50 m and equipped with telescopes for viewing at ranges of tens of meters up to 4 km. The important findings were: (1) human viewers were able to discern terrain undulations for obstacle avoidance, (2) human viewers were able to detect depth features within the scenes that enhanced the target acquisition process compared with monocular viewing, and (3) human viewers noted an appreciable reduction in the distortion effects of optical turbulence over that observed through a single monocular channel. For navigation, stereo goggles were developed for headset display and simultaneous direct vision for vehicular navigation enhancement. For detection, the depth cues can be used to detect even salient target features. For optical turbulence, the human mechanisms of fusing two views into a single perceived scene can be used to provide nearly undistorted perception. These experiments show significant improvement for many applications.

  19. Effect of monocular deprivation on rabbit neural retinal cell densities

    Directory of Open Access Journals (Sweden)

    Philip Maseghe Mwachaka

    2015-01-01

    Conclusion: In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye along with reduced cell densities in the deprived eye.

  20. Ideal 3D asymmetric concentrator

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Botella, Angel [Departamento Fisica Aplicada a los Recursos Naturales, Universidad Politecnica de Madrid, E.T.S.I. de Montes, Ciudad Universitaria s/n, 28040 Madrid (Spain); Fernandez-Balbuena, Antonio Alvarez; Vazquez, Daniel; Bernabeu, Eusebio [Departamento de Optica, Universidad Complutense de Madrid, Fac. CC. Fisicas, Ciudad Universitaria s/n, 28040 Madrid (Spain)

    2009-01-15

    Nonimaging optics is a field devoted to the design of optical components for applications such as solar concentration or illumination. In this field, many different techniques have been used for producing reflective and refractive optical devices, including reverse engineering techniques. In this paper we apply photometric field theory and elliptic ray bundles method to study 3D asymmetric - without rotational or translational symmetry - concentrators, which can be useful components for nontracking solar applications. We study the one-sheet hyperbolic concentrator and we demonstrate its behaviour as ideal 3D asymmetric concentrator. (author)

  1. 3D digitization of mosaics

    Directory of Open Access Journals (Sweden)

    Anna Maria Manferdini

    2012-11-01

    Full Text Available In this paper we present a methodology developed to access Cultural Heritage information using digital 3D reality-based models as graphic interfaces. The case studies presented belong to the wide repertoire of mosaics of Ravenna. One of the most peculiar characteristics of mosaics, which often limits their digital survey, is their multi-scale complexity; nevertheless, their models could be used in 3D information systems, for digital exhibitions, for reconstruction aims, and to document their conservation conditions in order to conduct restoration interventions in digital environments, aiming at speeding up and performing more reliable evaluations.

  2. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    Science.gov (United States)

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
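
    The closed-form inference noted above follows from the continuous CRF energy being quadratic in the depth variables. The toy sketch below uses a hypothetical superpixel graph and hand-set weights (the paper learns its unary and pairwise potentials with a CNN): minimizing sum_i (y_i - z_i)^2 + lam * sum_(i,j) w_ij (y_i - y_j)^2 reduces to solving one linear system (I + lam*L) y = z, with L the weighted graph Laplacian.

```python
import numpy as np

def crf_depth_inference(unary_depths, edges, weights, lam=1.0):
    """Closed-form MAP inference for a Gaussian continuous CRF.

    unary_depths: per-superpixel depth estimates z_i (the unary term).
    edges/weights: pairwise smoothness terms w_ij over adjacent superpixels.
    Returns the minimizer of the quadratic CRF energy.
    """
    n = len(unary_depths)
    L = np.zeros((n, n))          # weighted graph Laplacian
    for (i, j), w in zip(edges, weights):
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    A = np.eye(n) + lam * L
    return np.linalg.solve(A, np.asarray(unary_depths, dtype=float))

# Three superpixels in a chain; the middle unary estimate is an outlier.
z = [2.0, 9.0, 2.0]
edges = [(0, 1), (1, 2)]
y = crf_depth_inference(z, edges, weights=[1.0, 1.0], lam=1.0)
# The pairwise term pulls the outlier toward its neighbours.
```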

  3. Motion Detection in the Far Peripheral Vision Field

    Science.gov (United States)

    2007-12-01

    responsible for the human ability to appreciate spatial detail (visual acuity), discriminate color, stereopsis, and other fine discriminations, and... Visual-vestibular interactions: I. Influence of peripheral vision on suppression of the vestibulo-ocular reflex and visual acuity. Aviat Space... 2403–2411. 45. Kochhar, D. S.; Fraser, T. M. Monocular peripheral vision as a factor in flight safety. Aviation, Space, and Environmental Medicine

  4. PubChem3D: Biologically relevant 3-D similarity

    Directory of Open Access Journals (Sweden)

    Kim Sunghwan

    2011-07-01

    Full Text Available Abstract Background The use of 3-D similarity techniques in the analysis of biological data and virtual screening is pervasive, but what is a biologically meaningful 3-D similarity value? Can one find statistically significant separation between "active/active" and "active/inactive" spaces? These questions are explored using 734,486 biologically tested chemical structures, 1,389 biological assay data sets, and six different 3-D similarity types utilized by PubChem analysis tools. Results The similarity value distributions of 269.7 billion unique conformer pairs from 734,486 biologically tested compounds (all-against-all) from PubChem were utilized to help work towards an answer to the question: what is a biologically meaningful 3-D similarity score? The average and standard deviation for the six similarity measures STST-opt, CTST-opt, ComboTST-opt, STCT-opt, CTCT-opt, and ComboTCT-opt were 0.54 ± 0.10, 0.07 ± 0.05, 0.62 ± 0.13, 0.41 ± 0.11, 0.18 ± 0.06, and 0.59 ± 0.14, respectively. Considering that this random distribution of biologically tested compounds was constructed using a single theoretical conformer per compound (the "default" conformer provided by PubChem), further study may be necessary using multiple diverse conformers per compound; however, given the breadth of the compound set, the single conformer per compound results may still apply to the case of multi-conformer per compound 3-D similarity value distributions. As such, this work is a critical step, covering a very wide corpus of chemical structures and biological assays, creating a statistical framework to build upon. The second part of this study explored the question of whether it was possible to realize a statistically meaningful 3-D similarity value separation between reputed biological assay "inactives" and "actives". Using the terminology of noninactive-noninactive (NN) pairs and the noninactive-inactive (NI) pairs to represent comparison of the "active/active" and
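
    Given the background statistics reported above, one simple way to judge whether an observed 3-D similarity is meaningfully above the random-pair background is a standard score against that distribution. The 0.62 ± 0.13 figures below are the ComboTST-opt values from this record; the query score of 0.88 is a hypothetical example.

```python
def similarity_zscore(score, mean, std):
    """Standard score of an observed 3-D similarity value against the
    background distribution of random conformer pairs."""
    return (score - mean) / std

# ComboTST-opt background from the PubChem3D study: 0.62 +/- 0.13
z = similarity_zscore(0.88, mean=0.62, std=0.13)
# Two standard deviations above the random-pair average.
```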

  5. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    Science.gov (United States)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3DTV, e.g., live editing and real-time alignment of visual information into 3D footage.

  6. Localization of monocular stimuli in different depth planes.

    Science.gov (United States)

    Shimono, Koichi; Tam, Wa James; Asakura, Nobuhiko; Ohmi, Masao

    2005-09-01

    We examined the phenomenon in which two physically aligned monocular stimuli appear to be non-collinear when each of them is located in binocular regions that are at different depth planes. Using monocular bars embedded in binocular random-dot areas that are at different depths, we manipulated properties of the binocular areas and examined their effect on the perceived direction and depth of the monocular stimuli. Results showed that (1) the relative visual direction and perceived depth of the monocular bars depended on the binocular disparity and the dot density of the binocular areas, and (2) the visual direction, but not the depth, depended on the width of the binocular regions. These results are consistent with the hypothesis that monocular stimuli are treated by the visual system as binocular stimuli that have acquired the properties of their binocular surrounds. Moreover, partial correlation analysis suggests that the visual system utilizes both the disparity information of the binocular areas and the perceived depth of the monocular bars in determining the relative visual direction of the bars.

  7. A New Feature Points Reconstruction Method in Spacecraft Vision Navigation

    Directory of Open Access Journals (Sweden)

    Bing Hua

    2015-01-01

    Full Text Available The important applications of monocular vision navigation in aerospace are spacecraft ground calibration tests and spacecraft relative navigation. Regardless of the attitude calibration for ground turntable or the relative navigation between two spacecraft, it usually requires four noncollinear feature points to achieve attitude estimation. In this paper, a vision navigation system based on the least feature points is designed to deal with fault or unidentifiable feature points. An iterative algorithm based on the feature point reconstruction is proposed for the system. Simulation results show that the attitude calculation of the designed vision navigation system could converge quickly, which improves the robustness of the vision navigation of spacecraft.
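
    The requirement of four noncollinear feature points mentioned above is cheap to verify before running attitude estimation. The check below is an illustrative sketch under that stated requirement, not part of the paper's reconstruction algorithm: three points are collinear exactly when the cross product of the two spanning vectors vanishes.

```python
import numpy as np
from itertools import combinations

def any_three_collinear(points, tol=1e-9):
    """Return True if any three of the given 3D feature points are
    collinear, which would make the set unusable for attitude estimation."""
    pts = np.asarray(points, dtype=float)
    for a, b, c in combinations(range(len(pts)), 3):
        v1, v2 = pts[b] - pts[a], pts[c] - pts[a]
        if np.linalg.norm(np.cross(v1, v2)) < tol:
            return True
    return False

good = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]   # usable set
bad = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (0, 1, 0)]    # three on a line
```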

  8. Viewing galaxies in 3D

    CERN Document Server

    Krajnović, Davor

    2016-01-01

    Thanks to a technique that reveals galaxies in 3D, astronomers can now show that many galaxies have been wrongly classified. Davor Krajnović argues that the classification scheme proposed 85 years ago by Edwin Hubble now needs to be revised.

  9. 3D terahertz beam profiling

    DEFF Research Database (Denmark)

    Pedersen, Pernille Klarskov; Strikwerda, Andrew; Wang, Tianwu

    2013-01-01

    We present a characterization of THz beams generated in both a two-color air plasma and in a LiNbO3 crystal. Using a commercial THz camera, we record intensity images as a function of distance through the beam waist, from which we extract 2D beam profiles and visualize our measurements into 3D beam...
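
    Extracting a beam width from such camera intensity images is commonly done with the second-moment (D4σ) definition of ISO 11146. The sketch below applies it to a synthetic Gaussian beam as an assumed illustration; the exact profiling procedure used in the paper may differ.

```python
import numpy as np

def d4sigma_width(intensity):
    """D4-sigma beam width along x from a 2D intensity image:
    four times the intensity-weighted standard deviation of x."""
    I = np.asarray(intensity, dtype=float)
    total = I.sum()
    ys, xs = np.indices(I.shape)
    cx = (xs * I).sum() / total                  # intensity centroid
    var = (((xs - cx) ** 2) * I).sum() / total   # second central moment
    return 4.0 * np.sqrt(var)

# Synthetic Gaussian beam with sigma = 5 pixels, centered at (50, 50)
x = np.arange(101)
X, Y = np.meshgrid(x, x)
beam = np.exp(-(((X - 50) ** 2) + ((Y - 50) ** 2)) / (2 * 5.0 ** 2))
w = d4sigma_width(beam)   # close to 4 * sigma = 20 pixels
```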

  10. 3D Printing: Exploring Capabilities

    Science.gov (United States)

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  11. When Art Meets 3D

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The presentation of the vanguard work, My Dream3D, the innovative production by the China Disabled People's Performing Art Troupe (CDPPAT), directed by Joy Joosang Park, provided the film's domestic premiere at Beijing's Olympic Park on April 7. The show provided an intriguing insight not

  12. 3D Printing of Metals

    Directory of Open Access Journals (Sweden)

    Manoj Gupta

    2017-09-01

    Full Text Available The potential benefits that could be derived if the science and technology of 3D printing were to be established have been the crux behind monumental efforts by governments, in most countries, that invest billions of dollars to develop this manufacturing technology.[...

  13. Making Inexpensive 3-D Models

    Science.gov (United States)

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  15. Building 3D scenes from 2D image sequences

    Science.gov (United States)

    Cristea, Paul D.

    2006-05-01

    Sequences of 2D images, taken by a single moving video receptor, can be fused to generate a 3D representation. This dynamic stereopsis exists in birds and reptiles, whereas the static binocular stereopsis is common in mammals, including humans. Most multimedia computer vision systems for stereo image capture, transmission, processing, storage and retrieval are based on the concept of binocularity. As a consequence, their main goal is to acquire, conserve and enhance pairs of 2D images able to generate a 3D visual perception in a human observer. Stereo vision in birds is based on the fusion of images captured by each eye, with previously acquired and memorized images from the same eye. The process goes on simultaneously and conjointly for both eyes and generates an almost complete all-around visual field. As a consequence, the baseline distance is no longer fixed, as in the case of binocular 3D view, but adjustable in accordance with the distance to the object of main interest, allowing a controllable depth effect. Moreover, the synthesized 3D scene can have a better resolution than each individual 2D image in the sequence. Compression of 3D scenes can be achieved, and stereo transmissions with lower bandwidth requirements can be developed.
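
    The adjustable-baseline depth effect described above follows from the pinhole stereo relation Z = f·B/d: for a fixed depth Z, a larger baseline B produces a larger disparity d and hence finer depth resolution. A minimal sketch with illustrative (hypothetical) numbers:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo relation Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: distance between the
    two viewpoints; disparity_px: horizontal offset of the matched point.
    With a sequential (moving-camera) setup the baseline is adjustable,
    so it can be grown to resolve distant objects.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# f = 800 px, baseline 0.5 m, disparity 10 px (illustrative values)
depth = depth_from_disparity(800.0, 0.5, 10.0)
```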

  16. Priprava 3D modelov za 3D tisk

    OpenAIRE

    2015-01-01

    According to some experts, additive manufacturing (or 3D printing) will change the manufacturing industry, since every individual will be able to print an object of their choosing. This graduation thesis presents some of the additive manufacturing technologies. It then presents the production of a house scale model at 1:100, from modelling through to printing. Special attention is devoted to reworking the model so that it is suitable for printing, where an approach is developed for faster...

  17. Post processing of 3D models for 3D printing

    OpenAIRE

    2015-01-01

    According to the opinion of some experts, additive manufacturing (or 3D printing) will change the manufacturing industry, because any individual could print their own model according to his or her wishes. In this graduation thesis some of the additive manufacturing technologies are presented. Furthermore, the production of a house scale model at 1:100 is presented, starting from modeling through to printing. Special attention is given to postprocessing of the building model elements us...

  18. A Behaviour-Based Architecture for Mapless Navigation Using Vision

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Guzel

    2012-04-01

    Full Text Available Autonomous robots operating in an unknown and uncertain environment must be able to cope with dynamic changes to that environment. For a mobile robot in a cluttered environment to navigate successfully to a goal while avoiding obstacles is a challenging problem. This paper presents a new behaviour-based architecture design for mapless navigation. The architecture is composed of several modules and each module generates behaviours. A novel method, inspired by a visual homing strategy, is adapted to a monocular vision-based system to overcome goal-based navigation problems. A neural network-based obstacle avoidance strategy is designed using a 2-D scanning laser. To evaluate the performance of the proposed architecture, the system has been tested using Microsoft Robotics Studio (MRS), which is a very powerful 3D simulation environment. In addition, real experiments to guide a Pioneer 3-DX mobile robot, equipped with a pan-tilt-zoom camera, in a cluttered environment are presented. The analysis of the results allows us to validate the proposed behaviour-based navigation strategy.

  19. Object Recognition Using a 3D RFID System

    OpenAIRE

    Roh, Se-gon; Choi, Hyouk Ryeol

    2009-01-01

    Up to now, object recognition in robotics has typically been done by vision, ultrasonic sensors, laser range finders, etc. Recently, RFID has emerged as a promising technology that can strengthen object recognition. In this chapter, the 3D RFID system and the 3D tag are presented. The proposed RFID system can determine whether an object as well as other tags exists, and can also estimate the orientation and position of the object. This feature considerably reduces the dependence of the robot on o...

  20. Detection of 3D curved trajectories: The role of binocular disparity

    Directory of Open Access Journals (Sweden)

    Russell Stewart Pierce

    2013-02-01

    Full Text Available We examined the ability of observers to detect the 3D curvature of motion paths when binocular disparity and motion information were present. On each trial, two displays were observed through shutter-glasses. In one display, a sphere moved along a linear path in the horizontal and depth dimensions. In the other display, the sphere moved from the same starting position to the same ending position as in the linear path, but moved along an arc in depth. Observers were asked to indicate whether the first or second display simulated a curved trajectory. Adaptive staircases were used to derive the observers’ thresholds of curvature detection. In the first experiment, two independent variables were manipulated: viewing condition (binocular vs. monocular and type of curvature (concave vs. convex. In the second experiment, three independent variables were manipulated: viewing condition, type of curvature, and whether the motion direction was approaching or receding. In both experiments, detection thresholds were lower for binocular viewing conditions as compared to monocular viewing conditions. In addition, concave trajectories were easier to detect than convex trajectories. In the second experiment, the direction of motion did not significantly affect curvature detection. These results indicate the detection of curved motion paths from monocular information was improved when binocular information was present. The results also indicate the importance of the type of curvature, suggesting that the rate of change of disparity may be important in detecting curved trajectories.
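
    The adaptive-staircase procedure used to derive curvature-detection thresholds can be sketched generically. The sketch below uses a 2-down/1-up rule, one common transformed staircase; the paper's exact rule, step sizes, and stopping criterion are not specified here:

```python
def run_staircase(detects, start=1.0, step=0.5, n_down=2, n_reversals=8):
    """Generic 2-down/1-up transformed staircase (converges near the
    70.7%-correct point): the stimulus level (here, path curvature)
    decreases after n_down consecutive detections and increases after
    every miss. `detects(level)` simulates the observer's response.
    The rule and parameters are illustrative, not the paper's values."""
    level, streak, last_dir = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if detects(level):
            streak += 1
            if streak == n_down:
                streak = 0
                if last_dir == +1:          # direction change: record it
                    reversals.append(level)
                level, last_dir = max(level - step, step), -1
        else:
            streak = 0
            if last_dir == -1:
                reversals.append(level)
            level, last_dir = level + step, +1
    return sum(reversals) / len(reversals)  # threshold estimate
```

    With a deterministic observer who detects any curvature above 0.6, the staircase settles into reversals bracketing that boundary.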

  1. Improvements in clinical and functional vision and perceived visual disability after first and second eye cataract surgery

    OpenAIRE

    Elliott, D; Patla, A.; Bullimore, M.

    1997-01-01

    AIMS—To determine the improvements in clinical and functional vision and perceived visual disability after first and second eye cataract surgery.
METHODS—Clinical vision (monocular and binocular high and low contrast visual acuity, contrast sensitivity, and disability glare), functional vision (face identity and expression recognition, reading speed, word acuity, and mobility orientation), and perceived visual disability (Activities of Daily Vision Scale) were measured in 25 subjects before a...

  2. A trajectory and orientation reconstruction method for moving objects based on a moving monocular camera.

    Science.gov (United States)

    Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian

    2015-03-09

    We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition under which this method has a unique solution is provided. An extended application of the method is not only to achieve the reconstruction of the 3D trajectory, but also to capture the orientation of the moving object, which could not be obtained by PnP-problem methods due to lack of features. It is a breakthrough improvement that develops intersection measurement from the traditional "point intersection" to "trajectory intersection" in videometrics. The trajectory of the object point can be obtained using only linear equations, without any initial value or iteration; the orientation of an object under poor conditions can also be calculated. The condition required for the existence of a definite solution of this method is derived from equivalence relations among the orders of the moving-trajectory equations of the object, which specifies the applicable conditions of the method. Simulation and experimental results show that it not only applies to objects moving along a straight line, a conic, or another simple trajectory, but also provides good results for more complicated trajectories, making it widely applicable.
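
    The idea of recovering a moving point's trajectory from bearing-only observations with linear equations can be sketched for the simplest case, a point moving along a straight line. This is an illustrative least-squares construction under standard assumptions, not the paper's exact formulation:

```python
import numpy as np

def skew(d):
    # cross-product matrix [d]_x so that skew(d) @ v == np.cross(d, v)
    return np.array([[0.0, -d[2], d[1]],
                     [d[2], 0.0, -d[0]],
                     [-d[1], d[0], 0.0]])

def fit_linear_trajectory(times, centers, dirs):
    """Recover (x0, v) of a point moving as x(t) = x0 + v*t from
    bearing-only observations taken by a moving monocular camera.
    Frame i (camera center c_i, ray direction d_i) contributes the
    linear ray constraint  (x0 + v*t_i - c_i) x d_i = 0,
    giving a linear system in the six unknowns (x0, v)."""
    A, b = [], []
    for t, c, d in zip(times, centers, dirs):
        S = skew(np.asarray(d) / np.linalg.norm(d))
        A.append(np.hstack([S, t * S]))   # 3x6 block per frame
        b.append(S @ np.asarray(c))
    sol, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    return sol[:3], sol[3:]               # x0, v
```

    As the abstract notes, the solution exists and is unique only when the camera motion is general enough relative to the object's trajectory; a stationary camera, for instance, makes the system rank-deficient.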

  3. Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues.

    Science.gov (United States)

    Warren, Paul A; Rushton, Simon K

    2009-05-01

    We have recently suggested that the brain uses its sensitivity to optic flow in order to parse retinal motion into components arising due to self and object movement (e.g. Rushton, S. K., & Warren, P. A. (2005). Moving observers, 3D relative motion and the detection of object movement. Current Biology, 15, R542-R543). Here, we explore whether stereo disparity is necessary for flow parsing or whether other sources of depth information, which could theoretically constrain flow-field interpretation, are sufficient. Stationary observers viewed large field of view stimuli containing textured cubes, moving in a manner that was consistent with a complex observer movement through a stationary scene. Observers made speeded responses to report the perceived direction of movement of a probe object presented at different depths in the scene. Across conditions we varied the presence or absence of different binocular and monocular cues to depth order. In line with previous studies, results consistent with flow parsing (in terms of both perceived direction and response time) were found in the condition in which motion parallax and stereoscopic disparity were present. Observers were poorer at judging object movement when depth order was specified by parallax alone. However, as more monocular depth cues were added to the stimulus the results approached those found when the scene contained stereoscopic cues. We conclude that both monocular and binocular static depth information contribute to flow parsing. These findings are discussed in the context of potential architectures for a model of the flow parsing mechanism.
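
    The flow-parsing idea, subtracting the retinal-motion component attributable to self-movement before judging object motion, can be caricatured with a toy global-flow fit. This is purely illustrative; the visual system's mechanism and the authors' analysis are far richer than an affine fit:

```python
import numpy as np

def parse_flow(bg_pts, bg_flow, probe_pt, probe_flow):
    """Toy flow-parsing sketch: fit an affine global flow field to the
    background motion vectors, attribute that field to self-motion, and
    subtract its prediction at the probe location to estimate
    scene-relative object motion. bg_pts/bg_flow are (N,2) arrays."""
    n = len(bg_pts)
    X = np.hstack([bg_pts, np.ones((n, 1))])            # rows [x, y, 1]
    coef, *_ = np.linalg.lstsq(X, bg_flow, rcond=None)  # 3x2 affine fit
    self_flow = np.append(probe_pt, 1.0) @ coef         # predicted ego-flow
    return probe_flow - self_flow                       # residual = object motion
```

    For a radial expansion field (consistent with forward self-motion), the residual at the probe recovers the object's scene-relative motion exactly, since expansion is affine.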

  4. Integration of monocular motion signals and the analysis of interocular velocity differences for the perception of motion-in-depth.

    Science.gov (United States)

    Shioiri, Satoshi; Kakehi, Daisuke; Tashiro, Tomoyoshi; Yaguchi, Hirohisa

    2009-12-09

    We investigated how the mechanism for perceiving motion-in-depth based on interocular velocity differences (IOVDs) integrates signals from the motion spatial frequency (SF) channels. We focused on the question whether this integration is implemented before or after the comparison of the velocity signals from the two eyes. We measured spatial frequency selectivity of the MAE of motion in depth (3D MAE). The 3D MAE showed little spatial frequency selectivity, whereas the 2D lateral MAE showed clear spatial frequency selectivity in the same condition. This indicates that the outputs of the monocular motion SF channels are combined before analyzing the IOVD. The presumption was confirmed by the disappearance of the 3D MAE after exposure to superimposed gratings with different spatial frequencies moving in opposite directions. The direction of the 2D MAE depended on the test spatial frequency in the same condition. These results suggest that the IOVD is calculated at a relatively later stage of the motion analysis, and that some monocular information is preserved even after the integration of the motion SF channel outputs.
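
    The IOVD geometry the paper builds on can be summarized numerically: for small angles, the rate of change of binocular disparity equals the interocular velocity difference, which maps to motion-in-depth as sketched below. This is a textbook small-angle approximation with an assumed interpupillary distance, not the paper's model:

```python
def motion_in_depth_iovd(v_left, v_right, viewing_distance, ipd=0.065):
    """Small-angle IOVD sketch: the rate of change of binocular disparity
    (rad/s) equals the interocular velocity difference v_left - v_right,
    and for an object near fixation distance D (metres) this maps to
        |dZ/dt| ~ (D**2 / ipd) * |d(disparity)/dt|.
    The 0.065 m interpupillary distance is an assumed typical value;
    the sign of the result depends on the disparity convention."""
    d_disparity = v_left - v_right
    return (viewing_distance ** 2 / ipd) * d_disparity
```

    Opposite horizontal velocities in the two eyes (the classic IOVD stimulus) thus produce a nonzero motion-in-depth estimate even with zero mean lateral motion.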

  5. 3D Printable Graphene Composite.

    Science.gov (United States)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-08

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material, which could potentially initiate another new material age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. A new technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate an additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial to building up minimal thermal stress during the printing process.

  6. Forensic 3D Scene Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  7. 3D Printed Robotic Hand

    Science.gov (United States)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand, without the actuators, was $167, significantly lower than other robotic hands, since those have more complex assembly processes.

  8. Medical 3D thermography system

    OpenAIRE

    GRUBIŠIĆ, IVAN

    2011-01-01

    Infrared (IR) thermography determines the surface temperature of an object or human body using a thermal IR measurement camera. It is an imaging technology which is contactless and completely non-invasive. These properties make IR thermography a useful method of analysis that is used in various industrial applications to detect, monitor and predict irregularities in many fields, from engineering to medical and biological observations. This paper presents a conceptual model of a Medical 3D Thermo...

  9. 3-D MAPPING TECHNOLOGIES FOR HIGH LEVEL WASTE TANKS

    Energy Technology Data Exchange (ETDEWEB)

    Marzolf, A.; Folsom, M.

    2010-08-31

    This research investigated four techniques that could be applicable for mapping of solids remaining in radioactive waste tanks at the Savannah River Site: stereo vision, LIDAR, flash LIDAR, and Structure from Motion (SfM). Stereo vision is the least appropriate technique for the solids mapping application. Although the equipment cost is low and repackaging would be fairly simple, the algorithms to create a 3D image from stereo vision would require significant further development and may not even be applicable since stereo vision works by finding disparity in feature point locations from the images taken by the cameras. When minimal variation in visual texture exists for an area of interest, it becomes difficult for the software to detect correspondences for that object. SfM appears to be appropriate for solids mapping in waste tanks. However, equipment development would be required for positioning and movement of the camera in the tank space to enable capturing a sequence of images of the scene. Since SfM requires the identification of distinctive features and associates those features to their corresponding instantiations in the other image frames, mockup testing would be required to determine the applicability of SfM technology for mapping of waste in tanks. There may be too few features to track between image frame sequences to employ the SfM technology since uniform appearance may exist when viewing the remaining solids in the interior of the waste tanks. Although scanning LIDAR appears to be an adequate solution, the expense of the equipment ($80,000-$120,000) and the need for further development to allow tank deployment may prohibit utilizing this technology. The development would include repackaging of equipment to permit deployment through the 4-inch access ports and to keep the equipment relatively uncontaminated to allow use in additional tanks. 3D flash LIDAR has a number of advantages over stereo vision, scanning LIDAR, and SfM, including full frame

  10. Dynamic Vision for Control

    Science.gov (United States)

    2009-02-05

    motion. Intl. J. of Computer Vision, 68(1):7-25, 2006. [59] R. Vidal, S. Soatto, and S. Sastry. An algebraic geometric approach to the identification...D. Durbin, 10S-3D, INC.). Patents: none during the period covered by this grant. AFRL Point of Contact: Prof. William M. McEneaney, Program

  11. Integration of Robotics and 3D Visualization to Modernize the Expeditionary Warfare Demonstrator (EWD)

    Science.gov (United States)

    2009-09-01

    PROJECTION. 1. Digital Cinematography ...details ongoing research on multiple cinematography upgrades recommended for X3D that may benefit this work. 1. Digital Cinematography: At this year's... cinematography for EWD scenario playbacks, the Open Computer Vision (OpenCV) libraries were used to modify movie files produced in X3D. Appendix D contains

  12. Multi-view 3D human pose recovery in complex environment

    NARCIS (Netherlands)

    Hofmann, K.M.

    2011-01-01

    The recovery of 3D human pose is an important problem in computer vision with many potential applications in human computer interfaces, motion analysis (e.g. sports, medical) and surveillance. 3D human pose also provides informative, viewinvariant features for a subsequent activity recognition step.

  14. Talk to the Hand: Generating a 3D Print from Photographs

    OpenAIRE

    Aboufadel, Edward; Krawczyk, Sylvanna V.; Sherman-Bennett, Melissa

    2015-01-01

    This manuscript presents a linear algebra-based technique that only requires two unique photographs from a digital camera to mathematically construct a 3D surface representation which can then be 3D printed. Basic computer vision theory and manufacturing principles are also briefly discussed.

  15. 3D silicon strip detectors

    Energy Technology Data Exchange (ETDEWEB)

    Parzefall, Ulrich [Physikalisches Institut, Universitaet Freiburg, Hermann-Herder-Str. 3, D-79104 Freiburg (Germany)], E-mail: ulrich.parzefall@physik.uni-freiburg.de; Bates, Richard [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Boscardin, Maurizio [FBK-irst, Center for Materials and Microsystems, via Sommarive 18, 38050 Povo di Trento (Italy); Dalla Betta, Gian-Franco [INFN and Universita' di Trento, via Sommarive 14, 38050 Povo di Trento (Italy); Eckert, Simon [Physikalisches Institut, Universitaet Freiburg, Hermann-Herder-Str. 3, D-79104 Freiburg (Germany); Eklund, Lars; Fleta, Celeste [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Jakobs, Karl; Kuehn, Susanne [Physikalisches Institut, Universitaet Freiburg, Hermann-Herder-Str. 3, D-79104 Freiburg (Germany); Lozano, Manuel [Instituto de Microelectronica de Barcelona, IMB-CNM, CSIC, Barcelona (Spain); Pahn, Gregor [Physikalisches Institut, Universitaet Freiburg, Hermann-Herder-Str. 3, D-79104 Freiburg (Germany); Parkes, Chris [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Pellegrini, Giulio [Instituto de Microelectronica de Barcelona, IMB-CNM, CSIC, Barcelona (Spain); Pennicard, David [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Piemonte, Claudio; Ronchin, Sabina [FBK-irst, Center for Materials and Microsystems, via Sommarive 18, 38050 Povo di Trento (Italy); Szumlak, Tomasz [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Zoboli, Andrea [INFN and Universita' di Trento, via Sommarive 14, 38050 Povo di Trento (Italy); Zorzi, Nicola [FBK-irst, Center for Materials and Microsystems, via Sommarive 18, 38050 Povo di Trento (Italy)

    2009-06-01

    While the Large Hadron Collider (LHC) at CERN has started operation in autumn 2008, plans for a luminosity upgrade to the Super-LHC (sLHC) have already been developed for several years. This projected luminosity increase by an order of magnitude gives rise to a challenging radiation environment for tracking detectors at the LHC experiments. Significant improvements in radiation hardness are required with respect to the LHC. Using a strawman layout for the new tracker of the ATLAS experiment as an example, silicon strip detectors (SSDs) with short strips of 2-3 cm length are foreseen to cover the region from 28 to 60 cm distance to the beam. These SSD will be exposed to radiation levels up to 10{sup 15}N{sub eq}/cm{sup 2}, which makes radiation resistance a major concern for the upgraded ATLAS tracker. Several approaches to increasing the radiation hardness of silicon detectors exist. In this article, it is proposed to combine the radiation hard 3D-design originally conceived for pixel-style applications with the benefits of the established planar technology for strip detectors by using SSDs that have regularly spaced doped columns extending into the silicon bulk under the detector strips. The first 3D SSDs to become available for testing were made in the Single Type Column (STC) design, a technological simplification of the original 3D design. With such 3D SSDs, a small number of prototype sLHC detector modules with LHC-speed front-end electronics as used in the semiconductor tracking systems of present LHC experiments were built. Modules were tested before and after irradiation to fluences of 10{sup 15}N{sub eq}/cm{sup 2}. The tests were performed with three systems: a highly focused IR-laser with 5{mu}m spot size to make position-resolved scans of the charge collection efficiency, an Sr{sup 90}{beta}-source set-up to measure the signal levels for a minimum ionizing particle (MIP), and a beam test with 180 GeV pions at CERN. 
This article gives a brief overview of

  16. View Based Methods can achieve Bayes-Optimal 3D Recognition

    CERN Document Server

    Breuel, Thomas M

    2007-01-01

    This paper proves that visual object recognition systems using only 2D Euclidean similarity measurements to compare object views against previously seen views can achieve the same recognition performance as observers having access to all coordinate information and capable of using arbitrary 3D models internally. Furthermore, it demonstrates that such systems do not require more training views than Bayes-optimal 3D model-based systems. For building computer vision systems, these results imply that using view-based or appearance-based techniques with carefully constructed combination-of-evidence mechanisms may not be at a disadvantage relative to 3D model-based systems. For computational approaches to human vision, they show that it is impossible to distinguish view-based and 3D model-based techniques for 3D object recognition solely by comparing the performance achievable by human and 3D model-based systems.
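
    A minimal instance of the view-based scheme the paper analyses, comparing a query against stored 2D views using Euclidean distance only, might look like the following nearest-neighbor sketch (illustrative; real appearance-based systems add normalization and evidence combination):

```python
import numpy as np

def classify_view(query, view_library):
    """View-based recognition sketch: compare a 2D query view against
    stored views using only Euclidean distance in image/feature space.
    view_library maps object labels to lists of stored view vectors;
    the label of the nearest stored view wins."""
    best_label, best_dist = None, np.inf
    for label, views in view_library.items():
        for v in views:
            d = np.linalg.norm(query - v)
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label
```

    The paper's point is that, with enough stored views and a suitable combination of evidence, this 2D-only comparison is not inherently weaker than matching against an internal 3D model.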

  17. Mobile Robot Simultaneous Localization and Mapping Based on a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Songmin Jia

    2016-01-01

    Full Text Available This paper proposes a novel monocular vision-based SLAM (Simultaneous Localization and Mapping) algorithm for mobile robots. In the proposed method, the tracking and mapping procedures are split into two separate tasks performed in parallel threads. In the tracking thread, a ground-feature-based pose estimation method is employed to initialize the algorithm for the constrained motion of the mobile robot, and an initial map is built by triangulating the matched features for the subsequent tracking procedure. In the mapping thread, an epipolar search procedure is used to find matching features, and a homography-based outlier rejection method is adopted to reject mismatched features. The indoor experimental results demonstrate that the proposed algorithm performs well in map building and verify its feasibility and effectiveness.
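
    The homography-based outlier rejection step mentioned above can be sketched with a small DLT-plus-RANSAC routine. This is an illustrative reimplementation under standard assumptions (matched features on a dominant plane), not the authors' code; the threshold and iteration count are arbitrary:

```python
import numpy as np

def homography_dlt(src, dst):
    # Direct Linear Transform: H (3x3) with dst ~ H @ src, needs >= 4 matches
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)           # null-space vector as H

def reproj_error(H, src, dst):
    p = np.hstack([src, np.ones((len(src), 1))]) @ H.T
    p = p[:, :2] / p[:, 2:3]              # dehomogenize
    return np.linalg.norm(p - dst, axis=1)

def ransac_homography(src, dst, thresh=2.0, iters=200, rng=None):
    """Flag matched features that disagree with the dominant-plane
    homography as mismatches: minimal-sample RANSAC over 4-point DLT
    fits, returning a boolean inlier mask."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        inliers = reproj_error(H, src, dst) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

    Matches whose reprojection error under the best homography exceeds the threshold are rejected before triangulation.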

  18. Short-term monocular deprivation strengthens the patched eye's contribution to binocular combination.

    Science.gov (United States)

    Zhou, Jiawei; Clavagnier, Simon; Hess, Robert F

    2013-04-18

    Binocularity is a fundamental property of primate vision. Ocular dominance describes the perceptual weight given to the inputs from the two eyes in their binocular combination. There is a distribution of sensory dominance within the normal binocular population with most subjects having balanced inputs while some are dominated by the left eye and some by the right eye. Using short-term monocular deprivation, the sensory dominance can be modulated as, under these conditions, the patched eye's contribution is strengthened. We address two questions: Is this strengthening a general effect such that it is seen for different types of sensory processing? And is the strengthening specific to pattern deprivation, or does it also occur for light deprivation? Our results show that the strengthening effect is a general finding involving a number of sensory functions, and it occurs as a result of both pattern and light deprivation.

  19. Spatial Testing of Dynamic Process and Analysis Technique of Intelligent AC Contactors Based on Monocular Vision Technology

    Institute of Scientific and Technical Information of China (English)

    陈德为; 庄煜祺; 张培铭; 严俊奇

    2014-01-01

    Based on an intelligent AC contactor control system and an auxiliary plane-mirror imaging system, a dynamic-process testing method for intelligent AC contactors is proposed, in which sequence images of the contactor's dynamic process are collected with a monocular high-speed camera. By detecting and identifying feature points on the moving parts of the AC contactor in the image sequence, and dynamically tracking the feature points' position changes, the action mechanism of the intelligent AC contactor during its dynamic process is comprehensively tested and analyzed. The measuring technique and the analysis method are of far-reaching significance for the intelligent control and prototype optimization design of AC contactors.

  20. 3D Wide FOV Scanning Measurement System Based on Multiline Structured-Light Sensors

    OpenAIRE

    2014-01-01

    Structured-light three-dimensional (3D) vision measurement is currently one of the most common approaches to obtaining 3D surface data. However, the existing structured-light scanning measurement systems are primarily constructed on the basis of a single sensor, which inevitably generates three obvious problems: limited measurement range, blind measurement areas, and low scanning efficiency. To solve these problems, we developed a novel 3D wide-FOV scanning measurement system which adopted two mult...

  1. 3D laser imaging for concealed object identification

    Science.gov (United States)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging explores the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance, and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and have low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show the global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images, and we analyse the different parameters of the identification process such as resolution, camouflage scenario, noise impact and lacunarity degree.

  2. Image based 3D city modeling : Comparative study

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation. The first approach is sketch-based modeling, the second is procedural grammar-based modeling, the third is close-range photogrammetry-based modeling, and the fourth is based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches respectively. These packages have different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no such complete comparative study is available for creating complete 3D city models from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches. The comparative study is mainly based on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, and gives a brief introduction to, and the strengths and weaknesses of, these four image-based techniques. Some personal comments are also given on what can and cannot be done with each package. Finally, the study concludes that each package has some advantages and limitations, and the choice of software depends on the user's requirements for the 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For Large city

  3. 3D Image Sensor based on Parallax Motion

    Directory of Open Access Journals (Sweden)

    Barna Reskó

    2007-12-01

    Full Text Available For humans and visual animals, vision is the primary and most sophisticated perceptual modality for getting information about the surrounding world. Depth perception is a part of vision that allows the distance to an object to be determined accurately, which makes it an important visual task. Humans have two eyes with overlapping visual fields that enable stereo vision and thus space perception. Some birds, however, do not have overlapping visual fields, and compensate for this lack by moving their heads, which in turn makes space perception possible using motion parallax as a visual cue. This paper presents a solution using an opto-mechanical filter that was inspired by the way birds observe their environment. The filtering is done using two different approaches: using motion blur during motion parallax, and using the optical flow algorithm. The two methods have different advantages and drawbacks, which will be discussed in the paper. The proposed system can be used in robotics for 3D space perception.
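
    The motion-parallax cue such bird-inspired systems exploit reduces, for a purely lateral camera translation and a pinhole model, to a one-line depth estimate. This is an idealized textbook relation, not the paper's opto-mechanical method:

```python
def depth_from_parallax(focal_px, cam_speed, image_velocity):
    """Motion-parallax depth sketch: for a purely lateral camera
    translation at speed T (m/s), a static point's image velocity is
    u = f * T / Z pixels/s (pinhole model, focal length f in pixels),
    so Z = f * T / u. Units must be consistent; nearby points move
    faster across the image than distant ones."""
    return focal_px * cam_speed / image_velocity
```

    For example, with f = 500 px and a 0.1 m/s translation, a point flowing at 5 px/s lies at 10 m, while one flowing at 50 px/s lies at 1 m.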

  4. Separating monocular and binocular neural mechanisms mediating chromatic contextual interactions.

    Science.gov (United States)

    D'Antona, Anthony D; Christiansen, Jens H; Shevell, Steven K

    2014-04-17

    When seen in isolation, a light that varies in chromaticity over time is perceived to oscillate in color. Perception of that same time-varying light may be altered by a surrounding light that is also temporally varying in chromaticity. The neural mechanisms that mediate these contextual interactions are the focus of this article. Observers viewed a central test stimulus that varied in chromaticity over time within a larger surround that also varied in chromaticity at the same temporal frequency. Center and surround were presented either to the same eye (monocular condition) or to opposite eyes (dichoptic condition) at the same frequency (3.125, 6.25, or 9.375 Hz). Relative phase between center and surround modulation was varied. In both the monocular and dichoptic conditions, the perceived modulation depth of the central light depended on the relative phase of the surround. A simple model implementing a linear combination of center and surround modulation fit the measurements well. At the lowest temporal frequency (3.125 Hz), the surround's influence was virtually identical for monocular and dichoptic conditions, suggesting that at this frequency, the surround's influence is mediated primarily by a binocular neural mechanism. At higher frequencies, the surround's influence was greater for the monocular condition than for the dichoptic condition, and this difference increased with temporal frequency. Our findings show that two separate neural mechanisms mediate chromatic contextual interactions: one binocular and dominant at lower temporal frequencies and the other monocular and dominant at higher frequencies (6-10 Hz).
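
    The simple linear model that fit the measurements can be expressed as a phasor sum: the perceived central modulation is the amplitude of the center sinusoid plus a weighted, phase-shifted surround sinusoid. The weight `w` is a free parameter fit per condition; this sketches the model class, not the authors' fitted values:

```python
import numpy as np

def perceived_depth(center_amp, surround_amp, phase, w):
    """Linear center-surround combination as phasors: two sinusoids at
    the same temporal frequency sum to another sinusoid, so the perceived
    modulation depth of the center is
        |c + w * s * exp(i * phase)|,
    where phase is the surround's phase relative to the center and w is
    the (signed) surround weight."""
    return abs(center_amp + w * surround_amp * np.exp(1j * phase))
```

    A suppressive surround (negative w) reduces the perceived depth for in-phase modulation and enhances it in antiphase, reproducing the phase dependence the experiments measured.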

  5. The effect of contrast on monocular versus binocular reading performance.

    Science.gov (United States)

    Johansson, Jan; Pansell, Tony; Ygge, Jan; Seimyr, Gustaf Öqvist

    2014-05-14

    The binocular advantage in reading performance is typically small. On the other hand, research shows binocular reading to be remarkably robust to degraded stimulus properties. We hypothesized that this robustness may stem from an increasing binocular contribution. The main objective was to compare monocular and binocular performance at different stimulus contrasts and assess the level of binocular superiority. A secondary objective was to assess any asymmetry in performance related to ocular dominance. In a balanced repeated-measures experiment, 18 subjects read texts at three levels of contrast monocularly and binocularly while their eye movements were recorded. The binocular advantage increased with reduced contrast, producing 7% slower monocular reading at 40% contrast, 9% slower at 20% contrast, and 21% slower at 10% contrast. A statistically significant interaction effect was found in fixation duration, displaying a more adverse effect in the monocular condition at the lowest contrast. No significant effects of ocular dominance were observed. The outcome suggests that binocularity contributes increasingly to reading performance as stimulus contrast decreases. The strongest difference between monocular and binocular performance was due to fixation duration. The findings suggest a clinical point: it may be necessary to consider tests at different contrast levels when estimating reading performance. © 2014 ARVO.

  6. Interactive 3D Mars Visualization

    Science.gov (United States)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, significantly increasing the execution of science activities. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The toolset currently includes a point-of-interest selector and a ruler tool for displaying the positions of, and the distance between, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  7. Wireless 3D Chocolate Printer

    Directory of Open Access Journals (Sweden)

    FROILAN G. DESTREZA

    2014-02-01

    Full Text Available This study is for the BSHRM students of Batangas State University (BatStateU ARASOF), for the researchers believe that the Wireless 3D Chocolate Printer would be helpful in their degree program, especially in making creative, artistic, personalized and decorative chocolate designs. The researchers used the Prototyping model as the procedural method for the successful development and implementation of the hardware and software. This method has five phases, which are the following: quick plan, quick design, prototype construction, delivery and feedback, and communication. This study was evaluated by the BSHRM students, and the respondents rated the software and hardware application as excellent in terms of Accuracy, Effectiveness, Efficiency, Maintainability, Reliability and User-friendliness. Also, the overall level of acceptability of the design project as evaluated by the respondents is excellent. With regard to the observation about the best raw material to use in 3D printing: the chocolate is good to use, as the printed material is only slightly distorted, durable and very easy to prepare; the icing is also good to use, as the printed material is not distorted and is very durable but consumes time to prepare; the flour is not good, as the printed material is distorted and not durable, though it is easy to prepare. The computed economic viability of the 3D printer, in terms of ROI, is 37.14%. The recommendations of the researchers for the design project are as follows: adding a cooling system so that the raw material will be more durable, developing a more simplified version, and improving the extrusion process so that the user does not need to stop the printing process just to replace the empty syringe with a new one.

  8. Robust 3D reconstruction system for human jaw modeling

    Science.gov (United States)

    Yamany, Sameh M.; Farag, Aly A.; Tazman, David; Farman, Allan G.

    1999-03-01

    This paper presents a model-based vision system for dentistry that will replace traditional approaches used in diagnosis, treatment planning and surgical simulation. Dentistry requires accurate 3D representation of the teeth and jaws for many diagnostic and treatment purposes. For example, orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. In order to evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intraoral measurements, fabrication of plastic models, photographs and radiographs, a process which is both costly and time consuming. In this paper an integrated system has been developed to record the patient's occlusion using computer vision. Data is acquired with an intraoral video camera. A modified shape from shading (SFS) technique, using perspective projection and camera calibration, is used to extract accurate 3D information from a sequence of 2D images of the jaw. A new technique for 3D data registration, using a Grid Closest Point transform and genetic algorithms, is used to register the SFS output. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototype machine.
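
    The Grid Closest Point (GCP) transform mentioned above accelerates registration by precomputing, for each cell of a coarse voxel grid over the model, the index of the nearest model point, so every closest-point query during registration becomes a constant-time lookup. A minimal sketch of that idea (brute-force grid fill, illustrative parameters, not the authors' implementation):

```python
import numpy as np

def build_gcp_grid(points, cell, bounds_min, dims):
    """Precompute, for every cell of a coarse 3D grid, the index of the
    nearest model point. Brute force for clarity; a real system would
    fill the grid with a distance transform."""
    centers = np.stack(np.meshgrid(
        *[bounds_min[i] + cell * (np.arange(dims[i]) + 0.5) for i in range(3)],
        indexing="ij"), axis=-1).reshape(-1, 3)
    d2 = ((centers[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1).reshape(dims)

def closest_point_index(grid, cell, bounds_min, q):
    """O(1) closest-point query: just index the precomputed grid."""
    idx = np.clip(((q - bounds_min) / cell).astype(int), 0, np.array(grid.shape) - 1)
    return grid[tuple(idx)]

# Toy model: two points spanning the unit cube, 2x2x2 grid of 0.5-unit cells.
model = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
grid = build_gcp_grid(model, 0.5, np.zeros(3), (2, 2, 2))
```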

  9. 3D scene reconstruction: why, when, and how?

    Science.gov (United States)

    McBride, Jonah C.; Snorrason, Magnus S.; Goodsell, Thomas R.; Eaton, Ross S.; Stevens, Mark R.

    2004-09-01

    Mobile robot designers frequently look to computer vision to solve navigation, obstacle avoidance, and object detection problems. Potential solutions using low-cost video cameras are particularly alluring. Recent results in 3D scene reconstruction from a single moving camera seem particularly relevant, but robot designers who attempt to use such 3D techniques have uncovered a variety of practical concerns. We present lessons learned from developing a single-camera 3D scene reconstruction system that provides both a real-time camera motion estimate and a rough model of major 3D structures in the robot's vicinity. Our objective is to use the motion estimate to supplement GPS (indoors in particular) and to use the model to provide guidance for further vision processing (look for signs on walls, obstacles on the ground, etc.). The computational geometry involved is closely related to traditional two-camera stereo; however, a number of degenerate cases exist. We also demonstrate how structure from motion (SfM) can be used to improve the performance of two specific robot navigation tasks.
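
    The two-view geometry underlying single-moving-camera reconstruction reduces, per tracked feature, to triangulation from two projection matrices. A minimal sketch of the textbook linear (DLT) triangulation step, using synthetic cameras rather than the system's estimated ones:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are matching normalized
    pixel coordinates (u, v). The homogeneous solution is the right
    singular vector of A with the smallest singular value."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenize

# Two axis-aligned cameras one unit apart observing the point (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0])
x1 = P1 @ np.append(X_true, 1.0)
x2 = P2 @ np.append(X_true, 1.0)
pt = triangulate_point(P1, P2, x1[:2] / x1[2], x2[:2] / x2[2])
```

The degenerate cases the abstract warns about show up here as a rank-deficient A, e.g. when the camera translates along the viewing ray of the feature.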

  10. A 3D surface imaging system for assessing human obesity

    Science.gov (United States)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.
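
    The stereo vision measurement at the core of such a scanner follows the pinhole relation Z = f·B/d: depth equals focal length (in pixels) times baseline, divided by disparity. A minimal sketch with illustrative numbers, not this system's calibration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d. Larger disparity means a
    nearer surface point; zero disparity would place it at infinity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# 1000 px focal length and a 0.1 m baseline: 50 px of disparity puts the
# surface at 2 m, a plausible working distance for a body scanner.
z = depth_from_disparity(1000.0, 0.1, 50.0)
```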

  11. Virtual 3-D Facial Reconstruction

    Directory of Open Access Journals (Sweden)

    Martin Paul Evison

    2000-06-01

    Full Text Available Facial reconstructions in archaeology allow empathy with people who lived in the past and enjoy considerable popularity with the public. It is a common misconception that facial reconstruction will produce an exact likeness; a resemblance is the best that can be hoped for. Research at Sheffield University is aimed at the development of a computer system for facial reconstruction that will be accurate, rapid, repeatable, accessible and flexible. This research is described and prototypical 3-D facial reconstructions are presented. Interpolation models simulating obesity, ageing and ethnic affiliation are also described. Some strengths and weaknesses in the models, and their potential for application in archaeology are discussed.

  12. How 3-D Movies Work

    Institute of Scientific and Technical Information of China (English)

    吕铁雄

    2011-01-01

    Difficulty: ★★★★☆ Word count: 450 Suggested reading time: 8 minutes. Most people see out of two eyes. This is a basic fact of humanity, but it's what makes possible the illusion of depth that 3-D movies create. Human eyes are spaced about two inches apart, meaning that each eye gives the brain a slightly different perspective on the same object. The brain then uses this variance to quickly determine an object's distance.

  13. Using Stereoscopic 3D Technologies for the Diagnosis and Treatment of Amblyopia in Children

    CERN Document Server

    Gargantini, Angelo

    2011-01-01

    The 3D4Amb project aims at developing a system based on stereoscopic 3D technology, like NVIDIA 3D Vision, for the diagnosis and treatment of amblyopia in young children. It exploits active shutter technology to provide binocular vision, i.e. to show different images to the amblyopic (or lazy) eye and the normal eye. It would allow easy diagnosis of amblyopia and its treatment by means of interactive games or other entertainment activities. It should not suffer from the compliance problems of the classical treatment, it is suitable for domestic use, and it could at least partially substitute occlusion or patching of the normal eye.

  14. Positional Awareness Map 3D (PAM3D)

    Science.gov (United States)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high-resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  15. Depth cues versus the simplicity principle in 3D shape perception.

    Science.gov (United States)

    Li, Yunfeng; Pizlo, Zygmunt

    2011-10-01

    Two experiments were performed to explore the mechanisms of human 3D shape perception. In Experiment 1, the subjects' performance in a shape constancy task in the presence of several cues (edges, binocular disparity, shading and texture) was tested. The results show that edges and binocular disparity, but not shading or texture, are important in 3D shape perception. Experiment 2 tested the effect of several simplicity constraints, such as symmetry and planarity on subjects' performance in a shape constancy task. The 3D shapes were represented by edges or vertices only. The results show that performance with or without binocular disparity is at chance level, unless the 3D shape is symmetric and/or its faces are planar. In both experiments, there was a correlation between the subjects' performance with and without binocular disparity. Our study suggests that simplicity constraints, not depth cues, play the primary role in both monocular and binocular 3D shape perception. These results are consistent with our computational model of 3D shape recovery. Copyright © 2011 Cognitive Science Society, Inc.

  16. Efficient and high speed depth-based 2D to 3D video conversion

    Science.gov (United States)

    Somaiya, Amisha Himanshu; Kulkarni, Ramesh K.

    2013-09-01

    Stereoscopic video is the new era in video viewing and has wide applications such as medicine, satellite imaging and 3D television. Such stereo content can be generated directly using S3D cameras. However, this approach requires an expensive setup, and hence converting monoscopic content to S3D becomes a viable approach. This paper proposes a depth-based algorithm for monoscopic-to-stereoscopic video conversion that uses the y-axis coordinates of the bottom-most pixels of foreground objects. The code can be used for arbitrary videos without prior database training. It does not face the limitations of single monocular depth cues, nor does it combine depth cues, thus consuming less processing time without affecting the quality of the 3D video output. The algorithm, though not real-time, is faster than other available 2D-to-3D video conversion techniques by an average ratio of 1:8 to 1:20, essentially qualifying as high-speed. It is an automatic conversion scheme, hence it directly gives the 3D video output without human intervention, and with the above-mentioned features it becomes an ideal choice for efficient monoscopic-to-stereoscopic video conversion.
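
    The depth cue described, the image row (y-coordinate) of each foreground object's bottom-most pixel, can be sketched as follows; the linear scaling, the 8-bit depth range, and the background gradient are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def depth_map_from_ground_contact(height, width, object_masks):
    """Assign each foreground object a single depth value from the image
    row of its bottom-most pixel: an object whose base is lower in the
    frame is assumed nearer (larger 8-bit value = nearer). The background
    gets a simple top-to-bottom gradient (far at the top, near at the
    bottom)."""
    depth = np.tile(np.linspace(0, 255, height)[:, None], (1, width)).astype(np.uint8)
    for mask in object_masks:
        ys, _ = np.nonzero(mask)
        bottom_y = ys.max()                         # lowest ground-contact row
        depth[mask] = int(255 * bottom_y / (height - 1))
    return depth

# Toy 4x4 frame with two object masks at different heights.
h, w = 4, 4
m1 = np.zeros((h, w), bool); m1[2:4, 0:2] = True   # base at row 3: near
m2 = np.zeros((h, w), bool); m2[0:2, 2:4] = True   # base at row 1: far
dm = depth_map_from_ground_contact(h, w, [m1, m2])
```

The resulting depth map would then drive the pixel shifts that synthesize the second stereo view.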

  17. Hazard detection with a monocular bioptic telescope.

    Science.gov (United States)

    Doherty, Amy L; Peli, Eli; Luo, Gang

    2015-09-01

    The safety of bioptic telescopes for driving remains controversial. The ring scotoma, an area of the visual field invisible to the telescope eye due to the telescope magnification, has been the main cause of concern. This study evaluates whether bioptic users can use the fellow eye to detect hazards in driving videos that fall in the ring scotoma area. Twelve visually impaired bioptic users watched a series of driving hazard perception training videos and responded as soon as they detected a hazard while reading aloud letters presented on the screen. The letters were placed such that when reading them through the telescope the hazard fell in the ring scotoma area. Four conditions were tested: no bioptic and no reading, reading without the bioptic, reading with a bioptic that did not occlude the fellow eye (non-occluding bioptic), and reading with a bioptic that partially occluded the fellow eye. Eight normally sighted subjects performed the same task with the partially occluding bioptic, detecting lateral hazards (blocked by the device scotoma) and vertical hazards (outside the scotoma) to further determine the cause-and-effect relationship between hazard detection and the fellow eye. There were significant differences in performance between conditions: 83% of hazards were detected with no reading task, dropping to 67% in the reading task with no bioptic, to 50% while reading with the non-occluding bioptic, and to 34% while reading with the partially occluding bioptic. For normally sighted subjects, detection of vertical hazards (53%) was significantly higher than of lateral hazards (38%) with the partially occluding bioptic. Detection of driving hazards is impaired by the addition of a secondary reading-like task. Detection is further impaired when reading through a monocular telescope. The effect of the partially occluding bioptic supports the role of the non-occluded fellow eye in compensating for the ring scotoma. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  18. 3D Printable Graphene Composite

    Science.gov (United States)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human history, both the Iron Age and the Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material, one that could potentially initialize another new material age. However, conventional processing methods fail to link graphene, even when exploited to its full extent, to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate an additional stream to push the graphene revolution into a new phase. Here we demonstrate, for the first time, that a graphene composite with a graphene loading up to 5.6 wt% can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C⁻¹ from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress minute during the printing process.

  19. 3D medical thermography device

    Science.gov (United States)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real-time. The data is acquired in motion, thus providing multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary due to a variety of factors such as angle of incidence, distance between the device and the subject, and environmental sensor data or other factors influencing the confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.

  20. 3D Printed Bionic Ears

    Science.gov (United States)

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097